
Re: HFR data via THREDDS



On 3/4/2010 9:24 PM, Rich Signell wrote:
John,

Paul and I are trying to figure out why their TDS 4.1 server seems to
die when ncWMS is turned on.  They have joinExisting aggregations with
16,000 files in them, with "recheckEvery" set to 20 min.  ncWMS points
to these aggregation OPeNDAP URLs, and is set to refresh every 30 min.
What happens if you leave ncWMS out of it? E.g., do an OPeNDAP request similar to what ncWMS probably does.
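For instance (guessing the exact endpoint from the catalog entry Paul posted below, so treat the path as an assumption), a bare OPeNDAP metadata request against the aggregation would be:

  http://hfrnet.ucsd.edu:8080/thredds/dodsC/HFRNet/USWC/500m/hourly/RTV.dds

If that alone is slow right after a recheck interval expires, the problem is upstream of ncWMS.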
Paul is thinking that maybe it takes more than 30 min to index all
those 16,000 files, and therefore ncWMS never gets to complete.

But for joinExisting aggregations, the metadata is cached so that only
the first time is slow, right?  And even if new files arrive, it
doesn't have to scan the whole directory again, does it?
And having ncWMS refresh every 30 min just means another OPeNDAP
request, which should not retrigger a reindexing either, should it?

Check your cache directory (${tomcat_home}/content/thredds/cacheAged/), unless you've changed it in threddsConfig.xml. There should be an XML file, likely very large, that matches that aggregation (it probably uses the path name?). Send it to me, and also monitor when it gets updated.

-----------
Aggregation Cache

<AggregationCache>
  <dir>/temp/acache/</dir>
  <scour>24 hours</scour>
  <maxAge>90 days</maxAge>
</AggregationCache>


If you have joinExisting aggregations, coordinate information will be written to a cache directory specified by dir (see "choosing a cache directory"). If not otherwise set, the TDS will use the ${tomcat_home}/content/thredds/cacheAged/ directory.

Every scour interval, any item that hasn't been changed within maxAge will be deleted. Set scour to -1 to disable scouring.

This cache information is intended to be permanent: it stores coordinate information from each file in the aggregation, so that the file does not have to be opened each time the dataset is opened. If you have large joinExisting aggregations, there will be a very pronounced difference with and without this cache. Unless you have special needs, you can just accept the defaults for this element.

The cache information is updated based on the recheckEvery field in the joinExisting aggregation element.
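For reference, recheckEvery lives on the aggregation element itself; a minimal sketch (the scan location and suffix here are just placeholders) looks like:

  <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
    <aggregation dimName="time" type="joinExisting" recheckEvery="20 min">
      <scan location="/data/some/dir" suffix=".nc" subdirs="true"/>
    </aggregation>
  </netcdf>

When a request arrives more than recheckEvery after the last check, the scan is redone, and only files not already in the cache should need to be opened.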


Is there a doc page on caching in TDS 4.1?  I looked around and found
the doc for ehcache, but not how it's used in TDS 4.1.

Sorry, that doc is rather out of date. Also, I'm redoing caching again; when the dust settles I'll try to get some user-level docs written.


Thanks,
Rich


On Thu, Mar 4, 2010 at 10:46 PM, Paul Reuter<address@hidden>  wrote:
Hi Rich,

The TDS takes a long time to index these big aggregations the first time after 
reboot, but after that, it's really quick

I've found this to be true only for a short while - minutes for sure, hours... 
I don't know.  I set the aggregation to recheckEvery 10 minutes.  I don't know 
how this mechanism works, but it might be causing problems.  I want near 
real-time, and I hope that it simply looks for modified dirs, then modified 
files.


     <dataset name="HFRADAR, US West Coast, 500m Resolution, Hourly RTV"
       ID="HFRNet/USWC/500m/hourly/RTV" urlPath="HFRNet/USWC/500m/hourly/RTV">
       <metadata>
         <documentation type="Summary">HFRADAR, US West Coast, 500m Resolution,
           Hourly Combined Total Vectors (RTV)</documentation>
       </metadata>
       <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
         <aggregation dimName="time" type="joinExisting" recheckEvery="10 min">
           <scan location="/data1/HFRadar/RTV/USWC" subdirs="true"
             olderThan="2 min"
             regExp=".*[0-9]{12}_HFRadar_USWC_500m_rtv_SIO\.nc$"/>
         </aggregation>
       </netcdf>
     </dataset>


-- the aggregation metadata is cached, and new files just get tacked on so you 
don't have to reindex.

How would new files get tacked on without doing a complete directory scan?  
There are so many files on disk that a complete directory scan is what takes a 
long period of time.

This is where the directory structure of motherlode comes into play.  They have
daily directories, so it's possible that TDS doesn't have to rescan everything,
just check each directory's modification time.  The aggregate on motherlode is an
exception, but maybe they're not forcing a stale cache after 15 minutes like I
am.

So you think ncWMS is somehow not using that cached metadata, but forcing an
entire scan each time?

I think that ncWMS is asking for a listing of what's available every 30
minutes, and since TDS has a now-expired cache, it has to reindex.



Paul



On Thu, Mar 4, 2010 at 5:10 PM, Paul Reuter<address@hidden>  wrote:
Rich,

After a quick scan of the motherlode catalog, I suspect that the reason why
some THREDDS servers are stable and others aren't is related to indexing.
Since indexing can be controlled somewhat in configuration, I have to agree
that it may just be a configuration problem - but! -

Here: http://hfrnet.ucsd.edu:8080/thredds/HFRADAR_USWC_hourly_RTV.html
each is a composite data set.

Here: 
http://motherlode.ucar.edu/thredds/catalog/station/profiler/wind/06min/catalog.html
each is a separate directory entry (daily!)

Here: http://motherlode.ucar.edu/thredds/idd/satellite.html
I found the WEST-CONUS aggregate.  Further exploration shows 4033 times available.
Our 6km USWC has 17061 times.

I had configured ncWMS to look for new data every 30 minutes across 8 different
OPeNDAP URIs.  It's like performing 8 directory listings with regexes in parallel
every 30 minutes.  It might actually take 30+ minutes to complete a directory
scan.  I'm guessing here, but I think we suffered from a long-task, small
time-slot problem.  Since the previous indexing may not have completed before
the next round was started, the number of open items kept increasing until the
server died.

I don't have any concerns about serving data (yet).  My only concern is the
data aggregation timing.  Ideally, we would disable interval scans in THREDDS
and add/update the index manually (somehow), by triggering a
thredds-add-file-to-index event whenever a file is modified.  This theoretical
approach would remove the need for directory scans, but I don't think a triggered
add-one-file method exists.

I've tried inotify on my home server, and the first time around, it complained
about watching too many files (100k+).  We definitely have more than that on
hfrnet.  I don't think an inotify strategy would be appropriate (I'm ruling it
out before it gets suggested as a possible solution).

Paul


Rich Signell wrote:

Paul,

This is frustrating, since we have several THREDDS servers that seem to have 
problems, but the motherlode server at Unidata

http://motherlode.ucar.edu/thredds/catalog.html

has a HUGE amount of data and aggregations and must handle way more requests
than HFRNET (or any of our other servers that have problems), and it doesn't
seem to have issues.  So I'm thinking that it's just some configuration
problem, not a problem with THREDDS itself.

Any ideas how to track these things down?
-Rich

On Thu, Mar 4, 2010 at 4:41 PM, Paul Reuter<address@hidden>  wrote:
Hi Rich,

We haven't done anything with our THREDDS server since the last time, but then
again, it hasn't crashed.  ncWMS was not polling our THREDDS service, which
leads me to believe that THREDDS can't handle a certain amount of use.  I had
ncWMS indexing 8 OPeNDAP URLs every 30 minutes -- 8 parallel requests to refresh
a very large archive.  I would expect there to be some form of error
prevention in the server(s), but what I expect is often not a priority for
scientific pursuits.

Long story short: no tweaks, no problems, no ncWMS -- which is fine, since 
ncWMS isn't an essential item.

Paul


Rich Signell wrote:

Paul,

Did you make any more progress on diagnosing the TDS/ncWMS "too many files" 
problem, perhaps following the tips from John Caron?

(I'm guessing you guys got busy with other things and ncWMS is a bit further 
down the priority list, right?)

If not, I'm curious: has the THREDDS Data Server been reliable since you took
ncWMS down?

-Rich

On Mon, Feb 1, 2010 at 8:02 AM, Rich Signell<address@hidden>  wrote:
Paul,

In response to the "too many files" problem: two answers, one from John Caron,
and one from Heiko Klein.  You might try Heiko's cache settings for
threddsConfig.xml first and see if they fix the problem.
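For reference, here is a sketch of what Heiko's change (item 2 below) might look like
in threddsConfig.xml.  The element and child names follow the TDS 4.x docs for the
netCDF file cache, so double-check them against your version; minFiles=0 is the
"low-watermark" he describes, and maxFiles=200 matches the "200 files" he mentions:

  <NetcdfFileCache>
    <minFiles>0</minFiles>
    <maxFiles>200</maxFiles>
    <scour>10 min</scour>
  </NetcdfFileCache>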

@John:  does Heiko's solution make sense to you?

1. John Caron<address@hidden>  previously had this advice:

"Its useful to clear the TDS cache (need remote monitoring):

  https://motherlode.ucar.edu:9443/thredds/admin/debug?Caches/clearCache
  https://motherlode.ucar.edu:9443/thredds/admin/debug?Caches/forceNCCache
  https://motherlode.ucar.edu:9443/thredds/admin/debug?Caches/forceRAFCache

Then look at
    /usr/proc/bin/pfiles [Tomcat Process ID]

to see what files are still open.

If you can do that, send me the output of pfiles."

To do this requires setting up THREDDS to allow monitoring and debugging.  See 
this page:

http://www.unidata.ucar.edu/projects/THREDDS/tech/tutorial/TDSMonitoringAndDebugging.html

2.  From Heiko Klein<address@hidden>
I haven't enabled remote monitoring yet, but the problem disappeared after I
set the low-watermark of the file cache to 0, so after 10 minutes the cache is
completely cleared.

I think the culprit is files being removed by another cleanup process while
they are still in the cache (i.e., forecast files).

With a cache of only 10 minutes, instead of 200 files kept forever, the problem
is not completely solved, but it doesn't seem to appear any longer (stable for
a week now).  And we didn't notice any performance degradation.
Please let us know what you find out (me & address@hidden).

-Rich


On Sun, Jan 31, 2010 at 9:04 PM, Paul Reuter<address@hidden>  wrote:
Our THREDDS server went down after a host of "too many open files" messages 
filled up the disk.  I'd like to attach the tomcat logs, but they're 2.3 GB.  The 
important stuff is included here.

Rich, it looks like the server ran into problems even after your restart.  The
logs started to fill up, and we encountered additional fall-out from that.
I've since killed THREDDS and ncWMS, cleared up the disk partition and
restarted both tomcat instances (using the bin scripts).  ncWMS didn't restart
correctly.  Just so we're on the same page, after you left, I re-did everything
(for muscle memory).  I wget'd the source, compiled (where necessary), etc.  It
was as clean an installation as possible, without any legacy files or
copy-paste.

Jim, I've attached one of our HFRADAR catalog files, as well as the "master" 
catalog.

While I was trying to bring ncWMS back up today, I accidentally undeployed it
and wiped out all of my customizations.  I'd rather not repopulate that list of
OPeNDAP servers at this time.  I suspect that ncWMS might have brought down our
THREDDS server through its reloading policies.

Basically, catalina.out is a few million lines of:

Jan 31, 2010 7:51:04 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
   at java.net.PlainSocketImpl.socketAccept(Native Method)
   at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
   at java.net.ServerSocket.implAccept(ServerSocket.java:453)
   at java.net.ServerSocket.accept(ServerSocket.java:421)
   at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
   at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:317)
   at java.lang.Thread.run(Thread.java:619)

The last occurrence was at Jan 31, 2010 7:52:04 PM (UTC)... probably when
/usr filled up to 100% disk usage.
I was able to extract the logs; below is the relevant sequence of log events:

Jan 27, 2010 4:24:01 AM org.apache.jk.common.MsgAjp processHeader
SEVERE: BAD packet signature 32925
Jan 27, 2010 4:24:06 AM org.apache.jk.common.ChannelSocket receive
WARNING: can't read body, waited #259
Jan 27, 2010 4:24:06 AM org.apache.jk.common.ChannelSocket processConnection
WARNING: Closing ajp connection -1
Jan 28, 2010 7:11:05 PM org.apache.catalina.core.StandardContext reload
INFO: Reloading this Context has started
log4j:ERROR LogMananger.repositorySelector was null likely due to error in 
class reloading, using NOPLoggerRepository.
log4j:WARN No appenders could be found for logger 
(org.springframework.util.ClassUtils).
log4j:WARN Please initialize the log4j system properly.
TdsConfigContextListener.contextInitialized(): start.
Jan 28, 2010 7:17:48 PM org.geotools.util.WeakCollectionCleaner remove
WARNING: NullPointerException
java.lang.NullPointerException
   at 
org.geotools.referencing.cs.DefaultCoordinateSystemAxis.hashCode(DefaultCoordinateSystemAxis.java:1250)
   at org.geotools.referencing.cs.AbstractCS.hashCode(AbstractCS.java:642)
   at org.geotools.referencing.crs.AbstractCRS.hashCode(AbstractCRS.java:178)
   at 
org.geotools.referencing.crs.AbstractSingleCRS.hashCode(AbstractSingleCRS.java:173)
   at 
org.geotools.referencing.crs.DefaultGeographicCRS.hashCode(DefaultGeographicCRS.java:185)
   at org.geotools.util.WeakHashSet.rehash(WeakHashSet.java:212)
   at org.geotools.util.WeakHashSet.removeEntry(WeakHashSet.java:166)
   at org.geotools.util.WeakHashSet.access$000(WeakHashSet.java:55)
   at org.geotools.util.WeakHashSet$Entry.clear(WeakHashSet.java:96)
   at org.geotools.util.WeakCollectionCleaner.run(WeakCollectionCleaner.java:93)
Exception in thread "Timer-10" java.lang.IllegalStateException: DiskCache2: not 
a directory or I/O error on dir=/data/tmp/thredds/wcsCache
   at ucar.nc2.util.DiskCache2.cleanCache(DiskCache2.java:263)
   at ucar.nc2.util.DiskCache2$CacheScourTask.run(DiskCache2.java:300)
   at java.util.TimerThread.mainLoop(Timer.java:512)
   at java.util.TimerThread.run(Timer.java:462)
Exception in thread "Timer-12" java.lang.IllegalStateException: DiskCache2: not 
a directory or I/O error on dir=/data/tmp/thredds/ncSubsetCache
   at ucar.nc2.util.DiskCache2.cleanCache(DiskCache2.java:263)
   at ucar.nc2.util.DiskCache2$CacheScourTask.run(DiskCache2.java:300)
   at java.util.TimerThread.mainLoop(Timer.java:512)
   at java.util.TimerThread.run(Timer.java:462)
log4j:ERROR LogMananger.repositorySelector was null likely due to error in 
class reloading, using NOPLoggerRepository.
Jan 31, 2010 12:20:21 PM org.geotools.util.WeakCollectionCleaner remove
WARNING: NullPointerException
java.lang.NullPointerException
   at 
org.geotools.referencing.cs.DefaultCoordinateSystemAxis.hashCode(DefaultCoordinateSystemAxis.java:1250)
   at org.geotools.referencing.cs.AbstractCS.hashCode(AbstractCS.java:642)
   at org.geotools.referencing.crs.AbstractCRS.hashCode(AbstractCRS.java:178)
   at 
org.geotools.referencing.crs.AbstractSingleCRS.hashCode(AbstractSingleCRS.java:173)
   at 
org.geotools.referencing.crs.DefaultGeographicCRS.hashCode(DefaultGeographicCRS.java:185)
   at org.geotools.util.WeakHashSet.rehash(WeakHashSet.java:212)
   at org.geotools.util.WeakHashSet.removeEntry(WeakHashSet.java:166)
   at org.geotools.util.WeakHashSet.access$000(WeakHashSet.java:55)
   at org.geotools.util.WeakHashSet$Entry.clear(WeakHashSet.java:96)
   at org.geotools.util.WeakCollectionCleaner.run(WeakCollectionCleaner.java:93)
log4j:WARN No appenders could be found for logger 
(org.springframework.util.ClassUtils).
log4j:WARN Please initialize the log4j system properly.
TdsConfigContextListener.contextInitialized(): start.
Exception in thread "Timer-15" java.lang.IllegalStateException: DiskCache2: not 
a directory or I/O error on dir=/data/tmp/thredds/wcsCache
   at ucar.nc2.util.DiskCache2.cleanCache(DiskCache2.java:263)
   at ucar.nc2.util.DiskCache2$CacheScourTask.run(DiskCache2.java:300)
   at java.util.TimerThread.mainLoop(Timer.java:512)
   at java.util.TimerThread.run(Timer.java:462)
Jan 31, 2010 3:14:00 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
   at java.net.PlainSocketImpl.socketAccept(Native Method)
   at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
   at java.net.ServerSocket.implAccept(ServerSocket.java:453)
   at java.net.ServerSocket.accept(ServerSocket.java:421)
   at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
   at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:317)
   at java.lang.Thread.run(Thread.java:619)
Jan 31, 2010 3:14:00 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
   at java.net.PlainSocketImpl.socketAccept(Native Method)
   at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
   at java.net.ServerSocket.implAccept(ServerSocket.java:453)
   at java.net.ServerSocket.accept(ServerSocket.java:421)
   at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
   at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:317)
   at java.lang.Thread.run(Thread.java:619)

This file stopped abruptly, due to disk usage. The last log events are:

Jan 31, 2010 7:52:04 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketExceptio[preuter@hfrnet tomcat-logs]$ gunzip -c 
catalina.2010-01-31.log.gz | tail -20
         at java.net.PlainSocketImpl.socketAccept(Native Method)
         at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
         at java.net.ServerSocket.implAccept(ServerSocket.java:453)
         at java.net.ServerSocket.accept(ServerSocket.java:421)
         at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
         at 
org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:317)
         at java.lang.Thread.run(Thread.java:619)
Jan 31, 2010 7:52:04 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
         at java.net.PlainSocketImpl.socketAccept(Native Method)
         at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
         at java.net.ServerSocket.implAccept(ServerSocket.java:453)
         at java.net.ServerSocket.accept(ServerSocket.java:421)
         at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
         at 
org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:317)
         at java.lang.Thread.run(Thread.java:619)
Jan 31, 2010 7:52:04 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketExceptio

localhost.2010-01-31.log reveals that the Opendap servlet threw an exception at
07:00:02 PM (UTC) (well after the catalina.out errors began, and before they ended).

INFO: Shutting down log4j
Jan 31, 2010 12:19:28 PM org.apache.catalina.core.ApplicationContext log
INFO: Closing Spring root WebApplicationContext
Jan 31, 2010 12:20:21 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 31, 2010 12:20:21 PM org.apache.catalina.core.ApplicationContext log
INFO: Set web app root system property: 'webapp.root' = 
[/usr/local/tomcat-thredds/webapps/thredds/]
Jan 31, 2010 12:20:21 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing log4j from 
[/usr/local/tomcat-thredds/webapps/thredds/WEB-INF/log4j.xml]
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'root'
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'catalogService'
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'catalogGen'
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'dqc'
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'cdmRemote'
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'admin'
Jan 31, 2010 12:20:22 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'wms'
Jan 31, 2010 7:00:02 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet Opendap threw exception
java.lang.NullPointerException
   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:302)
   at 
org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
   at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   at org.apache.log4j.Category.callAppenders(Category.java:206)
   at org.apache.log4j.Category.forcedLog(Category.java:391)
   at org.apache.log4j.Category.log(Category.java:856)
   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:300)
   at thredds.server.opendap.OpendapServlet.doGet(OpendapServlet.java:152)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at thredds.servlet.filter.CookieFilter.doFilter(CookieFilter.java:54)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
thredds.servlet.filter.RequestQueryFilter.doFilter(RequestQueryFilter.java:121)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
thredds.servlet.filter.RequestPathFilter.doFilter(RequestPathFilter.java:105)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
   at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
   at 
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
   at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
   at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
   at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
   at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
   at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
   at java.lang.Thread.run(Thread.java:619)
Jan 31, 2010 7:00:02 PM org.apache.catalina.core.ApplicationDispatcher invoke
SEVERE: Servlet.service() for servlet jsp threw exception
org.apache.jasper.JasperException: File "/WEB-INF/jsp/errorpages/500.jsp" not 
found
   at 
org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:51)
   at 
org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:409)
   at 
org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:116)
   at org.apache.jasper.compiler.JspUtil.getInputStream(JspUtil.java:849)
   at 
org.apache.jasper.xmlparser.XMLEncodingDetector.getEncoding(XMLEncodingDetector.java:108)
   at 
org.apache.jasper.compiler.ParserController.determineSyntaxAndEncoding(ParserController.java:348)
   at 
org.apache.jasper.compiler.ParserController.doParse(ParserController.java:207)
   at 
org.apache.jasper.compiler.ParserController.parseDirectives(ParserController.java:120)
   at org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:165)
   at org.apache.jasper.compiler.Compiler.compile(Compiler.java:332)
   at org.apache.jasper.compiler.Compiler.compile(Compiler.java:312)
   at org.apache.jasper.compiler.Compiler.compile(Compiler.java:299)
   at 
org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:586)
   at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
   at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
   at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646)
   at 
org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:438)
   at 
org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374)
   at 
org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302)
   at 
org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:416)
   at 
org.apache.catalina.core.StandardHostValve.status(StandardHostValve.java:343)
   at 
org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:287)
   at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142)
   at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
   at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
   at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
   at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
   at java.lang.Thread.run(Thread.java:619)
Jan 31, 2010 7:00:02 PM org.apache.catalina.core.StandardHostValve custom
SEVERE: Exception Processing ErrorPage[errorCode=500, 
location=/WEB-INF/jsp/errorpages/500.jsp]
org.apache.jasper.JasperException: File "/WEB-INF/jsp/errorpages/500.jsp" not 
found
   at 
org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:51)
   at 
org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:409)
   at 
org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:116)
   at org.apache.jasper.compiler.JspUtil.getInputStream(JspUtil.java:849)
   at 
org.apache.jasper.xmlparser.XMLEncodingDetector.getEncoding(XMLEncodingDetector.java:108)
   at 
org.apache.jasper.compiler.ParserController.determineSyntaxAndEncoding(ParserController.java:348)
   at 
org.apache.jasper.compiler.ParserController.doParse(ParserController.java:207)
   at 
org.apache.jasper.compiler.ParserController.parseDirectives(ParserController.java:120)
   at org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:165)
   at org.apache.jasper.compiler.Compiler.compile(Compiler.java:332)
   at org.apache.jasper.compiler.Compiler.compile(Compiler.java:312)
   at org.apache.jasper.compiler.Compiler.compile(Compiler.java:299)
   at 
org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:586)
   at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
   at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
   at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646)
   at 
org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:438)
   at 
org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374)
   at 
org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302)
   at 
org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:416)
   at 
org.apache.catalina.core.StandardHostValve.status(StandardHostValve.java:343)
   at 
org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:287)
   at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142)
   at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
   at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
   at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
   at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
   at java.lang.Thread.run(Thread.java:619)
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring FrameworkServlet 'catalogService'
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring FrameworkServlet 'root'
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring FrameworkServlet 'catalogGen'
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring FrameworkServlet 'cdmRemote'
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring FrameworkServlet 'dqc'
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring FrameworkServlet 'admin'
Jan 31, 2010 10:36:39 PM org.apache.catalina.core.ApplicationContext log
INFO: Destroying Spring Frame

Logs out -
Paul



Rich Signell wrote:

Lisa, Paul, & Mark,

To see if I could replicate Jim's problem, I visited the ncWMS/Godiva2
page for the HFRNET server
http://hfrnet.ucsd.edu:8480/ncWMS/godiva2.html
and all the datasets were showing errors.

So then I checked the HFRNET THREDDS server this morning, and all the
datasets seemed to be reporting Server Error 500 when I clicked on the
OPeNDAP access.

Just to see if the problems would clear up, I stopped and started
THREDDS from the Tomcat GUI admin page (Paul, I remembered the
password -- I guess it was a good one!).  It took a while to shut down
(perhaps 30 seconds).

@Paul:  I'd really like to get to the bottom of this problem -- and
I'm sure John Caron from Unidata would be willing to help.  Do the
THREDDS logs from this morning reveal the problem?  (I probably
restarted about 7:15 EST).

After the restart, the OPeNDAP access seemed to work okay again, and
when I went to the ncWMS admin page
http://hfrnet.ucsd.edu:8480/ncWMS/admin/index.jsp
and did a "save configuration" to reload the data into ncWMS, all
the datasets showed "ready", and Godiva2
http://hfrnet.ucsd.edu:8480/ncWMS/godiva2.html
seemed to be working fine (see attached screen grab).

Paul or Mark could supply you with the final THREDDS catalogs, but
here's what we started with (this is before we corrected the
cut-and-pasted metadata, but the dataset and scan info are correct).

-Rich

On Thu, Jan 28, 2010 at 4:26 PM, Lisa Hazard<address@hidden>  wrote:


Hi Jim,

Hmmm... I don't think we did any significant testing with clients.  This is
helpful to know.  If you are able to track down the issues and let us know,
that would be really helpful.  We are definitely not expert TDS
users/servers.  I don't know what Rich used initially for the catalog page.
  I think he just used a copy of something that he had on his system and we
modified it from there.  I don't recall hearing of a radars.xml template.  I
was gone on the second (coding) day at San Clemente so really Mark and Paul
took over after that.  I'll touch base with them, but perhaps Rich can
answer your second question.
~that's a shame you have to miss the DIF calls... I'm sure we can pick a new
day/time!

Best,
Lisa

Scripps Institution of Oceanography
Operations Manager, Coastal Observing R&D Center
9500 Gilman Drive               phone:  858-822-2873
M/C 0213                fax:       858-822-1903
La Jolla, CA 92093-0213 email:    address@hidden



James T. Potemra wrote:


Hi Lisa:

Thanks for the update.  I was able to read in a sample file with some
small problems, e.g., in MATLAB I get a curious error:

   The OPeNDAP server returned the following message:
   Forbidden: Contact the server administrator.

In the newer version of GrADS I get a core dump, but I think that has
something to do with the netCDF format; older GrADS versions work OK.
Anyway, I will try to track these issues down and let you know.  I should
be able to add this to the traj app.  By the way, do you know what Rich used
for the TDS catalog page for HFR (the install package comes with a
"radars.xml")?  Thanks again,

Jim

PS: I sent a message to Rob R. a month ago about the bi-weekly DIF conference
calls; I'm teaching this semester on Wed/Fri mornings, so I can't make those
calls any more.


Lisa Hazard wrote:


Hi Jim,

I greatly apologize for my delayed response.  Yes, Rich was out the 2nd
week of January to help us install TDS for HFR.  It went well and we were
able to get an initial server installed.  Paul then had to take additional
time to complete the setup and get things operational.  At this point, I think we are
fairly stable.  We haven't "advertised" this yet, but plan to do so in the
near future.

THREDDS Server
http://hfrnet.ucsd.edu:8080/thredds/catalog.html

Installation notes are posted on the scratch wiki:
http://scratch.ucsd.edu/wiki/

Tomcat: http://scratch.ucsd.edu/wiki/notes/tomcat/start
THREDDS: http://scratch.ucsd.edu/wiki/notes/thredds/start
ERDDAP: http://scratch.ucsd.edu/wiki/notes/erddap/start


~Do you think your trajectory code could be written to use this output?
http://oos.soest.hawaii.edu/google_maps/trajk2.html

Best,
Lisa

Scripps Institution of Oceanography
Operations Manager, Coastal Observing R&D Center
9500 Gilman Drive        phone:  858-822-2873
M/C 0213        fax:       858-822-1903
La Jolla, CA 92093-0213    email:    address@hidden


James T. Potemra wrote:


Hi Lisa:

Hope you're off to a great new year.  A few weeks back during a DIF
conference call you mentioned that Rich Signell was coming out to set up
THREDDS capabilities for HFR data.  I'm really interested in this since we
now have one HFR site running (hopefully two more very soon).  I tried
getting swath data into TDS but was unsuccessful.  Anyway, I'm just curious
about when this will happen and what the results are.  Thanks,

Jim




________________________________


--
Dr. Richard P. Signell   (508) 457-2229
USGS, 384 Woods Hole Rd.
Woods Hole, MA 02543-1598


