Re: [thredds] Running out of memory on a joinExisting aggregation data set

Hi John,

I find it very encouraging that you can run with 20,000 files.

We've been running with 1024 MB allocated to the JVM and haven't had any more trouble, so I think memory was the problem. On investigating further, it appears that no heap size had been specified before, so the trouble was occurring with the default 64 MB JVM. The 256 MB setting was for a Tomcat instance serving a different purpose.
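For anyone else who hits this, the heap can be raised with something along these lines in Tomcat's bin/setenv.sh (the exact file and options depend on your Tomcat version and how it is launched, so treat this as a sketch, not our exact configuration):

  # bin/setenv.sh -- sourced by catalina.sh at startup if it exists
  # -Xms sets the initial heap, -Xmx the maximum; 1024m matches what we are running now
  CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"
  export CATALINA_OPTS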

I'll leave the individual files alone since abstracting out the common attributes won't save us that much.

Thanks for your help, John and Roy.

-Ken

John Caron wrote:
Hi Ken:


Ken Tanaka wrote:
John and Roy,

It looks like the current Xmx memory setting is 256 MB for the Tomcat
process. I'll arrange to have the memory increased,
...
I'm currently testing with 20,000 files, which it is handling OK. There is a small amount of memory retained for each file, which I am currently trying to minimize. I think you'll find that if you can give the server 1 GB or more, your case will work fine. You will also see a significant performance improvement by using Java 1.6.
Meanwhile, I will try to keep things as lean as possible. We will eventually 
get object caching working, so that there are no memory limits at all.

New development is being done only on the 4.0 branch. It's stable enough to 
start testing with if you want to compare performance, etc. Look for it on 
the TDS home page:

  http://www.unidata.ucar.edu/projects/THREDDS/tech/TDS.html

We were thinking of moving a large number of Global Attributes
(metadata) out of the files and into an NcML file since most of it is
the same (constant) in all the files. We felt that it made sense to
place the aggregation and attributes into the NcML file rather than
clutter the catalog.xml in this case. Will abstracting out the repeated
Global Attributes and providing them in a single NcML wrapper save memory?

There won't be any real memory savings from factoring out attributes into NcML. 
There will be some minor speedup if you actually remove them from the netCDF 
files. For now, I wouldn't try to do such optimizations.
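If you do decide to factor them out later, the NcML wrapper would look roughly like this (the attribute names, values, and scan location below are only placeholders):

  <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
    <!-- global attributes shared by every file, declared once in the wrapper -->
    <attribute name="institution" value="NOAA/NGDC" />
    <attribute name="Conventions" value="CF-1.0" />
    <!-- joinExisting aggregation over all the netCDF files in a directory -->
    <aggregation dimName="time" type="joinExisting">
      <scan location="/data/mydata/" suffix=".nc" />
    </aggregation>
  </netcdf>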

Let me know if your problem goes away with a larger memory size.
_______________________________________________
thredds mailing list
thredds@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: http://www.unidata.ucar.edu/mailing_lists/

--
= Enterprise Data Services Division ===============
| CIRES, National Geophysical Data Center / NOAA  |
| 303-497-6221                                    |
= Ken.Tanaka@xxxxxxxx =============================


