Tennessee Leeuwenburg wrote:
Hi guys,
I am looking at serving some very large files through THREDDS. I found
through trial and error that, on one particular server, somewhere
between 60 MB and 300 MB THREDDS stopped being able to start serving up
files before the client timed out.
Unfortunately, this machine services a number of people, so I had to do
my testing elsewhere. I have a 579 MB NetCDF file on my desktop machine
and tried a local test with it, installing the THREDDS server there
alongside the file. What I found was that the THREDDS server was
running out of heap space. Now, I know I can alter the amount of heap
space the JVM has available somehow, and that's what I'll try next, but
I don't know whether that's a reliable solution. I don't really know
how much memory THREDDS needs on top of the size of the file it's
trying to serve, and of course multiple incoming requests might also
affect this - I don't know how Tomcat deals with that kind of thing in
terms of creating new JVM instances etc.
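For what it's worth, the way I'm planning to raise the heap is to set the
JVM's maximum heap size through JAVA_OPTS before starting Tomcat, along
these lines - though the 1024m value is just a guess on my part, on the
theory that it needs to be comfortably bigger than the largest file:

export JAVA_OPTS="-Xmx1024m"
$CATALINA_HOME/bin/startup.sh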
Here is the error from catalina.out:
DODServlet ERROR (anyExceptionHandler): java.lang.OutOfMemoryError:
Java heap space
requestState:
dataset: 'verylarge.nc'
suffix: 'dods'
CE: ''
compressOK: false
InitParameters:
maxAggDatasetsCached: '20'
maxNetcdfFilesCached: '100'
maxDODSDatasetsCached: '100'
displayName: 'THREDDS/DODS Aggregation/NetCDF/Catalog Server'
java.lang.OutOfMemoryError: Java heap space
So my question is: what's the best way to make a reliable server that
can serve these large files?
Cheers,
-Tennessee

It's the size of the data request that determines the memory needed,
not the size of the file per se.
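In your log the CE (constraint expression) is empty, so the client is
asking for the entire dataset in one go. A client that subsets the
request with a constraint expression keeps the per-request memory down;
it would look something like this (the host, path and variable name
"tmax" are just placeholders for whatever your catalog and dataset
actually use):

http://yourserver:8080/.../verylarge.nc.dods?tmax[0:1:0][0:99][0:99]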
Unfortunately, we currently have to bring the whole request into memory
before sending it out. Eventually we will modify both netcdf-java and
netcdf-dods to allow data to be streamed; however, I doubt we can get
to it before the end of the year.

Meanwhile, your only recourse is to increase the Java heap space. You
could also modify the code to test for the data request size and reject
anything that's too big.
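If you go the rejection route, one rough way to do it without touching
the servlet internals is a standard servlet Filter in front of the DODS
servlet that refuses requests for files over some limit. This is only a
sketch - it checks the size of the file on disk rather than the actual
request size (it ignores the CE), and the init-param names, data
directory lookup and suffix handling are made up for illustration:

import java.io.File;
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rough sketch: reject requests whose backing file is larger than a configured limit.
public class MaxSizeFilter implements Filter {
    private long maxBytes;
    private String dataDir;

    public void init(FilterConfig cfg) {
        // "maxRequestBytes" and "dataDir" are hypothetical init-params set in web.xml
        maxBytes = Long.parseLong(cfg.getInitParameter("maxRequestBytes"));
        dataDir = cfg.getInitParameter("dataDir");
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String path = ((HttpServletRequest) req).getPathInfo(); // e.g. "/verylarge.nc.dods"
        if (path != null) {
            // strip the DODS suffix to get back to the file name on disk
            String fileName = path.replaceAll("\\.(dods|dds|das|asc|ascii)$", "");
            File f = new File(dataDir, fileName);
            if (f.exists() && f.length() > maxBytes) {
                ((HttpServletResponse) res).sendError(
                        HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE,
                        "Request too large for this server; please subset the dataset.");
                return;
            }
        }
        chain.doFilter(req, res);
    }

    public void destroy() {
    }
}

A smarter version would parse the constraint expression and estimate the
actual number of bytes requested, but that requires hooking into the
DODS servlet code itself.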