John,
I am trying to access a really large NetCDF file (~3GB) over HTTP, but
it fails with the error shown below.
uri='http://www.gri.msstate.edu/rsearch_data/nopp/fvcom_2.5gb.ncml';
>>>>
GridDataset gds = GridDataset.open(uri);
?? Java exception occurred:
java.io.IOException: Server has malformed Content-Length header
    at ucar.unidata.io.http.HTTPRandomAccessFile.<init>(HTTPRandomAccessFile.java:110)
    ...
<<<<
In 'HTTPRandomAccessFile.java', I see that this error occurs because the
'Content-Length' HTTP header is parsed as an 'Integer'. So any file whose
size exceeds the 32-bit signed integer range (2,147,483,647 bytes) will
hit this problem.
>>>> nj2.2.22
public HTTPRandomAccessFile(String url, int bufferSize) throws IOException {
  .....
  head = method.getResponseHeader("Content-Length");
  .......
  try {
    total_length = Integer.parseInt(head.getValue());
  } catch (NumberFormatException e) {
    throw new IOException("Server has malformed Content-Length header");
  }
<<<<
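
Just to confirm the overflow outside of netcdf-java: Integer.parseInt()
rejects any value above 2147483647, so a ~3GB Content-Length can never get
through that code path. Here is a minimal standalone check (the header value
is a made-up 3GB size, not what my server actually returns):
>>>>
// Standalone check, not netcdf-java code.
public class ContentLengthCheck {
  public static void main(String[] args) {
    String contentLength = "3221225472"; // 3GB, > Integer.MAX_VALUE (2147483647)
    try {
      int len = Integer.parseInt(contentLength);
      System.out.println("parsed as int: " + len);
    } catch (NumberFormatException e) {
      // This is the exact failure HTTPRandomAccessFile reports as
      // "Server has malformed Content-Length header".
      System.out.println("Integer.parseInt failed: " + e);
    }
    // The same string parses fine as a long.
    System.out.println("parsed as long: " + Long.parseLong(contentLength));
  }
}
<<<<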
Is there any strong reason the 'Content-Length' cannot be parsed as a
'long' to accommodate file sizes > 2.1GB?
My internal tests show that changing the code to parse the value as a
'long' does solve the problem, but I am not sure whether I am setting
myself up for some unforeseen disaster in other parts of the netcdf-java
API.
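
For reference, my local change is essentially the sketch below; the field
declaration shown is my guess at how 'total_length' is declared, and the key
point is that its type has to become 'long' along with the parse:
>>>>
long total_length;  // field type widened from int to long

...
try {
  // Long.parseLong accepts sizes above 2^31-1 bytes (> ~2.1GB).
  total_length = Long.parseLong(head.getValue());
} catch (NumberFormatException e) {
  throw new IOException("Server has malformed Content-Length header");
}
<<<<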
I would appreciate your valuable input.
Thanks,
Sachin.
--
Sachin Kumar Bhate, Research Associate
MSU-High Performance Computing Collaboratory, NGI
John C. Stennis Space Center, MS 39529
http://www.northerngulfinstitute.org/