Hi,
Roy Mendelssohn wrote:
See http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#Large%20File%20Support0.
It has been possible since netCDF 3.6, but with severe bugs, as I posted to
this mailing list on 09/12/2007 ("[netcdfgroup] possible bug in 3.6.2 with
variables > 4GB"). For the variable types NC_BYTE, NC_CHAR and NC_SHORT it
crashes badly. I never received an answer to that bug report; I don't know
why nobody cared :-/
BTW: a copy of the old mail follows below.
Cheers,
Mario
----
I hope this bug hasn't been reported before; a quick check of the list
archive didn't turn it up.
This concerns writing large (>4GB) variables with netCDF's large-file
support. In netcdf-3.6.2/libsrc/var.c, around line 400, the code reads:
    if( varp->xsz <= X_UINT_MAX / product )
        /* if integer multiply will not overflow */
    {
        varp->len = product * varp->xsz;
    } else {
        /* OK for last var to be "too big", indicated by this special len */
        varp->len = X_UINT_MAX;
    }

    switch(varp->type) {
    case NC_BYTE :
    case NC_CHAR :
    case NC_SHORT :
        if( varp->len%4 != 0 )
        {
            varp->len += 4 - varp->len%4; /* round up */
            /* *dsp += 4 - *dsp%4; */
        }
        break;
    default:
        /* already aligned */
        break;
    }
In the case of NC_BYTE, NC_CHAR and NC_SHORT, varp->len ends up being
X_UINT_MAX+1 instead of X_UINT_MAX, because the rounding is also applied
after the overflow branch has set varp->len to the special value X_UINT_MAX,
which is not a multiple of 4. This in turn triggers an assertion failure
when ncx_put_size_t is called later:

ncx.c:1812: ncx_put_size_t: Assertion `*ulp <= 4294967295U' failed.
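
To make the arithmetic concrete, here is a small standalone sketch (not
library code; it just repeats the same rounding on the X_UINT_MAX marker,
whose value is taken from the assertion message above):

    #include <assert.h>
    #include <stdio.h>

    #define X_UINT_MAX 4294967295U  /* the special "too big" marker */

    int main(void)
    {
        /* Use a 64-bit value so the result is visible instead of wrapping. */
        unsigned long long len = X_UINT_MAX;

        /* Same alignment step as in var.c: X_UINT_MAX % 4 == 3, so 1 is added. */
        if (len % 4 != 0)
            len += 4 - len % 4;

        printf("len = %llu\n", len);  /* prints 4294967296, i.e. X_UINT_MAX + 1 */

        /* The same condition that ncx_put_size_t asserts on -- this aborts. */
        assert(len <= 4294967295U);
        return 0;
    }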
I could not think of a better fix than making the rounding conditional on
the product-overflow check (I just moved the else-part further down, so the
special X_UINT_MAX value is never rounded up):
    if( varp->xsz <= X_UINT_MAX / product )
        /* if integer multiply will not overflow */
    {
        varp->len = product * varp->xsz;

        switch(varp->type) {
        case NC_BYTE :
        case NC_CHAR :
        case NC_SHORT :
            if( varp->len%4 != 0 )
            {
                varp->len += 4 - varp->len%4; /* round up */
                /* *dsp += 4 - *dsp%4; */
            }
            break;
        default:
            /* already aligned */
            break;
        }
    } else {
        /* OK for last var to be "too big", indicated by this special len */
        varp->len = X_UINT_MAX;
    }
I hope this bug report is useful. If you can send me a better patch against
netcdf-3.6.2, I would greatly appreciate it.
Cheers,
Mario Emmenlauer
----
On Feb 26, 2008, at 1:11 PM, Joe Sirott wrote:
Hi,
In the "classic" netCDF file format (netCDF-3), a variable without a
record dimension cannot be larger than 2GB. This limitation has been
giving me a lot of headaches lately. I know that netCDF-4 is supposed to
solve this problem, but there are a number of reasons why netCDF-4 is
not a good option for me (no Java write support, for one).
I could be missing something, but it seems like a very small change to the
netCDF-3 file format would solve this problem. The only change required
would be storing the variable size info in the netCDF header as a 64-bit
integer instead of a 32-bit one (and, of course, updating the version info
in the header).
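
For illustration, here is a rough sketch (not netCDF code; the function name
is made up, and the real classic-format limits are somewhat tighter than the
full 32-bit range, as described in the FAQ linked above) of the constraint a
32-bit size field imposes on a fixed-size variable:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: the total byte count of a non-record variable must
     * fit the 32-bit size field; with a 64-bit field it would not have to. */
    static int fits_32bit_size_field(const uint64_t *dimlens, int ndims,
                                     uint64_t elem_size)
    {
        uint64_t bytes = elem_size;
        for (int i = 0; i < ndims; i++)
            bytes *= dimlens[i];
        return bytes <= UINT32_MAX;
    }

    int main(void)
    {
        uint64_t dims[] = { 1024, 1024, 1024 };             /* 2^30 cells */
        printf("%d\n", fits_32bit_size_field(dims, 3, 8));  /* 8 GiB -> 0 */
        return 0;
    }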
I'm guessing that I'm not the only netCDF user who has run into this
problem and who is also reluctant to move to netCDF-4. Any possibility
that Unidata could make these changes?
Cheers,
Joe S.
Roy Mendelssohn
Supervisory Operations Research Analyst
NOAA/NMFS
Environmental Research Division
Southwest Fisheries Science Center
1352 Lighthouse Avenue
Pacific Grove, CA 93950-2097
e-mail: Roy.Mendelssohn@xxxxxxxx (note new e-mail address)
voice: (831)-648-9029
fax: (831)-648-8440
www: http://www.pfeg.noaa.gov/
"Old age and treachery will overcome youth and skill."
------------------------------------------------------------------------
_______________________________________________
netcdfgroup mailing list
netcdfgroup@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: http://www.unidata.ucar.edu/mailing_lists/