Re: [netcdfgroup] HDF error, and now what?

Thank you for your help, Russ and David.

1) Re: log level
This is a great tip. I saw a reference to the nf_set_log_level function on a 
listserv, but I could not find the symbol in the library. Somehow I missed the 
--enable-logging build option.
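
For anyone else who runs into this, here is a minimal sketch of how I plan to 
use it once the C library is rebuilt with --enable-logging (the file and 
variable names are the ones from the example file discussed below; the 
nf_set_log_level symbol is only present in logging-enabled builds):

    program log_hdf_error
      implicit none
      include 'netcdf.inc'
      integer :: ncid, varid, status, ilog
      integer :: istart(4), icount(4)
      integer :: tmp(2,2,40,1)

      ! nf_set_log_level only does something (and the symbol is only present)
      ! when the underlying netCDF-C library was configured with
      ! --enable-logging. Level 1 is terse, 3 is moderate, 5 is very verbose.
      ilog = nf_set_log_level(3)

      status = nf_open('ncom_relo_amseas_2010072600_t015.nc4', nf_nowrite, ncid)
      if (status .ne. nf_noerr) stop 'nf_open failed'

      status = nf_inq_varid(ncid, 'water_temp', varid)
      if (status .ne. nf_noerr) stop 'nf_inq_varid failed'

      ! The read that triggers "NetCDF: HDF error" for me; the HDF5 error
      ! stack should now be printed when it fails.
      istart = (/ 384, 371, 1, 1 /)
      icount = (/ 2, 2, 40, 1 /)
      status = nf_get_vara_int(ncid, varid, istart, icount, tmp)
      if (status .ne. nf_noerr) print *, trim(nf_strerror(status))

      status = nf_close(ncid)
    end program log_hdf_error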

2) Re: corrupted file
Yes, some bits in the file seem to be corrupted. If I re-download the file and 
compress it, the problem goes away. But many of the data values in the file are 
not corrupted, and it is a pain to have to re-download the whole file (about 
24 min per file). The somewhat worrisome part is that nccopy will happily 
compress the original netCDF file without throwing an error. It would be nice 
if the library provided a little more information so that I could read around 
the bad bits. I suppose that is what I do now, but I wonder if I am throwing 
out more data than necessary.
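
To be concrete about what I mean by "reading around the bad bits," here is 
roughly what I do now: read the variable one horizontal layer at a time and 
skip any slab that comes back with an error, rather than discarding the whole 
variable. This is only a sketch; the horizontal dimension lengths below are 
placeholders that would come from ncdump -h.

    program read_around_bad_bits
      implicit none
      include 'netcdf.inc'

      ! Placeholder grid sizes: substitute the actual dimension lengths
      ! reported by "ncdump -h" for the file in question.
      integer, parameter :: nx = 100, ny = 100, nz = 40
      integer :: ncid, varid, status, k, ngood, nbad
      integer :: istart(4), icount(4)
      integer :: layer(nx,ny)

      status = nf_open('ncom_relo_amseas_2010072600_t015.nc4', nf_nowrite, ncid)
      if (status .ne. nf_noerr) stop 'nf_open failed'
      status = nf_inq_varid(ncid, 'water_temp', varid)
      if (status .ne. nf_noerr) stop 'nf_inq_varid failed'

      ngood = 0
      nbad  = 0
      do k = 1, nz
        ! Read one horizontal layer at a time so that a corrupted chunk
        ! only costs that layer, not the whole variable.
        istart = (/ 1, 1, k, 1 /)
        icount = (/ nx, ny, 1, 1 /)
        status = nf_get_vara_int(ncid, varid, istart, icount, layer)
        if (status .eq. nf_noerr) then
          ngood = ngood + 1
          ! ... use or save "layer" here ...
        else
          nbad = nbad + 1
          print *, 'skipping layer ', k, ': ', trim(nf_strerror(status))
        end if
      end do
      print *, 'layers read: ', ngood, '  layers skipped: ', nbad

      status = nf_close(ncid)
    end program read_around_bad_bits

Something like this salvages the layers whose chunks are intact, but a 
finer-grained (chunk-by-chunk) read would presumably throw out even less, 
which is why more detail from the library about where the bad bits are would 
help.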

Regards,

Ed

On Jul 24, 2014, at 10:29 AM, Russ Rew <russ@xxxxxxxxxxxxxxxx> wrote:

> Ed,
> 
> Getting at the HDF5 error messages is possible, but not easy:
> 
> If you build the netCDF C library with the --enable-logging configure
> option, rebuild the netCDF Fortran library against that C library (or maybe
> it will just work if it's a shared library), and then put the following in
> your code somewhere before the HDF error occurs:
> 
>   nf_set_log_level(3)
> 
> you will get lots of output, including various messages on the HDF5 error
> stack when the error occurs. If you change the "3" to a "1" you get less
> output; change it to a "5" and you get more.
> 
> Sometimes this can be helpful in debugging a problem in the HDF5 layer or a
> bug in how it's called.
> 
> --Russ
> 
> 
> 
> On Thu, Jul 24, 2014 at 10:44 AM, David W. Pierce <dpierce@xxxxxxxx> wrote:
> Seems like the null hypothesis is simply that the files are corrupted,
> perhaps because of disk or copying errors. A few bad values in many
> terabytes of data seem consistent with that at first glance. Have you
> tried remaking the source files on a separate disk, if that is
> possible? Or are there other copies elsewhere that can be checksummed
> and compared to your copy?
> 
> Or perhaps I'm misunderstanding, and your basic point is that the HDF
> layer should cope with disk errors more gracefully? I don't know how
> the compression works in detail, but I've always imagined that once
> you get an error in a compressed file you can't recover the subsequent
> data very easily. If you *could* recover data subsequent to an error,
> that would imply some level of repeated information, which presumably
> the compression is supposed to remove in the first place.
> 
> Regards,
> 
> --Dave
> 
> 
> On Thu, Jul 24, 2014 at 9:26 AM, ezaron <ezaron@xxxxxxx> wrote:
> >
> > Thanks, Rob.
> >
> > The frustrating thing about it is that the error message is completely
> > opaque.
> >
> > I wrap my netcdf calls in a function that reports the error. If the return
> > code is not equal to NF_NOERR, then I call nf_strerror. The error occurs on
> > a call to nf_get_vara_int, which returns a status of -101, and nf_strerror
> > reports this as
> > "NetCDF: HDF error"
> >
> > In case you (or a netcdf developer) can take a look, I have posted an
> > example file here:
> >
> > http://maki.cee.pdx.edu/~ezaron/NETCDF4examplefailure/ncom_relo_amseas_2010072600_t015.nc4
> >
> > The error occurs with a call like the following:
> > nf_get_vara_int(ncid,varid,istart4,icount4,tmp)
> > istart4 = (384,371,1,1)
> > icount4 = (2,2,40,1)
> > varid corresponds to the "water_temp" variable
> >
> > It is a very strange error. If I open the file in ncview, for example, I can
> > view the other variables without problems. But if I try to view the
> > water_temp variable, ncview also crashes and reports "NetCDF: HDF error".
> >
> > ncks reports the same error when I try to extract water_temp.
> >
> > But if I use "ncdump -v water_temp", the values are extracted from the file
> > without any problems up to a certain point when the program crashes,
> > reporting:
> > NetCDF: HDF error
> > Location: file vardata.c; line 479
> >
> > I see this bug mentioned in the ncks release notes:
> > netCDF #HZY-708311
> > http://www.unidata.ucar.edu/mailing_lists/archives/netcdfgroup/2014/msg00045.html
> > and wonder if it is the same problem.
> >
> > -Ed
> 
> 
> 
> --
> David W. Pierce
> Division of Climate, Atmospheric Science, and Physical Oceanography
> Scripps Institution of Oceanography, La Jolla, California, USA
> (858) 534-8276 (voice)  /  (858) 534-8561 (fax)    dpierce@xxxxxxxx
> 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Edward D. Zaron
Research Assistant Professor
Department of Civil and Environmental Engineering
Portland State University
Portland, OR 97207-0751
Phone: (503)-725-2435
FAX: (503)-725-5950
ezaron@xxxxxxx
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
