PS. I have tested version 4.1.2beta1. When chunking is on, the program crashes at 75 (!) items: it takes more than 3 GB of memory. So this is a move in the wrong direction.

Regards, Sergei

-----Original Message-----
From: Shibaev, Sergei
Sent: 08 November 2010 16:17
To: 'netcdfgroup@xxxxxxxxxxxxxxxx'
Subject: FW: [netcdfgroup] Netcdf-4.1.1 fault

Hello NetCDF Group,

In a previous mail I reported a Netcdf-4.1.1 fault. Further investigation shows that this is a general NetCDF API problem. I wrote a simple test program; it writes a requested number of small identical items (1000 ints each) into a NetCDF file. The items are stored in groups of 10. All items use the same dummy dimension (maybe this is the root of the problem) defined in the root group. For all testing I use HDF5-1.8.4patch1.

The attached plot shows file writing time as a function of the number of items. The traces are marked as:

401  - version 4.0.1 without chunking;
401C - version 4.0.1 with chunking: chunk size = 1000;
411  - version 4.1.1 without chunking;
411C - version 4.1.1 with chunking;
H5   - time of HDF5 file writing; that file contains the same information except the dummy dimension: the item size is stored in the item itself.

The program crashes for one of two reasons: "HDF error" or "glibc detected: double free or corruption". The crashes happen when the memory consumption (as reported by "top") exceeds 3 GB. The memory consumption for 10000 items is approximately:

401  - 880 MB;
401C - 295 MB (note: better than 401);
411  - 848 MB;
411C - 40000 MB!!! (extrapolated; it crashes at 754 items).

Obviously there is a bug in the chunking handling in version 4.1.1. Tests with compression give practically the same results. And there is a limit on NetCDF file complexity, at least for the existing API implementation.

Regards, Sergei

-----Original Message-----
From: netcdfgroup-bounces@xxxxxxxxxxxxxxxx [mailto:netcdfgroup-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Shibaev, Sergei
Sent: 27 October 2010 14:32
To: netcdfgroup@xxxxxxxxxxxxxxxx
Subject: [netcdfgroup] Netcdf-4.1.1 fault

Hi,

I have installed netcdf-4.1.1 with the recommended hdf5-1.8.4patch1 and got an interesting effect. My program writes a complex netCDF-4 file which combines data from many sources, including HDF5 files, so both the netCDF-4 and HDF5 APIs are used simultaneously. The program can write only the first 780 data items, produced as copies from 4 HDF5 files, and then fails in nc_put_vara_short() with error -101: "Error at HDF5 layer"; the HDF error printout is attached. And this error is not "at HDF5 layer": when I downgrade netcdf back to version 4.0.1 (only netcdf; hdf5 is the same 1.8.4patch1), the problem disappears, and the program successfully writes 6200 items, copying 26 HDF5 files. It seems that netcdf-4.1.1 interferes with the HDF5 library not only at file closing (the netCDF file is opened before the HDF5 files and closed last).

Regards, Sergei
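[For reference, a minimal sketch of the kind of test program described in the 8 November message: N identical 1000-int items, 10 per group, all sharing one dummy dimension in the root group, with optional chunking (chunk size = 1000). The file/variable/group names, command-line handling, and error-checking macro are illustrative assumptions, not taken from the original test code.]

/* nc_stress.c - hedged sketch of a stress test like the one described above.
 * Names and error handling are illustrative, not from the original program.
 * Build: cc -std=c99 nc_stress.c -o nc_stress -lnetcdf
 */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define ITEM_LEN        1000   /* each item is 1000 ints           */
#define ITEMS_PER_GROUP 10     /* items are stored in groups of 10 */

#define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(s_)); exit(1); } } while (0)

int main(int argc, char **argv)
{
    int nitems  = (argc > 1) ? atoi(argv[1]) : 1000;  /* number of items   */
    int chunked = (argc > 2) ? atoi(argv[2]) : 0;     /* 1 = use chunking  */
    size_t chunksize[1] = { ITEM_LEN };               /* chunk size = 1000 */
    int ncid, dimid, grpid;
    int data[ITEM_LEN];

    for (int i = 0; i < ITEM_LEN; i++)
        data[i] = i;

    CHECK(nc_create("nc_stress.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    /* single shared dummy dimension defined in the root group */
    CHECK(nc_def_dim(ncid, "dummy", ITEM_LEN, &dimid));

    grpid = ncid;                       /* current group; starts at root */
    for (int i = 0; i < nitems; i++) {
        char name[64];
        int varid;

        if (i % ITEMS_PER_GROUP == 0) { /* start a new group every 10 items */
            snprintf(name, sizeof name, "group_%d", i / ITEMS_PER_GROUP);
            CHECK(nc_def_grp(ncid, name, &grpid));
        }
        snprintf(name, sizeof name, "item_%d", i);
        CHECK(nc_def_var(grpid, name, NC_INT, 1, &dimid, &varid));
        if (chunked)
            CHECK(nc_def_var_chunking(grpid, varid, NC_CHUNKED, chunksize));
        CHECK(nc_put_var_int(grpid, varid, data));
    }
    CHECK(nc_close(ncid));
    return 0;
}

[Running this with increasing item counts, with and without the chunking flag, should reproduce the kind of time and memory growth reported above; timing can be taken with "time" and memory watched with "top", as in the original measurements.]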
Attachment: nc_crash.png (plot of file writing time vs. number of items)