Re: [netcdfgroup] HDF backend can't add metadata?

Hi Tom,

Version 4.1.1 has a bug that causes huge memory consumption; various
errors appear once the NetCDF API has allocated more than 3 GB. You can
watch the growth in the "top" output.
The fix is simple: downgrade to version 4.0.1 or use one of the 4.2 betas.
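
If you're not sure which library your binary actually picked up at run
time (easy to mix up when a $HOME install sits next to a system one),
a minimal check is to print the linked version; a quick sketch:

  #include <stdio.h>
  #include <netcdf.h>

  int main(void)
  {
      /* nc_inq_libvers() reports the NetCDF version actually linked
       * at run time, which may differ from the headers you compiled
       * against. */
      printf("linked NetCDF: %s\n", nc_inq_libvers());
      return 0;
  }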

Sergei

-----Original Message-----
From: netcdfgroup-bounces@xxxxxxxxxxxxxxxx
[mailto:netcdfgroup-bounces@xxxxxxxxxxxxxxxx] On Behalf Of tom fogal
Sent: 22 January 2011 22:59
To: netcdfgroup@xxxxxxxxxxxxxxxx
Subject: [netcdfgroup] HDF backend can't add metadata?


I'm having issues getting NetCDF to work with the HDF5 backend
("NetCDF4" files).  64bit offset files are working fine, even in
parallel [0].

I get the following error at nc_create_par:

  ** ERROR **: could not create mask-data output file: -105, Can't add
  HDF5 file metadata

Occasionally, some of my processes get a bit further, completing a few
def_dim calls and a def_var call; then I get:

  ** ERROR **: finishing definitions failed: HDF error (-101)

from those processes' nc_enddef calls.  Note that the leading messages
come from my software, not from nc_strerror.
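
For reference, the sequence is essentially the following (the file name
and sizes are placeholders; the CHECK macro just prints nc_strerror so
the raw library text is visible):

  #include <stdio.h>
  #include <mpi.h>
  #include <netcdf.h>   /* newer releases declare nc_create_par in netcdf_par.h */

  #define CHECK(err, what) do { \
          if ((err) != NC_NOERR) { \
              fprintf(stderr, "%s: %s (%d)\n", (what), \
                      nc_strerror(err), (err)); \
              MPI_Abort(MPI_COMM_WORLD, 1); \
          } \
      } while (0)

  int main(int argc, char *argv[])
  {
      int nc, dimid, varid, err;

      MPI_Init(&argc, &argv);

      err = nc_create_par("mask.nc", NC_NETCDF4 | NC_MPIIO,
                          MPI_COMM_WORLD, MPI_INFO_NULL, &nc);
      CHECK(err, "could not create mask-data output file"); /* -105 here */

      err = nc_def_dim(nc, "x", 64, &dimid);
      CHECK(err, "def_dim failed");
      err = nc_def_var(nc, "mask", NC_FLOAT, 1, &dimid, &varid);
      CHECK(err, "def_var failed");

      err = nc_enddef(nc);
      CHECK(err, "finishing definitions failed");            /* -101 here */

      err = nc_close(nc);
      CHECK(err, "close failed");
      MPI_Finalize();
      return 0;
  }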

I have tried with both the NC_MPIPOSIX and NC_MPIIO flags.  I've tried
with an info object created from an MPI_Info_create call, as well as
MPI_INFO_NULL.

My nc_create_par call is pretty simple; basically something like:

  nc_create_par(maskfile, NC_NETCDF4 | NC_MPIIO, MPI_COMM_WORLD,
                MPI_INFO_NULL, &nc);

Of course, as I mentioned, the flags and info object change a bit
depending on what I'm testing.
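
The info-object variant, for concreteness, looks roughly like this (the
"cb_nodes" hint is purely illustrative; an info object and MPI_INFO_NULL
fail the same way for me):

  #include <mpi.h>
  #include <netcdf.h>

  /* Create the file with an explicit info object instead of
   * MPI_INFO_NULL.  Swapping NC_MPIIO for NC_MPIPOSIX is the other
   * variation I have tried. */
  static int create_with_info(const char *path, int *ncid)
  {
      MPI_Info info;
      int err;

      MPI_Info_create(&info);
      MPI_Info_set(info, "cb_nodes", "4");   /* example hint only */
      err = nc_create_par(path, NC_NETCDF4 | NC_MPIIO,
                          MPI_COMM_WORLD, info, ncid);
      MPI_Info_free(&info);
      return err;
  }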

The NetCDF installation on the cluster I'm using lacks the parallel
functions, so I've built HDF5 and NetCDF in my $HOME.  Both builds
seemed to go through fine.  The configure lines I used are at the end of
this email [1,2].  I'm using OpenMPI 1.3.3 for MPI.  The stack is built
with the Intel v11.1 compiler.  HDF5 is version 1.8.5-patch1 and NetCDF
is v4.1.1.

What could be going wrong?

-tom

[0] Though I'm seeing incredibly poor scalability, which is part of what
made me want to look at the NetCDF4 backend.  I observe *increasing*
runtime for a strong scalability study.  Is this consistent with others'
experiences?

[1] HDF5 configure line:

  ./configure CC=mpicc --prefix=${HOME}/sw --enable-parallel \
    --disable-fortran

[2] NetCDF configure line:

  ./configure CC=mpicc --prefix=${HOME}/sw --with-hdf5=${HOME}/sw \
    --enable-c-only --enable-netcdf4 --disable-dap --enable-shared \
    --with-pic

