Re: overcoming netcdf3 limits



Greg Sjaardema wrote:
As a quick answer to the question, we (Sandia Labs) use netcdf
underneath our exodusII
file format for storing finite element results data.

If the mesh contains #nodes nodes and #elements elements, then there
will be a dataset of size #elements * 8 * 4 bytes (assuming a hex
element with 8 nodes and 4 bytes per int) to store the nodal
connectivity of the hex elements in a group of elements (an "element
block"). With the 4 GiB per-variable limit, this caps us at ~134
million elements per element block (using CDF-2), which is large, but
not enough to give us more than a few months of breathing room. Using
the CDF-1 format, we top out at about 30 million elements or fewer, a
limit we hit routinely.
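A quick back-of-the-envelope check of those numbers (a sketch in Python; the ~4 GiB CDF-2 and ~2 GiB CDF-1 per-variable caps are the documented format limits, but the exact CDF-1 headroom depends on the other variables sharing the file, which is why the practical figure is well below the ceiling):

```python
# Connectivity dataset size: #elements * 8 nodes/element * 4 bytes/int
BYTES_PER_ELEMENT = 8 * 4  # hex element: 8 node ids, 4-byte ints

CDF2_VAR_LIMIT = 2**32  # CDF-2: ~4 GiB per fixed-size variable
CDF1_VAR_LIMIT = 2**31  # CDF-1: ~2 GiB per variable (upper bound)

max_elements_cdf2 = CDF2_VAR_LIMIT // BYTES_PER_ELEMENT
max_elements_cdf1 = CDF1_VAR_LIMIT // BYTES_PER_ELEMENT

print(max_elements_cdf2)  # 134217728 -- the ~134 million figure above
print(max_elements_cdf1)  # 67108864 -- a ceiling; in practice ~30 million
                          # once coordinates etc. share the file
```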

I'm not sure I understand the problem yet:

In the file you sent me, you use time as the record variable.

Each record variable must be less than 2^32 bytes per record, not counting the 
record dimension. So you can have about 2^29 elements, assuming each element is 
8 bytes. And you can have 2^32 time steps.

The non-record variables are dimensioned (num_elements, num_nodes). The number 
of nodes appears to be 8, and these are ints, so each variable is 32 bytes * 
num_elements, which allows a maximum of 2^27 elements = 134 million elements. 
Currently the largest you have is 33,000. Do you need more than 2^27?
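The two limits in the paragraphs above can be checked directly (a minimal arithmetic sketch of the CDF-2 cap as described here):

```python
# CDF-2: the 2^32-byte cap applies to each record's worth of a record
# variable -- the record (time) dimension itself does not count.
RECORD_CAP = 2**32

# 8-byte values: up to 2^29 elements per record
max_per_record = RECORD_CAP // 8
assert max_per_record == 2**29

# Connectivity: (num_elements, 8 nodes) of 4-byte ints -> 32 bytes/element
max_elements = RECORD_CAP // (8 * 4)
print(max_elements)  # 134217728 == 2**27, i.e. ~134 million elements
```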

I'm not sure what you mean by "a few months breathing room".


There is a PDF file at
http://endo.sandia.gov/SEACAS/Documentation/exodusII.pdf that shows
(starting at page 177) how we map exodusII onto netCDF. There have been
some changes since the report was written to reduce some of the dataset
sizes. For example, we now split the "coord" dataset into 3 separate
datasets, and we also split vals_nod_var into a single dataset per
nodal variable.
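As an illustration of why that split helps (a hedged sketch; the per-variable cap is the documented CDF-2 limit, but the exact variable names and shapes after the split are not spelled out here, so the three-way coordinate split below is an assumption based on the description above):

```python
CDF2_VAR_LIMIT = 2**32  # ~4 GiB per variable in CDF-2
DOUBLE = 8              # bytes per coordinate value

# One combined coord dataset holding x, y, z for every node:
max_nodes_combined = CDF2_VAR_LIMIT // (3 * DOUBLE)

# Three separate per-axis datasets (names assumed), each (num_nodes,):
# the 4 GiB cap now applies to each variable independently.
max_nodes_split = CDF2_VAR_LIMIT // DOUBLE

print(max_nodes_split // max_nodes_combined)  # 3 -- 3x more nodes fit
```

The same reasoning applies to splitting vals_nod_var into one dataset per nodal variable: each resulting variable gets its own 4 GiB budget.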

--Greg


John Caron wrote:
Hi Rob:

Could you give us use case(s) where the limits are being hit?
I'd be interested in actual dimension sizes, number of variables,
whether you are using a record dimension, etc.

Robert Latham wrote:
Hi

Over in Parallel-NetCDF land we're running into users who find even
the CDF-2 file format limitations, well, limiting.
http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/NetCDF-64-bit-Offset-Format-Limitations.html


http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#Large%20File%20Support10


If we worked up a CDF-3 file format for parallel-netcdf (off the top
of my head, maybe a 64-bit integer instead of an unsigned 32-bit
integer could be used to describe variables), would the serial netcdf
folks be interested, or are you looking to the new netcdf-4 format to
take care of these limits?

Thanks
==rob

==============================================================================

To unsubscribe netcdfgroup, visit:
http://www.unidata.ucar.edu/mailing-list-delete-form.html
==============================================================================





