4 GiB variable size limit

Hi Everyone,

I'm jumping into the discussion late here, but coming from the perspective of trying to develop an I/O strategy that will work at the petascale, the 4 GiB variable size limitation is a major barrier. Already a 1000^3 grid variable in double precision (8 GB) cannot fit into a single netCDF variable. Users at NERSC and other supercomputing centers regularly run problems of this size or larger, and I/O demands are only going to grow. We don't believe chopping data structures into pieces is a good long-term solution or strategy: there is no natural way to break up the data, and such chunking eliminates the elegance, ease, and purpose of a parallel I/O library. Beyond the direct code changes, analysis and visualization tools become more complicated, since data files from the same simulation but of different grid sizes would not contain the same number of variables. Restarting a simulation from a checkpoint file on a different number of processors would also become more convoluted.
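To make the limit concrete, here is a minimal sketch using the Parallel-NetCDF C API (an illustration, not code from our applications; the file and variable names are placeholders). It defines two 1000^3 double variables, 8 GB each, in a CDF-2 (64-bit offset) file. Because the classic formats permit only the last fixed-size variable to exceed roughly 4 GiB, the library should reject this, typically with NC_EVARSIZE at ncmpi_enddef:

/* A minimal sketch (illustrative only): hitting the classic-format
 * per-variable size cap through the Parallel-NetCDF C API. */
#include <stdio.h>
#include <mpi.h>
#include <pnetcdf.h>

/* print a diagnostic whenever a PnetCDF call does not return NC_NOERR */
#define CHECK(err, msg) do { \
    if ((err) != NC_NOERR) \
        fprintf(stderr, "%s: %s\n", (msg), ncmpi_strerror(err)); \
} while (0)

int main(int argc, char **argv)
{
    int err, ncid, dimids[3], varid;

    MPI_Init(&argc, &argv);

    /* create a CDF-2 (64-bit offset) file; "big.nc" is a placeholder name */
    err = ncmpi_create(MPI_COMM_WORLD, "big.nc",
                       NC_CLOBBER | NC_64BIT_OFFSET, MPI_INFO_NULL, &ncid);
    CHECK(err, "ncmpi_create");

    err = ncmpi_def_dim(ncid, "x", 1000, &dimids[0]); CHECK(err, "def_dim x");
    err = ncmpi_def_dim(ncid, "y", 1000, &dimids[1]); CHECK(err, "def_dim y");
    err = ncmpi_def_dim(ncid, "z", 1000, &dimids[2]); CHECK(err, "def_dim z");

    /* each variable needs 1000^3 * 8 bytes = 8,000,000,000 bytes; the
     * classic formats allow only the last fixed-size variable to exceed
     * roughly 4 GiB, so defining two of them violates the format */
    err = ncmpi_def_var(ncid, "density", NC_DOUBLE, 3, dimids, &varid);
    CHECK(err, "def_var density");
    err = ncmpi_def_var(ncid, "pressure", NC_DOUBLE, 3, dimids, &varid);
    CHECK(err, "def_var pressure");

    /* the format-constraint check typically fires here: expect NC_EVARSIZE */
    err = ncmpi_enddef(ncid);
    CHECK(err, "ncmpi_enddef");

    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}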

The view from NERSC is that if Parallel-NetCDF is to be a viable option for users running large parallel simulations, this limitation must be lifted...

Katie Antypas
NERSC User Services Group
Lawrence Berkeley National Lab


