Re: Performance problem with large files

>all the data for t=1, then for t=2 etc. Your description of the
>array indices means that each subarray is scattered through the
>entire file and requires accessing almost every file block. Things
>should be a lot better if you write subarrays of 8000 x 3 x 1 or if
>you can't do this, rearrange the file so that the 8000 dimension is
>unlimited rather than the 16000 dimension.
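For concreteness, here is one possible reading of the quoted advice as a
minimal sketch against the netCDF-3 C interface: time is the record
(unlimited) dimension, and each write covers exactly one record.  The
file, dimension, and variable names are my own inventions, the sizes are
the ones mentioned in this thread, and error checking is omitted:

    #include <netcdf.h>

    int main(void) {
        /* "grid.nc", "time", "point", "comp", "grid" are illustrative
         * names, not the original poster's. */
        int ncid, dimids[3], varid;
        size_t start[3] = {0, 0, 0};
        size_t count[3] = {1, 8000, 3};   /* one record: 1 x 8000 x 3 */
        static float frame[8000][3];      /* one time step, filled elsewhere */
        size_t t;

        nc_create("grid.nc", NC_CLOBBER, &ncid);
        nc_def_dim(ncid, "time", NC_UNLIMITED, &dimids[0]);  /* record dim */
        nc_def_dim(ncid, "point", 8000, &dimids[1]);
        nc_def_dim(ncid, "comp", 3, &dimids[2]);
        nc_def_var(ncid, "grid", NC_FLOAT, 3, dimids, &varid);
        nc_enddef(ncid);

        /* The record dimension varies slowest, so each call below
         * writes one contiguous block of the file rather than values
         * scattered across every record. */
        for (t = 0; t < 16000; t++) {
            start[0] = t;
            nc_put_vara_float(ncid, varid, start, count, &frame[0][0]);
        }
        nc_close(ncid);
        return 0;
    }

Because each record lives in one contiguous stretch of the file, each
write touches a few file blocks instead of almost all of them.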

  This is an important point.  We used to store large grids with the
unlimited dimension as one of the major axes, and netCDF's performance
varied from good to awful.  After poking around, we found that making
array dimensions fixed (rather than unlimited) wherever possible sped
up grid writing dramatically.
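As a sketch of the kind of change we made (again with made-up names,
and assuming the number of time steps is known when the file is
created):

    #include <netcdf.h>

    int main(void) {
        /* All names here are illustrative. */
        int ncid, dimids[3], varid;

        nc_create("grid_fixed.nc", NC_CLOBBER, &ncid);
        nc_def_dim(ncid, "time", 16000, &dimids[0]);  /* fixed, not NC_UNLIMITED */
        nc_def_dim(ncid, "point", 8000, &dimids[1]);
        nc_def_dim(ncid, "comp", 3, &dimids[2]);
        nc_def_var(ncid, "grid", NC_FLOAT, 3, dimids, &varid);
        nc_enddef(ncid);
        /* With every dimension fixed, the variable occupies one
         * contiguous region of the file and can be written in large
         * sequential slabs with nc_put_vara_float(). */
        nc_close(ncid);
        return 0;
    }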

  I think the netCDF unlimited dimension is probably best thought of as
a way to store successive "frames" of data, rather than as a way to
create a truly unlimited 'dimension' that can be sliced in any
direction.
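To make the "frames" reading concrete, here is a sketch of pulling back
a single frame along the record dimension, using the same hypothetical
file as in the first sketch above:

    #include <netcdf.h>

    int main(void) {
        int ncid, varid;
        size_t start[3] = {42, 0, 0};     /* frame t = 42 */
        size_t count[3] = {1, 8000, 3};   /* exactly one record */
        static float frame[8000][3];

        nc_open("grid.nc", NC_NOWRITE, &ncid);
        nc_inq_varid(ncid, "grid", &varid);
        nc_get_vara_float(ncid, varid, start, count, &frame[0][0]);
        nc_close(ncid);
        return 0;
    }

Frame-at-a-time access like this stays fast; it is slicing across the
record dimension that falls apart.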

  What do the designers think about this?

bcl
blincoln@xxxxxxxxxx
