performance on a PC

Hi Bill,

> I've compiled the netCDF libraries, netcdf.lib and xdr.lib (version
> 2.0, according to the user's guide), on a 33 MHz 486 with a 300 MB disk
> drive (DOS 5).  The compiler is MSC 5.1 (Borland C had some problems,
> FYI).  I converted an 80 MB file into netCDF format - this took about 2
> hours.  For the sake of comparison I just wrote the data out using the
> same array structure, data[1288][8], and it took ten minutes.  I suppose
> we take a hit on performance due to the XDR overhead, or am I just lost
> in the ozone?

The performance of netCDF is highly dependent on the type of data being
written, as well as how you write it (for example 1288*8 calls to
ncvarput1() can be expected to be significantly slower than one call to
ncvarput()).  Byte arrays require almost no XDR overhead, whereas floating
point arrays require a call to an XDR routine for each value.  As discussed
in the chapter in the User's Guide on performance and file structure, for
optimum performance it is also important to avoid calls to ncredef(), to use
fill values correctly to avoid writing them once and then overwriting them,
and to write your data in something close to the order it will be stored on
disk.

Considerable optimization is possible if you substitute a platform-specific
I/O layer for the portable stdio layer under XDR.  We are planning to
release a UNIX-specific I/O layer replacing stdio for netCDF that appears to
provide a 40-50% improvement on UNIX systems.

Our portable implementation of netCDF is not intended to compete with
application-specific and platform-specific binary I/O; there is a cost for
the benefits of application- and platform-independent data.

> an ancillary (and naive) query - how many people use the netcdf format?

We don't know, but the netcdfgroup mailing list currently has over 200
addresses.

--Russ

