On 10/02/2014 01:24 AM, Samrat Rao wrote:
> Thanks for your replies.
> I estimate that I will be requiring approx. 4000 processors and a total
> grid resolution of 2.5 billion for my F90 code. So I need to
> think/understand which is better - parallel netCDF or the 'normal' one.
There are a few specific nifty features in pnetcdf that can let you get
really good performance, but 'normal' netCDF is a fine choice, too.
> Right now I do not know how to use parallel-netCDF.
It's almost as simple as replacing every 'nf' call with 'nfmpi', but you
will be just fine if you stick with UCAR netCDF-4.
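To make the rename concrete, here is a minimal sketch of the same create
call in both libraries (the file name and status/ncid variables are
hypothetical; the pnetcdf call additionally takes an MPI communicator
and an MPI info object):

```fortran
! Serial netCDF, Fortran 77-style API:
status = nf_create('fields.nc', nf_clobber, ncid)

! Parallel-NetCDF equivalent: same pattern, 'nfmpi' prefix, plus an
! MPI communicator and info object:
status = nfmpi_create(MPI_COMM_WORLD, 'fields.nc', nf_clobber, &
                      MPI_INFO_NULL, ncid)
```

The rest of the API follows the same pattern: define-mode and data-mode
calls keep their names and argument order, with the 'nfmpi' prefix and
collective/independent variants added.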
> Secondly, I hope that the netCDF-4 files created by either parallel
> netCDF or the 'normal' one are mutually compatible. For analysis I will
> be extracting data using the usual netCDF library, so if I use
> parallel-netCDF there should be no inter-compatibility issues.
For truly large variables, parallel-netcdf introduced, with some
consultation from the UCAR folks, a 'CDF-5' file format. You have to
request it explicitly, and then in that one case you would have a
pnetcdf file that netcdf tools would not understand.
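As noted, CDF-5 must be requested explicitly; in the pnetcdf Fortran API
that is done at create time with the NF_64BIT_DATA flag. A sketch
(file name is hypothetical):

```fortran
! Request the CDF-5 (64-bit data) format explicitly. Without this
! flag, pnetcdf writes a classic-format file that serial netCDF
! tools can read as well.
status = nfmpi_create(MPI_COMM_WORLD, 'big.nc', &
                      ior(nf_clobber, nf_64bit_data), &
                      MPI_INFO_NULL, ncid)
```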
In all other cases, we work hard to keep pnetcdf and "classic" netCDF
compatible. UCAR netCDF has the option of an HDF5-based back end (in
fact, it's not optional if you want parallel I/O with netCDF-4), and
that HDF5-based format is not compatible with parallel-netcdf. By now,
your analysis tools surely are updated to understand the new HDF5-based
back end?
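For comparison, parallel I/O through UCAR netCDF-4 looks like this in
the Fortran 90 API; a sketch, assuming a netCDF-Fortran library built
against parallel HDF5 (the file name is hypothetical):

```fortran
use netcdf
use mpi
! NF90_NETCDF4 selects the HDF5-based file format; NF90_MPIIO enables
! parallel access through MPI-IO (requires a parallel HDF5 build).
status = nf90_create('fields.nc', ior(NF90_NETCDF4, NF90_MPIIO), &
                     ncid, comm=MPI_COMM_WORLD, info=MPI_INFO_NULL)
```

A file created this way is an ordinary netCDF-4/HDF5 file, so any
serial analysis tool that understands netCDF-4 can read it.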
I suppose it's possible you've got some 6-year-old analysis tool that
does not understand NetCDF-4's HDF5-based file format. Parallel-netcdf
would allow you to simulate with parallel I/O and produce a classic
netCDF file. But I would be shocked and a little bit angry if that was
actually a good reason to use parallel-netcdf in 2014.
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA