NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
Hi Rob,

I did some parallel NetCDF-4 performance tests with the ROMS model almost a year ago, using a NetCDF-4 alpha release. At that time, I am pretty sure NetCDF-4 could successfully issue collective I/O calls (MPI_File_write_all after MPI_File_set_view) at the HDF5 layer. But I remember there were some parameters set incorrectly inside NetCDF-4 that I had to change so that the collective I/O calls would be passed down to HDF5. I think Ed may have already fixed that, but I may be wrong.
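(For readers of the archive: below is a minimal sketch of what the collective request looks like at the HDF5 layer once NetCDF-4 routes it down. It is an illustrative standalone HDF5 program, not NetCDF-4 internals; the file name, dataset name, and sizes are made up, and error checking is omitted.)

    #include <stdio.h>
    #include <mpi.h>
    #include <hdf5.h>

    #define NX 16   /* elements written per process; illustrative size */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Open the file with the MPI-IO file driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("collective_test.h5", H5F_ACC_TRUNC,
                               H5P_DEFAULT, fapl);
        H5Pclose(fapl);

        /* One 1-D dataset holding NX elements per process. */
        hsize_t dims[1] = { (hsize_t)nprocs * NX };
        hid_t filespace = H5Screate_simple(1, dims, NULL);
        hid_t dset = H5Dcreate(file, "v", H5T_NATIVE_INT, filespace,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Each rank selects its own contiguous slab of the dataset. */
        hsize_t start[1] = { (hsize_t)rank * NX };
        hsize_t count[1] = { NX };
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(1, count, NULL);

        /* The property that selects collective I/O: with H5FD_MPIO_COLLECTIVE
         * the write should reach MPI_File_write_at_all (or similar); with
         * H5FD_MPIO_INDEPENDENT it stays on MPI_File_write_at. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        int buf[NX];
        for (int i = 0; i < NX; i++) buf[i] = rank;
        H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, buf);

        H5Pclose(dxpl);
        H5Sclose(memspace);
        H5Sclose(filespace);
        H5Dclose(dset);
        H5Fclose(file);
        MPI_Finalize();
        return 0;
    }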
Another possibility is that HDF5 "magically" figures out that your case is not suitable, or not possible, for collective I/O, and reroutes it to independent I/O calls instead. To verify that, we would need your program, plus information about the platform and the MPI-IO library and compiler.
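(Archive note: later HDF5 releases, well after this thread, added a direct way to check for this fallback. H5Pget_mpio_actual_io_mode, available from HDF5 1.8.8 on, reports what a completed H5Dwrite actually did. The fragment below continues the sketch above and assumes its dxpl handle; at the time of this exchange, tracing with a tool such as Jumpshot was the practical way to tell.)

    /* After the H5Dwrite that used `dxpl`, ask HDF5 what it actually did. */
    H5D_mpio_actual_io_mode_t mode;
    H5Pget_mpio_actual_io_mode(dxpl, &mode);
    if (mode == H5D_MPIO_NO_COLLECTIVE)
        printf("rank %d: independent I/O despite the collective request\n", rank);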
Regards,
Kent

At 05:06 PM 9/17/2007, Robert Latham wrote:
Hi

I'm continuing to look at NetCDF-4 and have found that it's not actually doing collective I/O. Even though it's the default, I've called nc_var_par_access with NC_COLLECTIVE just to be sure. I can verify in Jumpshot that all (in this case) 4 processors are calling independent I/O routines (MPI_File_write_at), even though I was expecting to see collective I/O (MPI_File_write_all or MPI_File_write_at_all). This appears to be the case for both writes and reads.

In case it matters, I'm using an exceedingly small dataset.

==rob

--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
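(For reference, a minimal parallel NetCDF-4 reproducer along the lines Rob describes: it requests NC_COLLECTIVE explicitly and writes a tiny dataset from each rank. The file and variable names and sizes are invented, error checking is omitted, and the exact creation flags and headers vary between NetCDF-4 releases.)

    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>   /* nc_create_par, nc_var_par_access, NC_COLLECTIVE */

    #define NX 16   /* elements per process; small on purpose, like the test described */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int ncid, dimid, varid;
        /* Create a NetCDF-4/HDF5 file opened for parallel access over MPI-IO.
         * NC_MPIIO is how releases of this era selected MPI-IO; newer ones infer it. */
        nc_create_par("par_test.nc", NC_NETCDF4 | NC_MPIIO,
                      MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);

        nc_def_dim(ncid, "x", (size_t)nprocs * NX, &dimid);
        nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid);
        nc_enddef(ncid);

        /* Request collective access explicitly, as in the test described above. */
        nc_var_par_access(ncid, varid, NC_COLLECTIVE);

        int buf[NX];
        for (int i = 0; i < NX; i++) buf[i] = rank;

        size_t start[1] = { (size_t)rank * NX };
        size_t count[1] = { NX };
        /* With collective access working, this write should show up in a trace as
         * MPI_File_write_at_all (or MPI_File_write_all), not MPI_File_write_at. */
        nc_put_vara_int(ncid, varid, start, count, buf);

        nc_close(ncid);
        MPI_Finalize();
        return 0;
    }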