On 01/27/2012 01:22 PM, Rob Latham wrote:
On Wed, Jan 25, 2012 at 10:06:59PM -0900, Constantine Khroulev wrote:
Hello NetCDF developers,
My apologies to list subscribers not interested in these (very)
technical details.
I'm interested! I hope you send more of these kinds of reports.
When the collective parallel access mode is selected, all processors in
a communicator have to call H5Dread() (or H5Dwrite()) the same number
of times.
In nc_put_varm_*, NetCDF breaks data into contiguous segments that can
be written one at a time (see NCDEFAULT_get_varm(...) in
libdispatch/var.c, lines 479 and on). In some cases the number of
these segments varies from one processor to the next.
As a result, as soon as one of the processors in a communicator is done
writing its data the program locks up, because now only a subset of the
processors in that communicator are still calling H5Dwrite(). (Unless all
processors have the same number of "data segments" to write, that is.)
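To make the failure mode concrete, here is a minimal sketch of the mismatch
(this is not code from NetCDF; the file name, the 1-D dataset, and the
per-rank segment counts are invented for illustration). Run on two ranks,
rank 0 issues two collective H5Dwrite() calls while rank 1 issues only one,
so rank 0's second call never completes:

/* mpicc collective_mismatch.c -lhdf5 ; mpiexec -n 2 ./a.out
 * Illustration only: rank 0 makes 2 collective H5Dwrite() calls, rank 1
 * makes 1, so the second collective call on rank 0 has no partner and
 * the program hangs. */
#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Open one file from all ranks with the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("mismatch.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    hsize_t dims[1] = {8};
    hid_t fspace = H5Screate_simple(1, dims, NULL);
    hid_t dset = H5Dcreate2(file, "v", H5T_NATIVE_INT, fspace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Ask for collective transfers, as NetCDF-4 does in collective mode. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    /* Pretend the varm decomposition produced 2 contiguous segments on
     * rank 0 and only 1 on rank 1. */
    int nseg = (rank == 0) ? 2 : 1;
    int value = rank;
    for (int i = 0; i < nseg; i++) {
        hsize_t start[1] = {(hsize_t)(2 * rank + i)}, count[1] = {1};
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t mspace = H5Screate_simple(1, count, NULL);
        /* Collective call: all ranks must reach it the same number of
         * times; the mismatch above is what causes the lock-up. */
        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, &value);
        H5Sclose(mspace);
    }

    H5Dclose(dset); H5Sclose(fspace); H5Pclose(dxpl);
    H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}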
Oh, that's definitely a bug. netcdf4 should call something like
MPI_Allreduce with MPI_MAX to figure out how many "rounds" of I/O will
be done (this is what we do inside ROMIO, for a slightly different
reason).
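Here is a rough sketch of that padding idea, assuming each rank already
knows its local segment list (the helper name, the 1-D layout, and
H5T_NATIVE_INT are my own choices for illustration, not NetCDF's code):
agree on the maximum segment count with MPI_Allreduce/MPI_MAX, then let
ranks that run out of real segments join the remaining collective calls
with an empty selection.

/* Sketch of the proposed fix (hypothetical helper, not actual NetCDF code):
 * pad every rank up to the same number of collective write rounds. */
#include <hdf5.h>
#include <mpi.h>

static herr_t write_segments_collectively(hid_t dset, hid_t fspace, hid_t dxpl,
                                           int nseg_local,
                                           const hsize_t *starts,
                                           const hsize_t *counts,
                                           const int *const *seg_bufs,
                                           MPI_Comm comm)
{
    int nseg_max, dummy = 0;

    /* Every rank learns the largest per-rank segment count ("rounds"). */
    MPI_Allreduce(&nseg_local, &nseg_max, 1, MPI_INT, MPI_MAX, comm);

    for (int i = 0; i < nseg_max; i++) {
        hid_t mspace;
        if (i < nseg_local) {
            /* A real segment: select its hyperslab in the file space. */
            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &starts[i], NULL,
                                &counts[i], NULL);
            mspace = H5Screate_simple(1, &counts[i], NULL);
        } else {
            /* Padding round: take part in the collective call but write
             * nothing, by selecting zero elements on both sides. */
            hsize_t one = 1;
            H5Sselect_none(fspace);
            mspace = H5Screate_simple(1, &one, NULL);
            H5Sselect_none(mspace);
        }
        herr_t status = H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl,
                                 (i < nseg_local) ? seg_bufs[i] : &dummy);
        H5Sclose(mspace);
        if (status < 0)
            return status;
    }
    return 0;
}

Padding rounds with an empty selection still count as participation, so
every rank reaches nseg_max H5Dwrite() calls and nobody is left waiting.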
But here's the thing: I'm not sure this is worth fixing. The only
reason I can think of to use collective I/O is better performance, and
in that case avoiding sub-sampled and mapped reads and writes is a good
idea anyway.
Well, if varm and vars are the natural way to access the data, then
the library should do what it can to do that efficiently. The fix
appears to be straightforward. Collective I/O has a lot of advantages
on some platforms: it will automatically select a subset of processors
or automatically construct a file access pattern best suited to the
underlying file system.
==rob
Was this ever fixed?
--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA, Boulder Office FAX: 303-415-9702
3380 Mitchell Lane orion@xxxxxxxx
Boulder, CO 80301 http://www.nwra.com