NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
"Robert E. McGrath" <mcgrath@xxxxxxxxxxxxx> writes:

> In general, any update to the file in parallel needs to be done
> thoughtfully, and generally should be done in batches rather than
> small increments.
>
> Extending by single records is inefficient in all cases, but
> very costly in parallel, since it updates a global state.

Does that mean parallel programmers generally don't use unlimited
dimensions? Or do they use them, but batch their extends to avoid this
problem? Or do they use them, but do something else to prevent it?

Thanks for helping me understand this!

Ed

--
Ed Hartnett  -- ed@xxxxxxxxxxxxxxxx