On Tue, Jan 15, 2013 at 03:55:44PM -0500, Ben wrote:
> Does anyone have any tricks I haven't thought of, or has anyone seen
> the same thing with parallel I/O performance? There really aren't
> that many things one can play with other than setting the MPI hints
> or changing the access type for variables (collective or
> independent). So far I have been using Intel 11 and Intel MPI 3 on
> the GPFS file system, but I plan to play with this on newer Intel
> versions, different MPI stacks, and on Lustre instead of GPFS.
You are probably seeing an interaction between GPFS and MPI-IO. Intel
MPI 3 isn't, to my knowledge, particularly tuned for GPFS.
Fortunately, the one tuning step that does help on GPFS is to align
"file domains": set the "striping_unit" hint to your GPFS file
system's block size. (You can get that with the 'stat -f' command-line
tool; hopefully it's something like 4 MiB.)
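For example, on a system with GNU coreutils, something like

  stat -f -c %s /path/to/your/gpfs/mount

should print just the block size in bytes (the path is a placeholder,
of course).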
So, set "striping_unit" to "4194304" and let me know if that helps.
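To be concrete, the hint goes on the info object you pass to
MPI_File_open (or MPI_File_set_view). Here's a minimal sketch,
assuming a 4 MiB block size; the file name is made up and error
checking is omitted:

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_File fh;
      MPI_Info info;

      MPI_Init(&argc, &argv);

      /* align MPI-IO file domains to the GPFS block size */
      MPI_Info_create(&info);
      MPI_Info_set(info, "striping_unit", "4194304");

      /* placeholder path -- point it at your GPFS file system */
      MPI_File_open(MPI_COMM_WORLD, "/gpfs/scratch/testfile",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

      /* ... your collective writes go here ... */

      MPI_File_close(&fh);
      MPI_Info_free(&info);
      MPI_Finalize();
      return 0;
  }

You can double-check that the MPI-IO layer actually accepted the hint
by calling MPI_File_get_info on the open file and inspecting the
returned info object.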
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA