One can get hold of some pretty impressive horsepower these days: high CPU core
counts and massive system RAM.
I have experimental access to one such machine and want to test batch submission
of forecast maps to see just what I can get away with. Could I get 20 gdplot2
jobs, one per GFS forecast hour, running on 20 cores simultaneously?
In other words, what’s a good way to submit gdplot2 image production jobs in the
background?
E.g., I currently plot GFS 250-mb heights, winds, and isotachs by running each
forecast hour successively with:
————
#!/bin/csh
# restore file gfs.215.nts has appropriate GDFILE specification
foreach fcst ( `seq -w 000 006 120` )
set outfile = 250wnd_gfs_f${fcst}.gif
gdplot2<<END_INPUT
restore gfs.215.nts
GDATTIM = f${fcst}
\$MAPFIL = TPPOWO.GSF
GLEVEL = 250
GVCORD = pres
GDPFUN = knts(mag(wnd)) ! hght ! kntv(wnd)
CINT = ! 120 !
TITLE = 31/-3/GFS FORECAST INIT ^ ! 31/-2/${fcst}-HR FCST VALID ?~ ! 31/-1/250-HPA HEIGHTS, WINDS, ISOTACHS (KT)
DEVICE = GIF|$outfile|1880;1010
FINT = 70;90;110;130;150;170
FLINE = 0;5;10;17;13;15;30
TYPE = f ! c ! b
r
exit
END_INPUT
gpend
end
————
How could I modify this so that each forecast hour is submitted as its own job
and they all run simultaneously?
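One idea I've had is to split each hour into a small worker script and fan it
out with xargs -P. The worker name (plot_hour.sh) and the echo stand-in for the
gdplot2 heredoc are just placeholders so I could dry-run the fan-out on a box
without GEMPAK; is this a sane pattern?

```shell
#!/bin/sh
# Dry-run sketch: generate a tiny worker script, then let xargs fan it out.
# plot_hour.sh is hypothetical; in real use its body would be the gdplot2
# heredoc from the script above, with the forecast hour passed in as $1.
cat > plot_hour.sh <<'EOF'
#!/bin/sh
fcst=$1
# real version: the gdplot2 << ... heredoc for hour ${fcst} goes here
echo "plotted f${fcst}"
EOF
chmod +x plot_hour.sh

# 21 forecast hours (000..120 by 6), at most 20 workers running at once
seq -w 000 006 120 | xargs -n 1 -P 20 ./plot_hour.sh > plotted.log
```

xargs -n 1 -P 20 hands one forecast hour to each worker invocation and keeps
20 of them running until the list is exhausted, which seems like the least
amount of new machinery.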
And I’m not averse to bash shell. If it’s much easier with bash, I’ll take any
suggestions.
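For instance, here's the kind of bash rewrite I had in mind: background each
heredoc job with & and throttle at 20 using wait -n (bash >= 4.3). The fallback
to cat and the per-hour log files are my additions so the control flow can be
dry-run without GEMPAK installed, and I'm assuming simultaneous gdplot2
processes writing different GIFs don't trip over each other.

```shell
#!/bin/bash
# Sketch: run each forecast hour's gdplot2 job in the background,
# never more than MAXJOBS at once.
# Falling back to `cat` is only so the loop can be exercised without GEMPAK.
GDPLOT2=$(command -v gdplot2 || echo cat)
MAXJOBS=20

for fcst in $(seq -w 000 006 120); do
  # throttle: if MAXJOBS jobs are already running, wait for one to finish
  while (( $(jobs -rp | wc -l) >= MAXJOBS )); do
    wait -n    # bash >= 4.3
  done

  outfile=250wnd_gfs_f${fcst}.gif
  # a per-hour log keeps the parallel jobs' stdout from interleaving
  "$GDPLOT2" > gdplot2_f${fcst}.log 2>&1 <<END_INPUT &
restore gfs.215.nts
GDATTIM = f${fcst}
\$MAPFIL = TPPOWO.GSF
GLEVEL = 250
GVCORD = pres
GDPFUN = knts(mag(wnd)) ! hght ! kntv(wnd)
CINT = ! 120 !
TITLE = 31/-3/GFS FORECAST INIT ^ ! 31/-2/${fcst}-HR FCST VALID ?~ ! 31/-1/250-HPA HEIGHTS, WINDS, ISOTACHS (KT)
DEVICE = GIF|$outfile|1880;1010
FINT = 70;90;110;130;150;170
FLINE = 0;5;10;17;13;15;30
TYPE = f ! c ! b
r
exit
END_INPUT
done

wait    # block until every hour is done
if command -v gpend > /dev/null; then gpend; fi
```

The heredoc is unquoted so ${fcst} and $outfile expand per iteration, while
\$MAPFIL stays literal for gdplot2, same as in the csh version. Is this roughly
the right shape, or is there a cleaner way?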
Neil