
20021022: data ingestion/decoding/filing/scouring at STC (cont.)



>From: "Anderson, Alan C. " <address@hidden>
>Organization: St. Cloud State
>Keywords: 200209232129.g8NLTo103996 McIDAS-X telnet ssh

Hi Alan,

re: how students are using McIDAS
>I don't think we use the Function Key menu much at all.  Most all
>the users I see are using MCGUI,  with some command line use by 
>faculty.

OK.  This is helpful.

>In any case, please make changes as you see best and if 
>this means the Function Key menu no longer works, that is ok.

OK, the green light :-)

>the information you need is as follows  
>               terminal name  cumulus.stcloudstate.edu
>                          user  student
>                           pw   xxxxx

>Again, if what needs to be done would be simplified by our 
>upgrading to ver 2002 (which we plan to do in a month or two
>anyway) then don't bother now.  We have already increased the
>no. of days of data for the other types, and the images can
>stay as they are for a bit longer.

Yesterday afternoon, I did the following to upgrade things on waldo:

o installed LDM 5.2.1

o installed ldm-mcidas-7.8.0 decoders

o added decoding of Unidata-Wisconsin imagery (IDD feed type UNIWISC,
  also known as MCIDAS) into the directory hierarchy that is needed/desired
  by/for GEMPAK (a sample pqact.conf action is sketched below):

  /usr/local/ldm/data/gempak/images/sat/...

  In reality, this is:

  /var/data/ldm/gempak/images/sat/...
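
  For reference, the pqact.conf action that does this filing is patterned
  after the example distributed with ldm-mcidas, and looks something like
  the following (the exact pattern groups and file naming on waldo may
  differ slightly):

  # file UNIWISC imagery into the GEMPAK satellite hierarchy (illustrative)
  UNIWISC ^pnga2area Q. (..) (.*) (.*) (.*) (.*) (........) (....)
          PIPE    -close
          pnga2area -vl logs/ldm-mcidas.log
          -a etc/SATANNOT
          -b etc/SATBAND
          data/gempak/images/sat/\3/\4/\2/\2_\6_\7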

o set up scouring of the /usr/local/ldm/data/gempak/images/sat subdirectories.
  This is done through the script /usr/local/ldm/util/prune_images.csh,
  which is run from cron (a sample crontab entry is shown after the list
  below).  prune_images.csh is designed to be edited at the local site to
  set the things needed for the scouring:

  PATH          <- has to be set so that 'prune_images.csh' can be found
                     for reentrant invocations
  KEEP          <- set to the number of files to keep during the scour
  areadir       <- set to the top level directory under which the
                     script will scour
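
  The crontab entry for 'ldm' that drives this looks something like the
  following (the run times here are illustrative, not necessarily what is
  on waldo):

  # scour the satellite image hierarchy four times an hour
  5,20,35,50 * * * * /usr/local/ldm/util/prune_images.csh >> /usr/local/ldm/logs/prune_images.log 2>&1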

  I set up 'prune_images.csh' to KEEP 12 of each kind of image being
  ingested in the Unidata-Wisconsin datastream of the IDD.  If you want
  to keep more, all you have to do is:

  <login as 'ldm'>
  cd util
  <edit 'prune_images.csh' and change KEEP=12 to the number you want to keep>

  The way things are set up right now, all of the Unidata-Wisconsin images
  are scoured equally (meaning that the same number of each will stay on
  disk).  This can be changed by running multiple instances of
  prune_images.csh (suitably renamed, or modified to accept command line
  input for the number to keep) from cron; a sketch of such a modified
  script follows.  The number you will want to keep is something that you
  will need to decide, and it will ultimately depend on how much disk
  space you want to dedicate to image use.
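
  Purely by way of illustration, a modified script could take the keep
  count and the directory to scour as arguments.  This is a sketch of the
  idea, not the actual prune_images.csh (the script name and paths are
  made up):

  #!/bin/csh -f
  # prune_keep.csh -- hypothetical variant of prune_images.csh that takes
  # the number of images to keep and the directory to scour as arguments;
  # give areadir as an absolute path
  if ($#argv != 2) then
    echo "usage: prune_keep.csh keep_count areadir"
    exit 1
  endif
  set KEEP = $argv[1]
  set areadir = $argv[2]

  # in each subdirectory, list plain files newest-first ('ls -p' marks
  # directories with a trailing /) and remove everything past $KEEP
  foreach dir (`find $areadir -type d -print`)
    cd $dir
    set doomed = (`ls -1tp | grep -v '/' | awk "NR > $KEEP"`)
    if ($#doomed > 0) rm -f $doomed
  end

  Run from cron, that would let you keep a different number of images for
  each image type, e.g. (the subdirectory names are illustrative):

  10 * * * * /usr/local/ldm/util/prune_keep.csh 24 /var/data/ldm/gempak/images/sat/GOES-8
  15 * * * * /usr/local/ldm/util/prune_keep.csh 12 /var/data/ldm/gempak/images/sat/GOES-10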

o I set up ingestion and decoding of the 6 km national base reflectivity
  composite from the FNEXRAD feed.  Those images are being saved in the
  /usr/local/ldm/gempak/nexrad/NEXRCOMP/6km directory.

  I also set up ADDE serving of these data in the NEXRCOMP dataset.  Try
  the following on waldo:

  DSINFO IMAGE NEXRCOMP
  IMGLIST NEXRCOMP/6KN0R-NAT.ALL
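
  For the record, serving a directory full of AREA files like this is
  done with a DSSERVE entry that uses the DIRFILE= keyword.  The entry
  for the 6 km composite looks something like this (the exact file
  naming pattern is illustrative):

  DSSERVE ADD NEXRCOMP/6KN0R-NAT AREA DIRFILE=/var/data/ldm/gempak/nexrad/NEXRCOMP/6km/* "6 km National Base Reflectivity Composite"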

  Other composite radar images are available in the FNEXRAD feed:

   1 km national base reflectivity
   2 km national 1-hour precipitation totals
   4 km national storm total precipitation
  10 km national radar coded message composites

  The 1 km national base reflectivity composites are BIG.  They decode
  into files that are each 14 MB in size.  The 2 km 1-hour and 4 km
  storm total precipitation products are considerably smaller: the
  1-hour composite is 1/4 of that (about 3.5 MB), and the storm total
  is 1/16 (under 1 MB).  Ingestion and decoding of these can be added
  if/when you like.

o I set up scouring of the 6 km national base reflectivity composites by
  running /usr/local/ldm/util/prune_nexrcomp.csh from cron.  I set up
  prune_nexrcomp.csh in the same way as prune_images.csh above.

o set up McIDAS ADDE to serve the images being decoded into the
  GEMPAK hierarchy instead of the ones being decoded according to the
  McIDAS routing table scheme.  I also left serving of the composite
  imagery the way it was.  The net effect of these changes is that
  you will see more of the UW images in the ADDE datasets and the same
  number of the composites that are being created locally.
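
  Under the hood, the difference is just in how the datasets are defined:
  the routing table scheme serves ranges of numbered AREA files, while
  the new entries point DSSERVE at the GEMPAK-style directories with
  DIRFILE=.  Illustratively (the dataset name and path here are examples,
  not necessarily the entries on waldo):

  DSSERVE ADD RTIMAGES/GE-IR AREA DIRFILE=/var/data/ldm/gempak/images/sat/GOES-8/4km/IR/IR_* "GOES-East IR"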

o I downloaded McIDAS-X v2002a into ~mcidas along with the v2002
  version of mcinstall and mcinet2002.sh; unpacked it into the
  ~mcidas/mcidas2002 directory hierarchy; and built the -X and -XCD
  executables for v2002.  I did not install the new code because you
  said that you did not want to jump to the new version just yet.
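
  In case you want to finish or redo this later, the sequence was roughly
  the standard one (details can vary slightly from release to release):

  <login as 'mcidas'>
  cd ~mcidas
  chmod +x mcinstall
  ./mcinstall           <- unpacks mcidasx2002.tar.Z into ~mcidas/mcidas2002
  cd mcidas2002/src
  make all              <- builds the -X and -XCD executables
  make install.all      <- the install step that has NOT been run yet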

o I looked at the setup in the 'student' account on cumulus, and it
  appears that people using it have most likely been using the McIDAS
  command line instead of either the MCGUI or Fkey menu.

  I also looked at the McIDAS setup under the user 'mcidas' on cumulus
  and made a couple of changes:

  I removed the pointing at ADDE datasets on waldo that do not exist
  (e.g., WNEXRAD, WNOWRAD, RTGRIDS).  I added pointing at waldo for
  NEXRCOMP, since the 6 km national composite images are now available
  there.  I also set up pointing at other datasets that are accessible
  from ADDE servers on the internet:

  <logged on to either/both cumulus or waldo as 'mcidas'>
  cd workdata
  dataloc.k ADD GINICOMP pscwx.plymouth.edu
  dataloc.k ADD GINIEAST papagayo.unl.edu
  dataloc.k ADD GINIWEST adde.ucar.edu
  dataloc.k ADD AMRC     uwamrc.ssec.wisc.edu
  dataloc.k ADD ME7      io.sca.uqam.ca
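
  You can double check the client routing at any time with:

  dataloc.k LIST

  which lists each dataset group and the server that requests for it
  will be sent to.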

  These datasets are useful/usable in current versions of the MCGUI, so
  they will be available as soon as you upgrade.

I think that is enough for one email :-).  What you need to do now is
decide how many images of each kind you want to keep on waldo.  After
that decision is made, you will need to either adjust the pruning
scripts being run from cron, or run more copies of prune_images.csh
(suitably modified) as I hinted at above.

For now, I left the images being decoded in two ways: the original way,
which uses the McIDAS routing table, and the new way, which files the
products into a data hierarchy.

Please let me know of any questions that come to mind or problems that
you may see.

Tom