
20010607: problem ingesting Unidata-Wisconsin images at UCR (cont.)



>From: Jimmy Mejia Fernández <address@hidden>
>Organization: University of Costa Rica
>Keywords: 200106050208.f5528Sp02926 LDM ingest decode

Jimmy,

>Well, I did the change and that worked for a couple of minutes, I
>mean it started downloading data but it stopped doing it.
>
>The configuration is the same, the output of notifyme is:
>
>$notifyme -vxl- -f MCIDAS -o 3600
>Jun 07 17:51:49 notifyme[5073]: Starting Up: localhost: 20010607165149.509 TS_ENDT {{MCIDAS,  ".*"}}
>        NOTIFYME(localhost) returns OK
>Jun 07 17:51:49 notifyme[5073]: NOTIFYME(localhost): OK
>
>I ran:
>$pqcat -vl - -f HRS -p "^[YZ].[QRUT].*/mETA" > /dev/null
>Jun 07 17:55:24 pqcat: Starting Up (5076)
>Jun 07 17:55:24 pqcat: Exiting
>Jun 07 17:55:24 pqcat: Number of products 0
>
>Also:
>$pqcat -vl - -f MCIDAS > /dev/null
>Jun 07 17:57:11 pqcat: Starting Up (5078)
>Jun 07 17:57:11 pqcat: Exiting
>Jun 07 17:57:11 pqcat: Number of products 0
>
>And that's all,

Some investigation from this end indicates that RPCs are timing out
during transfers to your machine.  This was determined by using pqsend
to send products directly to inti.efis.ucr.ac.cr (comments are from
Steve Chiswell):

motherlode.ucar.edu% pqsend -v -l - -h inti.efis.ucr.ac.cr -f MCIDAS -o 3600
Jun 07 18:57:10 inti.efis.ucr.ac.cr[15221]: Starting Up (15221)
Jun 07 18:57:11 inti.efis.ucr.ac.cr[15221]: sign_on(inti.efis.ucr.ac.cr): reclass: 20010607175711.139 TS_ENDT {{MCIDAS,  "^pnga2area Q[01]"}}
Jun 07 18:57:36 inti.efis.ucr.ac.cr[15221]: ship: RPC: Timed out:     7427 20010607180613.269  MCIDAS 000  pnga2area Q1 U3 205 GRAPHICS UNKBAND 5km 20010607 1759
Jun 07 18:57:53 inti.efis.ucr.ac.cr[15221]: sign_on(inti.efis.ucr.ac.cr): reclass: 20010607175753.269 TS_ENDT {{MCIDAS,  "^pnga2area Q[01]"}}
Jun 07 18:58:02 inti.efis.ucr.ac.cr[15221]:     7427 20010607180613.269  MCIDAS 000  pnga2area Q1 U3 205 GRAPHICS UNKBAND 5km 20010607 1759
Jun 07 18:59:59 inti.efis.ucr.ac.cr[15221]:   130648 20010607181108.128  MCIDAS 000  pnga2area Q1 U1 193 GOES-9_IMG UNKBAND 20km 20010607 1500
Jun 07 19:00:43 inti.efis.ucr.ac.cr[15221]: ship: RPC: Timed out:   122752 20010607181510.850  MCIDAS 000  pnga2area Q0 CA 1100 GOES-10_SND UNKBAND 14km 20010607 1700

It looks like the default 25 second RPC timeout is being reached.  The
same thing is probably happening with the rpc.ldmd processes feeding
your machine from motherlode, which would explain all of the
disconnects.
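
For rough intuition (the throughput figure below is an assumption for
illustration, not a measurement of your link), consider the 122752-byte
product that timed out above:

   122752 bytes * 8 bits/byte = 982016 bits
   982016 bits / 25 seconds  ~= 39000 bits/s

In other words, any effective throughput below roughly 39 kbit/s will
trip the default 25 second timeout on a product of that size, and some
images in the MCIDAS feed (e.g., the 130648 byte GOES-9 image above)
are larger still.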

It may be worth adding a "-t 180" to the rpc.ldmd invocation in your
ldmadmin script to see if it makes enough of a difference for you to
get some products.  If you would like to try this, do the following:

1) login as 'ldm'
2) cd bin
3) make a backup copy of ldmadmin:

   cp ldmadmin ldmadmin.bak

4) edit ldmadmin:

   add the line:

    $cmd_line .= " -t 180";

   to the sequence:

# build the command line

    $cmd_line = "rpc.ldmd";

    if ($verbose) {
        $cmd_line .= " -v";
    }

    $cmd_line .= " -t 180";
    $cmd_line .= " -q $pq_path $ldmd_conf > $pid_file";

    `$cmd_line`;

5) stop and restart your LDM

   ldmadmin stop
   <wait for all LDM processes to exit>
   ldmadmin start
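
Once the LDM is running again, a quick way to confirm that the new flag
took effect is to look at the running rpc.ldmd processes (the exact ps
options vary by operating system; this assumes a SysV-style ps):

   ps -ef | grep '[r]pc.ldmd'

The rpc.ldmd command line shown in the output should include "-t 180".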


This may help you, and it may not.  The underlying problem is that the
connection to your machine is not fast enough to reliably deliver
products as large as the MCIDAS feed images at the volume of data you
have requested.  There is one thing you can do to try to mitigate
this: cut down on the data requests you are making!

For instance, right now you are requesting ALL images in the MCIDAS
feed and all products in the HDS, IDS, and DDPLUS feeds.  The RPC
timeout limit being reached is telling us that you cannot do this and
expect to continue to get the data you are after.  Some things to
consider are:

1) You probably are not using all of the imagery in the MCIDAS feed.  In
   particular, are you using the images from GOES-West?  Do you use products
   like the Antarctic composite, the Educational floaters, or MDR radar?  The
   answers to these questions are probably no, but we are not sure.

2) Requesting all of the products in the HRS feed is probably a bad idea
   given your connection.  There are a LOT of grids in the HRS feed that
   do not cover Costa Rica or much of anything south of the US.  It would
   be best for you to cut down on your requests by eliminating all of the
   grids that you never use.  This would save a lot of bandwidth and,
   therefore, increase the likelihood that you will get the products that
   you really need.

Since I am not an expert on the grid products, I will defer
recommendations on how to cut down on those until later.  I can,
however, advise you on how to modify your MCIDAS image request to only
ask for the images that you really need.  Here is a list of product
header information for the various images in the MCIDAS feed:

The header of each product in the MCIDAS feed is of the form:

pnga2area Q. pc anum SAT BAND RES CCYYMMDD HHMM
          ^  ^    ^   ^    ^   ^     ^      ^__ time in [HHMM]
          |  |    |   |    |   |     |_________ date in [CCYYMMDD]
          |  |    |   |    |   |_______________ res info from SATBAND
          |  |    |   |    |___________________ band info from SATBAND
          |  |    |   |________________________ satellite name from SATANNOT
          |  |    |____________________________ default output AREA number
          |  |_________________________________ McIDAS ROUTE.SYS product code
          |____________________________________ broadcast time designator

The "normal" (non-CIMSS) GOES images (and composite images) are marked with
the "broadcast time designator" 'Q1'.  The CIMSS products are marked with
the "broadcast time designator" 'Q0'.

The list of 'pc' (product code) tags for the products and their meanings
are:

   pc      Description                       
 ---------+------------------------------------
   CA      CIMSS Cloud Top Pressure
   CB      CIMSS Precipitable Water
   CC      CIMSS Sea Surface Temperature
   CD      CIMSS Lifted Index
   CE      CIMSS CAPE
   CF      CIMSS Ozone
   UA      Educational floater I
   UB      GOES-West Western US Water Vapor
   UC      Educational floater II
   UI      GOES-East North America Infrared
   UR      Research floater
   UV      GOES-East North America Visible
   UW      GOES-East North America Water Vapor
   UX      Global Mollweide Infrared Composite
   UY      Global Mollweide Water Vapor Composite
   U1      Antarctic composite
   U3      Manually Digitized Radar
   U5      GOES-West Western US Infrared
   U9      GOES-West Western US Visible
   
It seems to me that the images that you will really want to receive are:

   UI      GOES-East North America Infrared
   UV      GOES-East North America Visible
   UW      GOES-East North America Water Vapor

and perhaps:

   UX      Global Mollweide Infrared Composite
   UY      Global Mollweide Water Vapor Composite

In order to request just these products, you can set your ldmd.conf request
line for MCIDAS to look like:

request MCIDAS  "^pnga2area Q1 U[IVWXY]"      motherlode.ucar.edu

This will eliminate 14 of the 19 products listed above (16 if you
decide not to continue receiving the Mollweide composites), a number
of which are quite large, from the stream heading your way.  Such a
reduction will help deliver products more reliably to your machine.
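
Before restarting with the new request line, you can preview what the
pattern will match on the upstream host with notifyme (this assumes
motherlode will answer notifyme requests from your machine, which it
should, since you already feed from it):

notifyme -vl - -h motherlode.ucar.edu -f MCIDAS -o 3600 -p "^pnga2area Q1 U[IVWXY]"

You should see one NOTIFYME line for each matching image received
there in the past hour.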

The following comments that I have clipped from the pqact.conf file on
motherlode may help you decide which grids you can eliminate from your
ldmd.conf request:

# NOAAport ETA grids
# Grid #211 80km CONUS:    ^[YZ].Q.*/mETA
# Grid #212 40km CONUS:    ^[YZ].R.*/mETA
# Grid #215 20km CONUS:    ^[YZ].U.*/mETA
# Grid #214 47.5km Alaska: ^[YZ].T.*/mETA

# RUC/MAPS model output
# Grid #211 CONUS   80km: ^[YZ].Q.*/mRUC
# Currently, only grid #211
#
# NGM model output
# Grid #211 CONUS   80km: ^[YZ].Q.*/mNGM
# Grid #207 Alaska  95km: ^[YZ].N.*/mNGM
# Grid #202 CONUS  190km: ^[YZ].I.*/mNGM
# Grid #213 CONUS 47.5km: ^[YZ].H.*/mNGM

# NOAAport MRF grids
# Grid #201 N. Hemisphere 381km: ^Y.A... KWBH
# Grid #202 CONUS         190km: ^Y.I... KWBH
# Grid #203 Alaska        190km: ^Y.J... KWBH
# Grid #204 Hawaii        160km: ^Y.K... KWBH
# Grid #205 Puerto Rico   190km: ^Y.L... KWBH

# AVN model output
# Grid #201 N. Hemisphere 381km: ^Y.A... KWBC.*(/mAVN|/mSSIAVN)
# Grid #202 CONUS         190km: ^Y.I... KWBC.*(/mAVN|/mSSIAVN)
# Grid #203 Alaska        190km: ^Y.J... KWBC.*(/mAVN|/mSSIAVN)
# Grid #211 CONUS          80km: ^Y.Q... KWBC.*(/mAVN|/mSSIAVN)
# Grid #213 CONUS        47.5km: ^Y.H... KWBC.*(/mAVN|/mSSIAVN)

We feel certain that you most likely do not use the ETA or RUC/MAPS
grids.  You probably do use some of the MRF and AVN grids, but you
probably don't use them all (the CONUS and Alaska grids, for example).
Eliminating the unneeded GRIB products from the HRS stream request
will greatly free up bandwidth and so help you receive the other data
more reliably.
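
For illustration only (we can help refine the grid list later), if you
decided that the N. Hemisphere MRF and AVN grids were the only HRS
grids you needed, the patterns in the comments above could be combined
into a single trimmed request line:

request HDS     "^Y.A... (KWBH|KWBC.*(/mAVN|/mSSIAVN))"        motherlode.ucar.edu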

One last comment.  The products in the IDS and DDPLUS feeds are small
enough that you can safely forget about having to tailor requests for
them.

>the ldmd.conf is this one:
>
>########################################################
>exec    "pqbinstats"
>exec    "pqact"
>#exec   "pqsurf"
>request MCIDAS  "^pnga2area Q[01]"      motherlode.ucar.edu
>request HDS|IDS|DDPLUS       ".*"    motherlode.ucar.edu
>allow   ANY     ^((localhost|loopback)|(127\.0\.0\.1\.?$)|([a-z].*\.unidata\.ucar\.edu\.?$|([a-z].*\.efis\.ucr\.ac\.cr\.?$)))
>
>allow   ANY     motherlode.ucar.edu
>
>accept ANY ".*" ^((localhost|loopback)|(127\.0\.0\.1\.?$))
>accept ANY ".*" ^((inti.efis.ucr.ac.cr|inti)|(163\.178\.110\.213\.?$))
>
>#########################################################

Modifying this to cut down on your requests is probably the best thing to
do.
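
Putting the pieces together, the request lines in your ldmd.conf might
end up looking something like the following (the HDS pattern is just
the illustration from above; adjust it to the grids you actually use):

request MCIDAS      "^pnga2area Q1 U[IVWXY]"                    motherlode.ucar.edu
request HDS         "^Y.A... (KWBH|KWBC.*(/mAVN|/mSSIAVN))"     motherlode.ucar.edu
request IDS|DDPLUS  ".*"                                        motherlode.ucar.edu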

>The pqact.conf looks like this:
>
>########################################################
>
># ********** Radiosonde data takes up a lot of space
>#        **********!!!!!!!
>
>#WMO   ^([^H][A-Z])([A-Z][A-Z])([0-9][0-9]) (....) ([0-3][0-9])([0-2][0-9])
>#       FILE    data/\2/\4/\6/\1\3.wmo
>
>HRS     ^[YZ]..... KWB. ([0-3][0-9])([0-2][0-9]).*(/mAVN|/mSSIAVN)
>        FILE    data/GRIB/(\1:yy)(\2:mm)\1\2.avn
>MCIDAS  ^pnga2area Q. (..) (.*) (.*) (.*) (.*) (........) (....)
>        PIPE    -close
>        pnga2area -d /home/ldm/data/mcidas/area -r \1,\2
>
>MCIDAS  ^pnga2area Q. CA .... (.*) (.*) (.*) (........) (....)
>        PIPE    -close
>        pnga2area /home/ldm/data/gempak/nport/SOUNDER/\3/CTP/CTP_\4_\5
>
>MCIDAS  ^pnga2area Q. CB .... (.*) (.*) (.*) (........) (....)
>        PIPE    -close
>        pnga2area /home/ldm/data/gempak/nport/SOUNDER/\3/PW/PW_\4_\5
>########################################################
>
>Probably the problem is with this file, but that is strange because it was
>working before.

The problem is most likely the volume of data trying to be sent to your
machine, not your machine's ability to decode the data once it is there.
The pqact.conf entries only have to do with processing products after
they arrive, so we can ignore that file for the moment.

>Thanks, Anne, very much for your great help.

Anne will be back on Wednesday.  I think that the first thing to try is
cutting down on the MCIDAS image request in ldmd.conf.  After doing
that, let's see how your ingestion looks.  If there is still too much
being sent given the connection speed, we will need to cut down the HRS
request as much as possible.

Please let us know if you have questions about my comments above.

Tom Yoksas