
20050510: CONDUIT feed / GFS 00Z f000



Daryl,

A couple of quick comments:

Your 500MB queue is small, and you are receiving up to 1.2GB per hour:
http://my.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume1?pircsl4.agron.iastate.edu

The trade-off is that a smaller queue fits into available memory better,
but relaying to downstream hosts and pqact processing can benefit from
more space. If your only relays are to internal machines, then you can
probably get by with 500MB, provided you split your pqact.conf
processing into multiple processes.
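
As a quick sanity check that the whole queue can stay resident in
memory, you can compare the size of the queue file against physical
memory (the ~/data/ldm.pq path is just the usual default; adjust for
your install):

  ls -lh ~/data/ldm.pq    # size of the product queue file
  free -m                 # available physical memory on Linux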

In the GEMPAK release, the $NAWIPS/ldm/etc/gen_pqact.csh script asks
whether you want to generate a single pqact.conf file or multiple ones.
Either way, it provides example lines showing how to use multiple
pqact.conf files, for example:

% $NAWIPS/ldm/etc/gen_pqact.csh

Generating pqact.gempak file using:
   GEMTBL = /home/gempak/GEMPAK5.8.2a/gempak/tables
   GEMPAK = /home/gempak/GEMPAK5.8.2a/gempak
Do you want to combine entries to a single pqact.gempak? [y/n] n
 
################################################################
Place the generated PQACT files into ~ldm/etc.
Use the following entries for the LDM ldmd.conf file: 
 
exec    "pqact -f ANY-NNEXRAD-CRAFT-NIMAGE etc/pqact.gempak_decoders"
exec    "pqact -f WMO etc/pqact.gempak_nwx"
exec    "pqact -f MCIDAS|NIMAGE etc/pqact.gempak_images"
exec    "pqact -f NNEXRAD|WSI|FNEXRAD etc/pqact.gempak_nexrad"
exec    "pqact -f CRAFT -p BZIP2/K[A-D] etc/pqact.gempak_craft"
exec    "pqact -f CRAFT -p BZIP2/K[E-K] etc/pqact.gempak_craft"
exec    "pqact -f CRAFT -p BZIP2/K[L-R] etc/pqact.gempak_craft"
exec    "pqact -f CRAFT -p BZIP2/K[S-Z] etc/pqact.gempak_craft"
----------------------------------------------------------------------------------

The above is particularly useful when FILE'ing every Level II and Level III
radar product. The four pqact processes for CRAFT were set up to balance
the 32 open streams among the 124 radars, which may not be necessary....but
it is an example of how to use the same pqact.conf file with different
subsets of the data stream pattern. The number of Level III products is
actually quite large, and it becomes a real load when people try to
uncompress the zlib products in real time (which I wouldn't recommend).
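
As a concrete sketch of that kind of entry (this is not from your
configuration; the product-ID layout and output path are assumptions to
adapt locally, and the fields must be separated by actual tab characters
in a real pqact.conf), a pqact.conf action that FILEs Level II data by
station might look like:

# file each CRAFT (Level II) chunk by station ID and date/time;
# successive chunks matching the same ID are appended to the same file
CRAFT   BZIP2/(K[A-Z][A-Z][A-Z])/([0-9]{8})([0-9]{4})
        FILE    data/craft/\1/\1_\2_\3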

The newer LDM releases allow more open streams from pqact (1024 on
Linux, where the limit had been 32), but pqact can still fall behind if
it is heavily loaded, e.g. with slower decoders. Issuing a "kill -USR2"
twice to the pqact process will put the program in debug mode and output
"Delay:" messages that show how long it took to process a product once
it arrived in your local queue. Generally this should stay under a few
seconds. Hint: don't leave pqact in debug mode too long, since that will
slow things down too; issue another "kill -USR2" to go back to silent.
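
For reference, the toggling looks something like this from the LDM
account (finding the PID with pgrep and the ~/logs/ldmd.log location are
assumptions for a typical setup; if you run several pqact processes,
signal the one you want to watch):

  pid=`pgrep -x pqact | head -1`    # pick one pqact process
  kill -USR2 $pid                   # once: verbose logging
  kill -USR2 $pid                   # twice: debug, "Delay:" lines appear
  grep "Delay:" ~/logs/ldmd.log | tail
  kill -USR2 $pid                   # again: back to quiet logging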

If the Delay ever reaches the age of the oldest product in your queue
(shown by "pqmon"), then data will be overwritten before pqact acts on
it.
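
pqmon reports the oldest product's age in its "age" column (in seconds),
so a quick check is to compare that number against the largest "Delay:"
values you see from pqact (the queue path below is an assumption;
without -q, pqmon uses your configured default):

  pqmon -q ~/data/ldm.pq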


Steve Chiswell
Unidata User Support


>From: Daryl Herzmann <address@hidden>
>Organization: UCAR/Unidata
>Keywords: 200505101824.j4AIO8P3026338

>Hi Steve,
>
>Wow, what a reply!  Please see my comments below.
>
>On Tue, 10 May 2005, Unidata Support wrote:
>
>> Your laptop address was either strange or not on the list, so I think 
>> that is why it bounced.
>
>Hmmm, wonder if my IP is not resolving?  I am using IMAP locally.
>
>> Anyhow, you can see what we received here for the 00Z GFS:
>
>Good, the problem is just at my end!
>
>> The above shows that Pircsl4 is only getting about 18GB per day, and very 
>> consistent in that amount, so likely not any temporary network trouble 
>> etc., and nothing new in the pattern, so it seems very well behaved. I 
>> see your request pattern is: (RUC2/#252 |MT.(avn|gfs).*DF.gr1) That 
>> would account for the volume being less than what we have to send you!
>
>Good.
>
>> That means, the data is getting to you in good time. As a result, if you 
>> are not seeing all the data on your disk, then likely your pqact 
>> process is falling behind so that the data is getting scoured out of 
>> your queue before you get it processed. If necessary, I can send you 
>> some code for your pq.c routine that will flag when pqact falls behind.
>
>That must be it.  I was running 6.2.1 and just upgraded to 6.3.0 to make 
>sure I am current.  My queue is 500 MB, so that should be okay.
>
>> So...things to look for: Are you running multiple pqact processes, or a 
>> single?
>
>single.
>
>> Splitting up the pqacts shares the load so they can process faster.
>
>Wow, I have never heard of doing this.  That is an interesting idea!
>
>> Are you decoding the data in that pqact.conf file too, or just 
>> FILE'ing as gribs as shown below?
>
>Just FILE
>
>> Since it is the 00Z f000 time, it might be that your system is 
>> particularly busy around this time, so IO is the bottleneck (check
>
>This was one of the machines I was complaining about on the ldm list. 
>RHEL3 sucks.  I have upgraded to the U5 beta kernel and will see if that 
>helps.  Going to RHEL4 will happen if this doesn't help.
>
>> Let me know if you have any questions.
>
>Thanks for the help.  Hope this email doesn't bounce too...
>
>daryl
>
>-- 
>/**
>  * Daryl Herzmann (address@hidden)
>  * Program Assistant -- Iowa Environmental Mesonet
>  * http://mesonet.agron.iastate.edu
>  */
>
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web.  If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.