
Re: Status



On Tue, 25 Jan 2000, Ekaterina Radeva wrote:

> Hi Robb,
> 
> Now that the AMS conference is history we can proceed testing the CMC feed
> for Unidata users. BTW, a few of our colleagues went to the conference,
> and two of them stopped by at the Unidata session. Based on your web site,
> we had provided them with an overview of the Unidata project and the
> supporting software. They seemed quite impressed with this whole community
> effort.

Ekaterina,

Thanks for the kudos,

> 
> To do the testing, I have put in place two cron jobs: one, running on our
> production machine, compounds the grib files to be sent out, and transfers
> them to the LDM host machine; the other, running on the LDM host machine,
> inserts the files/products into the ldm queue. The forecasts of our
> regional production run, based on 00 UTC data will be inserted in the
> queue at around 4 UTC each day, whereas those based on 12 UTC data will be
> inserted at around 16 UTC. This represents a delay of ~ 1 hour after the
> production time, so is as close to real-time dissemination as it gets. 
> 
That sounds good.  I added a request to the GEM server and restarted the
LDM on shemp, so any data you insert into the queue should be sent to
shemp now, i.e.,

request EXP "CMC_GEM_reg_.*" ftp.cmc.ec.gc.ca

You should see a request entry in your ldmd.log files from shemp.

> The cron job that does the insertion also uses pqexpire to purge products
> that are more than 5 days old, because that is the queue's capacity. I
> could include the pqexpire in the ldmd.conf file, but I would have to
> ask our support people to restart the server, which can be problematic.
> 

A good rule of thumb is to only use about 90% of the LDM queue capacity
because of the overhead of inserting products.
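To make the rule concrete, here is a minimal sketch (the queue size below is a made-up number, not an actual CMC setting) of budgeting only 90% of the queue for product data:

```shell
#!/bin/sh
# Sketch of the 90% rule of thumb: reserve ~10% of the LDM queue for
# insertion overhead and budget the rest for product data.
# QUEUE_BYTES is a hypothetical queue size, not an actual setting.
QUEUE_BYTES=500000000
BUDGET=`expr $QUEUE_BYTES / 10 \* 9`
echo "queue: $QUEUE_BYTES bytes, product budget: $BUDGET bytes"
```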

> As discussed, there will be 366 products per run (732/day), whose names
> will be used as product identifiers. The names are in the form: 
> CMC_GEM_reg_$var_L$level_H$hour_P$phour.grib, where 
> 
> $var is the field constituting the file (tt=temperature; es=dew-point
>  depression; gz=geopotential height; p0=sfc pressure; pn=sea-level
> pressure; uv=u- and v- wind components; ww=vertical velocity;
> pr=precipitation accumulation; rt=precipitation rate; nt=cloud cover)
> 
> $level is 1000, 850, 700, 500, 400, 250, 150, 100 hPa + sfc
> 
> $hour is 00/12 if this is a run based on 00/12 UTC data
> 
> and $phour is the forecast hour, in this case 0 through 48 at 6-hour
> intervals.
> 
> This product identifier gives information about the data source (CMC), the
> producing model (GEM regional configuration), the data themselves
> (variable, vertical level), the initial hour, the forecast hour, and the
> data type (GRIB). 
> 
> What is missing is information about the grid, and the model time
> (year/month/day). It seems to me that the former does not need to be
> included in the product id. 

The LDM needs at least the day of the month in the product ID to derive
the year and month, so as it stands it will be hard for the end user to
produce a file named with the year/month/day.  I think it would be good
to include the year/month/day in the product header somehow.
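One way to do that (a sketch only; the field values and the exact placement of the date are assumptions, not your current layout) would be to have the insertion script build the ID around the model date:

```shell
#!/bin/sh
# Sketch: embed the model date (YYYYMMDD) in the product ID so the end
# user can recover year/month/day.  All values below are hypothetical.
MODELDATE=20000125          # in production this could come from `date -u +%Y%m%d`
VAR=tt; LEVEL=500; HOUR=00; PHOUR=24
ID="CMC_GEM_reg_${VAR}_L${LEVEL}_${MODELDATE}${HOUR}_P${PHOUR}.grib"
echo "$ID"
```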

> The model time is important, however. I don't
> want to include it in the file names, because the files will keep
> accumulating, whereas now they are overwritten after 24 hours.

I understand your point, but included with the LDM package is a scour
script whose purpose is to periodically purge files.  It also has a
configuration file, scour.conf, to set the purge times.  The scour script
is usually run out of cron every 3 hours, e.g.,

# Purge LDM data
#
0 1,4,7,10,13,16,19,22 * * * bin/ldmadmin scour >/dev/null 2>&1
#
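For reference, a scour.conf entry is just a directory, a retention period in days, and an optional filename pattern; a hypothetical entry for the CMC files might look like (the directory path here is a made-up example, not a real installation path):

```
# scour.conf:  directory        days-old   [optional filename pattern]
/data/ldm/cmc                   5          *.grib
```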

> I thought
> that the users could use the ingestion time to get the year/month/day of
> the model time (the hour is included in the product id), provided, of
> course, that the ingestion by the end users is done on the same day as the
> production/insertion in the queue.

This actually gives the system time, not the time that the product
represents.

> 
> Please, let me know whether the product ids are explicit enough. The
> insertion cycle is supposed to start this evening so  we can
> test the data capture/decoding at your end whenever you can make the time. 


I think the product IDs are good; they just need the addition of the
year/month/day for the product time.
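On the receiving side, a pqact.conf pattern can then file each product under its own name; a hypothetical entry (the output path is an assumption, and the pattern simply reuses the whole ID via the \1 capture group) might look like:

```
# pqact.conf: file each CMC GRIB product under its product ID
EXP	^(CMC_GEM_reg_.*)\.grib
	FILE	data/cmc/\1.grib
```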

> 
Let me know what you think,

Robb...

> With best regards,
> Ekaterina
> ______________________________________________________
> 
> Division de l'implementation et services operationnels
>    Centre meteorologique canadien
> Implementation and Operational Services Division
>    Canadian Meteorological Centre
> 
> e-mail: address@hidden
> tel: (514) 421-4646
> ______________________________________________________
> 
> On Thu, 6 Jan 2000, Robb Kambic wrote:
> 
> > On Wed, 5 Jan 2000, Ekaterina Radeva wrote:
> > 
> > > Robb,
> > > 
> > > Happy New Year and all the best in the new millennium!
> > 
> >  Ekaterina,
> > 
> > Thanks for the greeting.  I wish you the best in the millennium also.
> > 
> > The size and product IDs look good, so it looks like we need to set up a
> > test time, etc.  The rest of this week and next week are all tied up with
> > preparation for and attending the AMS conference in LA, so we can start
> > the process in two weeks.  I'll e-mail you when I get back or, if I
> > somehow forget, remind me and we can get started.
> > 
> > Robb...
> > 
> > > 
> > > I think we are ready to test our ldm configuration.
> > > 
> > > By means of the pqinsert utility, I put a subset of the products we would
> > > like to distribute via the Unidata IDD system in our local queue.
> > > The ldm host machine here is ftp.cmc.ec.gc.ca and the queue path is
> > > /data/dns2/ldm/ldm.pq (different from the default path). I included the
> > > ALLOW statement for shemp.unidata.ucar.edu in etc/ldmd.conf. 
> > > 
> > > I put in the queue 80 products (GRIB files) with names of the type:
> > > CMC_$var_$level_2000010500_$phour, where 
> > > 
> > > $var is the field constituting the file (tt=temperature; es=dew-point
> > > depression; gz=geopotential height; p0=sfc pressure; pn=sea-level
> > > pressure; uv=u- and v- wind components; ww=vertical velocity;
> > > pr=precipitation accumulation; rt=precipitation rate; nt=cloud cover)
> > > 
> > > $level is 1000, 850, 700, 500, 400, 250, 150, 100 hPa + sfc
> > > 
> > > 2000010500 is the initial time of our regional model forecast,
> > > 
> > > and $phour is the forecast hour, in this case 0 or 6. When the
> > > transmission starts for real, we plan to send forecasts up to 48 hours,
> > > at 6-hour intervals.
> > > 
> > > The size of the products/files is 19K, except for the wind where both
> > > components are included in the same file and the size doubles.
> > > 
> > > Once you receive the GRIB files, I would like you to try decoding them
> > > with the Unidata GRIB decoder. I guess that this is the decoder of choice
> > > for the Unidata university users, so we have to make sure that it is
> > > compatible with our GRIB. I could test the decoder here, but it would
> > > produce netCDF files, which we are not familiar with. 
> > > 
> > > This is all for now. Let me know how it goes. I'll keep my fingers
> > > crossed.
> > > 
> > > With best regards,
> > > Ekaterina
> > 
> 

===============================================================================
Robb Kambic                                Unidata Program Center
Software Engineer III                      Univ. Corp for Atmospheric Research
address@hidden             WWW: http://www.unidata.ucar.edu/
===============================================================================