
[LDM #CZE-486853]: GFS grid question



Hi,

re:
> Did you see the screen shots I sent with my previous e-mail, I believe
> those answer some of your questions.

I glanced at the graphs.

re:
> There are NGRID messages that contain the string GFS so my notifyme
> command is not helping illustrate the problem.
> notifyme -v -f ANY -h localhost -l onnoaaport -p "gfs"
> 
> I should update the command to:
> notifyme -v -f ANY -h localhost -l onnoaaport -p "gfs|GFS"
> to match the requests in ldmd.conf - correct?

The 'notifyme' invocation should be:

notifyme -v -f ANY -h localhost -l onnoaaport -p "(gfs|GFS)"

This pattern is the same as the one I recommended for your
consolidated REQUEST line; the parentheses make the alternation explicit.
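The effect of that extended regular expression can be checked outside of the LDM. Here is a quick sketch in Python (the product IDs below are made up for illustration; they are not actual products from your feed):

```python
import re

# Same pattern as in the notifyme/REQUEST line: match "gfs" in either case
pattern = re.compile(r"(gfs|GFS)")

product_ids = [
    "data2/TIGGE/gfs.t00z.pgrb2f06",   # hypothetical CONDUIT-style ID
    "GRID.GFS_211_20191001_0000",      # hypothetical NGRID-style ID
    "nam.t00z.awphys06.grb2",          # should NOT match
]

matches = [bool(pattern.search(pid)) for pid in product_ids]
print(matches)  # [True, True, False]
```

Anything containing either "gfs" or "GFS" anywhere in the product ID will match, which is why a single consolidated REQUEST can cover both feedtypes' naming conventions.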

re:
> I do know that for the 0Z and 6Z model run, using your single regular
> expression, I missed 102 of 845 grids. The missing grids are for fhours
> that are a multiple of 6 out to fhour 240

From NGRID or from CONDUIT?  I think that I have mentioned that NCEP has
been having problems delivering CONDUIT data with low latencies lately.
Also, CONDUIT users have complained about not getting full sets of GFS
data going back to late spring or early summer.  NCEP has been notified
of the problem(s) on many occasions, and they have tried various things
to mitigate them, but, so far, nothing has worked completely.
Sometimes all products are sent, and sometimes some are missing.  This
problem really showed up when the GFS model was upgraded in the late
spring.  The tangible effect of the upgrade was more and larger products
so that the feed volume, which is dominated by GFS data, jumped by up
to 20 GB/hr at peak.

re:
> Prior to the 12Z run, I reverted back to using two requests. The 12Z GFS
> model run is now in and I have the full complement of 845 grids.

Since the single REQUEST represented the union of the two you had,
the difference must lie in there being two feed REQUESTs instead of one.
If so, this raises the question of whether the network connection to
the machine can handle all of the data in a single pipeline, or whether
something else, such as the performance of the machine itself, is the
limiting factor.
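If the single connection turns out to be the bottleneck, one common LDM approach is to split a feed REQUEST into disjoint patterns so that products arrive over parallel connections. A hypothetical ldmd.conf sketch (the upstream hostname is a placeholder, and the split-by-trailing-digit convention is only one way to partition the products):

```
# Split one CONDUIT request into two disjoint halves by the final
# digit of the product ID, so each half uses its own connection
REQUEST CONDUIT "(gfs|GFS).*[02468]$" upstream.example.edu
REQUEST CONDUIT "(gfs|GFS).*[13579]$" upstream.example.edu
```

The key requirement is that the patterns be disjoint and that their union cover everything the original single REQUEST matched; otherwise products will be duplicated or dropped.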

re:

> Jerry mentioned there might be a problem if the requested queue size is
> more than the memory on the machine. The VM has only 24 GB of memory,
> should I reduce queue size from 32 to maybe 20?

Ah Ha!  Yes, using a queue that is larger than physical memory will
cause the machine to thrash!  We've run tests of this sort over an
extended period of time, and we felt like we "got away with it" only
because the machine the test was run on was merely relaying data;
it was not processing any of it.

re:
> should I reduce queue size from 32 to maybe 20?

It is hard to say how large you can make your queue since we don't
know about all of the things the machine is trying to do.  I would
suggest, however, that, if you are running XCD on this machine, the
queue should be much smaller than 20 GB.  I would try 16 GB and then keep
a close eye on the machine's performance to make sure that processing
is being done correctly. Yes, I realize that 16 GB is less than the
average amount of data received per hour, but one must balance the
usefulness of the LDM being able to detect and reject duplicate products
with the need to be able to execute processing efficiently.
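For reference, resizing the queue is done through the LDM registry and requires recreating the queue file. A hedged sketch of the usual sequence (command names are from recent LDM releases; double-check against the version you are running):

```
# Stop the LDM, set the new queue size in the registry,
# then delete and recreate the queue before restarting
ldmadmin stop
regutil -s 16G /queue/size
ldmadmin delqueue
ldmadmin mkqueue
ldmadmin start
```

Deleting the queue discards any products it holds, so this is best done at a quiet point between model runs.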

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: CZE-486853
Department: Support LDM
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.