
[CONDUIT #HGF-659268]: setting up new CONDUIT feed



Hi Carissa,

I'm finally getting back to you on your inquiry; sorry for the delay:

re:
> NCEP is starting the work on getting the new LDM boxes setup for the new
> CONDUIT feeds. I am going through the tables and want to clean out any
> old entries if possible. Can you peek at the lists below from the ldmd.conf
> and let me know if you recognize anything outdated?
> 
> # CONDUIT hosts
> allow   ANY     ^conduit\.unidata\.ucar\.edu    .*      ^data2/TIGGE_resend
> allow   ANY     ^conduit1\.unidata\.ucar\.edu   .*      ^data2/TIGGE_resend
> allow   ANY     ^conduit2\.unidata\.ucar\.edu   .*      ^data2/TIGGE_resend
> allow   ANY     ^conduit3\.unidata\.ucar\.edu   .*      ^data2/TIGGE_resend
> allow   ANY     ^atm\.geo\.nsf\.gov     .*      ^data2/TIGGE_resend
> allow   ANY     ^atm\.cise-nsf\.gov     .*      ^data2/TIGGE_resend
> allow   ANY     ^f5\.aos\.wisc\.edu     .*      ^data2/TIGGE_resend
> allow   ANY     ^idd\.aos\.wisc\.edu    .*      ^data2/TIGGE_resend
> allow   ANY     ^flood\.atmos\.uiuc\.edu        .*      ^data2/TIGGE_resend
> allow   ANY     ^(idd-ingest|iddrs3)\.meteo\.psu\.edu   .*
> # TIGGE hosts
> #   NCAR
> ALLOW   ANY     ^dataportal\.ucar\.edu  .*
> ALLOW   ANY     ^datagrid\.ucar\.edu    .*
> #   ECMWF
> ALLOW   ANY     ^193\.61\.196\.74       .*
> #TEST from ldmex
> ALLOW   ANY     ^140\.90\.193\.99       .*
> #TEST from node6
> ALLOW   ANY     ^192\.168\.0\.106       .*

Question:

- are there other ALLOWs in the ~ldm/etc/ldmd.conf file(s), or is
  this the complete list?

As far as entries in your list that are archaic, the following entries can
be removed as the machines or subnets no longer exist:

allow   ANY     ^atm\.geo\.nsf\.gov     .*      ^data2/TIGGE_resend
allow   ANY     ^f5\.aos\.wisc\.edu     .*      ^data2/TIGGE_resend

The other ALLOW entries are valid, but the entry for Penn State should
have the 'not' clause in its ALLOW:

allow   ANY     ^(idd-ingest|iddrs3)\.meteo\.psu\.edu   .*   ^data2/TIGGE_resend

Unless there are other ALLOWs that are not shown in the list above,
the full list (which includes some small but useful changes, like
anchoring the host patterns) would be:

# CONDUIT hosts
ALLOW   ANY     ^conduit(|[0-9])\.unidata\.ucar\.edu\.?$        .*      ^data2/TIGGE_resend
ALLOW   ANY     ^atm\.cise-nsf\.gov\.?$ .*      ^data2/TIGGE_resend
ALLOW   ANY     ^idd\.aos\.wisc\.edu\.?$        .*      ^data2/TIGGE_resend
ALLOW   ANY     ^flood\.atmos\.uiuc\.edu\.?$    .*      ^data2/TIGGE_resend
ALLOW   ANY     ^(idd-ingest|iddrs3)\.meteo\.psu\.edu\.?$       .*      ^data2/TIGGE_resend

# TIGGE hosts
# NCAR
ALLOW   ANY     ^dataportal\.ucar\.edu\.?$      .*
ALLOW   ANY     ^datagrid\.ucar\.edu\.?$        .*

# ECMWF
ALLOW   ANY     ^193\.61\.196\.74\.?$   .*

# TEST from ldmex
ALLOW   ANY     ^140\.90\.193\.99\.?$   .*

# TEST from node6
ALLOW   ANY     ^192\.168\.0\.106\.?$   .*
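
As a quick sanity check, the anchored host patterns above can be exercised
outside of the LDM with `grep -E`. The hostnames below are just examples;
note that `[0-9]?` is the standard ERE equivalent of the `(|[0-9])`
alternation used in the ldmd.conf entry:

```shell
# Sketch: test the tightened CONDUIT host pattern against sample hostnames.
# '[0-9]?' is equivalent to the '(|[0-9])' form in the ALLOW entry above.
pattern='^conduit[0-9]?\.unidata\.ucar\.edu\.?$'

check() {
    if echo "$1" | grep -Eq "$pattern"; then
        echo "$1: match"
    else
        echo "$1: no match"
    fi
}

check conduit.unidata.ucar.edu          # match
check conduit3.unidata.ucar.edu.        # match (trailing dot tolerated)
check conduit12.unidata.ucar.edu        # no match (single digit only)
check evil-conduit.unidata.ucar.edu     # no match (anchored at the start)
```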

re:
> Great. Next question. Currently the LDM queue is set at 6G for
> operations. We have known that we need to up that threshold and have the
> ability on these new systems to do that. But, since I am not planning on
> taking LDM training until this fall :) could you provide some insight on
> how to best optimize what that number should be.

As far as the size of the queue is concerned, this is a great question that
has no concrete answer, only guidelines.  The best thing to do is to
make the queue as large as possible while still leaving enough physical
RAM free that the OS does not need to swap to disk.  A good rule of thumb
is to size the queue to hold at least 1 hour of data plus 20%.  Even
better, assuming that there is enough RAM, is to create a queue that
holds 2 hours of data plus some buffer.  Recent high water marks for
CONDUIT volume have been on the order of 5.3 GB/hr, but this seems to be
lower than it has been in the past (I recall over 6 GB/hr).  I think we
need to plan for the volume to double in the not-too-distant future, so
a volume of 10-12 GB/hr would not be unreasonable.  Using the 2-hour
target for queue residency time would result in a queue that is between
20-24 GB in size ** assuming ** that there is enough memory left over
to run all other processes without needing to swap to disk.
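
The guidelines above can be sketched as a quick shell calculation; the
12 GB/hr figure is the planning assumption from above, not a measured value:

```shell
# Back-of-the-envelope LDM queue sizing under the assumptions above.
volume=12                            # projected CONDUIT volume, GB/hr
min_queue=$(( volume * 120 / 100 ))  # rule of thumb: 1 hour of data + 20%
better_queue=$(( volume * 2 ))       # preferred: 2 hours of data

echo "minimum queue size:   ${min_queue} GB"    # 14 GB
echo "preferred queue size: ${better_queue} GB" # 24 GB
```

If memory serves, on LDM-6 the new size is applied by setting the registry
value (e.g., 'regutil -s 24G /queue/size') and then recreating the queue
with 'ldmadmin delqueue' followed by 'ldmadmin mkqueue' -- but double-check
the LDM documentation for the version you install.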

Question:

- how much RAM will the new machines have?

  If there is 30 GB of RAM, I would make the queue 20 GB.

We can go into other issues regarding LDM queue tuning when you 
attend the training workshop in the fall.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: HGF-659268
Department: Support CONDUIT
Priority: Normal
Status: Closed