
Re: ldm/idd sparse data



Bill,

On Wed, 27 Dec 2000 address@hidden wrote:

> I'll start with apologies for the bother and for putting this off.  Sorry, BUT
> our data feed continues to be incredibly crappy.  For example:
>
> We have not received sufficient sounding data to make a 12Z map since
> November.  Typically, during daylight hours, including weekends it seems, we
> get perhaps the 12Z, 16Z and 18Z surface observations.  We can go a week at
> a time without some of the longer-term gridded products (72, 120, 144 hours).
> I have delayed contacting you because I thought, perhaps, that end-of-semester
> utilization on campus here might have saturated our internet pipe...Our
> utilization can be seen at
> http://networking.smsu.edu/mrtg/html/Moscow.2.0.html...for a quick-and-dirty
> view of the day's surface observations, the McIDAS meteorogram at
> http://cirrus.smsu.edu/home/mcidas/data/mc/METG.gif provides a view.
>
> From my end, traceroutes look good and our utilization looks fine.  I noted
> the discussion a month ago about latencies...although no hard-and-fast
> numbers were mentioned in that discussion about what would be a "good" or
> "bad" response time from an ldmping, ours don't look awful:
> cirrus:/home/ldm> ldmping navierldm.meteo.psu.edu
> Dec 27 19:37:20      State    Elapsed Port   Remote_Host           rpc_stat
> Dec 27 19:37:20 RESPONDING   0.307533  388   navierldm.meteo.psu.edu
> Dec 27 19:37:45 RESPONDING   0.043275  388   navierldm.meteo.psu.edu
> Dec 27 19:38:10 RESPONDING   0.153390  388   navierldm.meteo.psu.edu
> Dec 27 19:38:35 RESPONDING   0.042435  388   navierldm.meteo.psu.edu
> Dec 27 19:39:00 RESPONDING   0.041683  388   navierldm.meteo.psu.edu
>
> I did go looking at Unidata's latency page and noted, first, that for our feed
> site the listing was navier.meteo.psu.edu rather than navierldm...this changed
> some time back (to navierldm), but we (cirrus.smsu.edu) were not even listed
> in the feeds (of course, we hadn't received much data that day).

navier and navierldm are two different ethernet ports on the same machine.
I feed data into our ldm via navier and our downstream sites feed from us
on navierldm.  This should be irrelevant to performance and transmission
issues.
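In practice that just means a downstream ldmd.conf request line should name
the navierldm interface; a sketch of the idea is below, though the feed type
and pattern are only illustrative and not copied from your actual
configuration:

  request WMO ".*" navierldm.meteo.psu.edu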

> Now that we have semester break, I'd really like to get this data problem
> ironed out, but I'm afraid I'll have to ask you all to point me in the right
> direction.  I need a suggestion on where to start.

As far as I know, things have been pretty good here.  One of our
forecasters mentioned to me today that our data seems to have been very
complete lately.  A few weeks back I split out our NMC2 input stream from
our WMO input stream on motherlode because it seemed the number of
products feeding serially was too large and I saw frequent backlogs.
However, the NMC2 feed has been down for a while so this wouldn't have been
a recent issue, and I think splitting the streams fixed that problem.
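In case it helps to picture what I mean by splitting the streams, the change
amounted to requesting the feed types on separate ldmd.conf lines so each gets
its own connection instead of one combined request.  The lines below are only
a sketch of the idea, not our exact entries:

  # before: one connection carrying both feed types
  request WMO|NMC2 ".*" motherlode.ucar.edu

  # after: each feed type on its own connection
  request WMO  ".*" motherlode.ucar.edu
  request NMC2 ".*" motherlode.ucar.edu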

The only other thing I can think of is that the network connection from
here to your site breaks down under load.  It may appear snappy to a ping
or traceroute, but when we try to push data down the pipe, perhaps it
overloads or is throttled somewhere, producing a backlog.  Perhaps we can
monitor the reception sometime with "ldmadmin watch" and compare
latencies, or compare statistics files to see what's actually happening.
We might also try anonymous FTP transfers of large files between sites to
see what the raw throughput looks like.  There obviously must be a problem
somewhere.
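If we do get a window to test, something along these lines on your end while
I watch from ours would probably tell us a lot (the feed type and time offset
below are just examples, not a prescription):

  # watch products as they arrive in your queue and note the lag
  ldmadmin watch

  # list what our server has had available over the last hour, to compare
  # against what actually arrived (notifyme only lists products, it
  # doesn't transfer anything)
  notifyme -v -l - -h navierldm.meteo.psu.edu -f WMO -o 3600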

I'm out of the office through much of next week, but I will try to keep up
with email; if you wish to try some tests, I may be able to set a time to
do that.


                                          Art.

Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email:  address@hidden, phone:  814-863-1563