
Re: 20011008: IDD latencies at PSU (cont.)



"Arthur A. Person" wrote:
> 
> Anne,
> 
> Well, the NMC2 data really stands out here at nearly 1/2 GB/hour max.
> Perhaps it's just a bandwidth issue with the degraded network link?
> There are 100,000 more products transmitted as well and all the NMC2 stuff
> tends to be transmitted in clumps over shorter time intervals so the
> bandwidth needed would be considerably larger than for NEXRAD stuff which
> is transmitted more evenly over time.
> 

Comments about this below.

> 
> > Are
> > these all via single request lines, or are you using two request lines
> > to motherlode?  Sometimes people divide up requests into two, one that
> > using an IP address instead of a name.
> 
> I use two request lines, but to the same name, motherlode.ucar.edu.  So,
> it ends up on one stream.  As an aside, I originally put my request on one
> line last spring when I switched to our new machine, but it wouldn't get
> the NMC2 data.  Something peculiar with having to match the allow lines on
> motherlode before it would work.
> 

Yes, I remember something about this now.


> > So, this message is lengthy *and* inconclusive!  Would you please show
> > me ldmpings from ldm.meteo to all your upstream feeds?
> 
> Oct 16 18:08:07      State    Elapsed Port   Remote_Host
> rpc_stat
> Oct 16 18:08:07 RESPONDING   0.086736  388   motherlode.ucar.edu
> Oct 16 18:08:12 RESPONDING   0.042036  388   motherlode.ucar.edu
> Oct 16 18:08:17 RESPONDING   0.037876  388   motherlode.ucar.edu
> Oct 16 18:08:22 RESPONDING   0.038069  388   motherlode.ucar.edu
> Oct 16 18:08:27 RESPONDING   0.036611  388   motherlode.ucar.edu
> Oct 16 18:08:32 RESPONDING   0.042578  388   motherlode.ucar.edu
> Oct 16 18:08:37 RESPONDING   0.042465  388   motherlode.ucar.edu
> Oct 16 18:08:42 RESPONDING   0.036753  388   motherlode.ucar.edu
> Oct 16 18:08:48 RESPONDING   0.616710  388   motherlode.ucar.edu
> Oct 16 18:08:53 RESPONDING   0.036910  388   motherlode.ucar.edu
> Oct 16 18:08:58 RESPONDING   0.036563  388   motherlode.ucar.edu
> Oct 16 18:09:03 RESPONDING   0.036435  388   motherlode.ucar.edu
> Oct 16 18:09:08 RESPONDING   0.036626  388   motherlode.ucar.edu
> Oct 16 18:09:13 RESPONDING   0.039615  388   motherlode.ucar.edu
> Oct 16 18:09:18 RESPONDING   0.039043  388   motherlode.ucar.edu
> Oct 16 18:09:23 RESPONDING   0.036243  388   motherlode.ucar.edu
> Oct 16 18:09:28 RESPONDING   0.039696  388   motherlode.ucar.edu
> 
> Oct 16 18:22:17      State    Elapsed Port   Remote_Host
> rpc_stat
> Oct 16 18:22:17 RESPONDING   0.067776  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:22 RESPONDING   0.038185  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:27 RESPONDING   0.051924  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:32 RESPONDING   0.028152  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:38 RESPONDING   0.050676  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:43 RESPONDING   0.061622  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:48 RESPONDING   0.024893  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:53 RESPONDING   0.026718  388   sunshine.ssec.wisc.edu
> Oct 16 18:22:58 RESPONDING   0.025893  388   sunshine.ssec.wisc.edu
> Oct 16 18:23:03 RESPONDING   0.048494  388   sunshine.ssec.wisc.edu
> Oct 16 18:23:08 RESPONDING   0.037825  388   sunshine.ssec.wisc.edu
> Oct 16 18:23:13 RESPONDING   0.025505  388   sunshine.ssec.wisc.edu
> Oct 16 18:23:18 RESPONDING   0.025088  388   sunshine.ssec.wisc.edu
> Oct 16 18:23:23 RESPONDING   0.056023  388   sunshine.ssec.wisc.edu
> 
> Oct 16 18:23:54      State    Elapsed Port   Remote_Host
> rpc_stat
> Oct 16 18:23:54 RESPONDING   0.804434  388   129.15.193.80
> Oct 16 18:23:59 RESPONDING   0.038246  388   129.15.193.80
> Oct 16 18:24:04 RESPONDING   0.032854  388   129.15.193.80
> Oct 16 18:24:09 RESPONDING   0.031860  388   129.15.193.80
> Oct 16 18:24:15 RESPONDING   0.032607  388   129.15.193.80
> Oct 16 18:24:20 RESPONDING   0.034292  388   129.15.193.80
> Oct 16 18:24:25 RESPONDING   0.040832  388   129.15.193.80
> Oct 16 18:24:30 RESPONDING   0.032638  388   129.15.193.80
> Oct 16 18:24:35 RESPONDING   0.400735  388   129.15.193.80
> Oct 16 18:24:40 RESPONDING   0.032062  388   129.15.193.80
> Oct 16 18:24:45 RESPONDING   0.078552  388   129.15.193.80
> Oct 16 18:24:50 RESPONDING   0.031965  388   129.15.193.80
> Oct 16 18:24:55 RESPONDING   0.031809  388   129.15.193.80
> Oct 16 18:25:00 RESPONDING   0.033331  388   129.15.193.80
> Oct 16 18:25:05 RESPONDING   0.032171  388   129.15.193.80
> 

Taking the min and max times from these 3 data points I get:

site            mint    maxt    maxp    minp
----            ----    ----    ----    ----
motherlode      .036    .616*    28      1.6
motherlode      .036    .043     28     23.3 
ssec            .024    .056     42     17.9
OU              .031    .400*    32      2.5
OU              .031    .079     32     12.7

where   mint = min ldmping time
        maxt = max ldmping time
        maxp = max number of products per second that can be transferred
        minp = min number of products per second that can be transferred

*These values could be outliers; I wonder how often this happens.  For
that reason I added lines to the table that ignore these values.
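The maxp/minp numbers above are just the reciprocal of the round-trip
times, on the assumption that each product costs roughly one RPC round
trip.  A quick sketch of that arithmetic (site names and times taken
from the table):

```python
# Bound the product-transfer rate by the ldmping round-trip time,
# assuming roughly one RPC round trip per product.
times = {
    "motherlode": (0.036, 0.616),
    "ssec":       (0.024, 0.056),
    "OU":         (0.031, 0.400),
}

for site, (mint, maxt) in times.items():
    maxp = 1.0 / mint   # best case: fastest observed round trip
    minp = 1.0 / maxt   # worst case: slowest observed round trip
    print(f"{site:12s} maxp={maxp:5.1f}/s  minp={minp:4.1f}/s")
```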

So, including the .616 time to motherlode, the connection to motherlode
is definitely the worst.  It seems very possible that over time things
could get further and further behind, depending on how often that
maximum time occurs and how bad it gets.   If the burst of NMC2 products
coincided with the bad connection time, that would contribute to the
problem. 

From the stats I sent yesterday, if the max NMC2 feed was 497 MB in an
hour, and the average product size is 25K, that works out to 5.5
prods/sec.  With ldm.meteo only requesting NMC2, ignoring SPARE for the
moment, that's the rate that would need to get through. 
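For the record, the 5.5 prods/sec figure falls out like this (using
1000 KB per MB, which is what matches the number above):

```python
# Max NMC2 volume observed in one hour, and an assumed average
# product size of 25 KB, as stated above.
feed_mb_per_hour = 497
avg_product_kb = 25

products_per_hour = feed_mb_per_hour * 1000 / avg_product_kb
products_per_sec = products_per_hour / 3600
print(f"{products_per_sec:.1f} products/sec")   # about 5.5
```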

I'd like to see these ldmpings run over time, to see how often these bad
times occur.  You could have it log to a file.  Is this of interest to
you?
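If you do log ldmpings to a file, something like the following could
pick out the slow responses afterward.  This is just a sketch: it
assumes the log lines look exactly like the ldmping output quoted
above, and the 0.1-second threshold is an arbitrary choice.

```python
# Flag "slow" ldmping responses in a saved log.  Expected line format
# (as in the output above):
#   Oct 16 18:08:48 RESPONDING   0.616710  388   motherlode.ucar.edu
THRESHOLD = 0.1   # seconds; anything slower is worth a look

def slow_responses(lines, threshold=THRESHOLD):
    slow = []
    for line in lines:
        fields = line.split()
        # fields: month, day, time, state, elapsed, port, host
        if len(fields) >= 7 and fields[3] == "RESPONDING":
            elapsed = float(fields[4])
            if elapsed > threshold:
                timestamp = " ".join(fields[:3])
                slow.append((timestamp, elapsed, fields[6]))
    return slow
```

Run it over the saved file with
`slow_responses(open("ldmping.log"))` to see how often the bad times
show up.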


> > And, please send
> > me ldm.meteo's request lines so I can see exactly what you're asking for
> > now and in what manner.
> 
> Current request lines:
> 
> request WMO|FSL2|DIFAX ".*" nutone.ems.psu.edu
> request NMC2|SPARE ".*" motherlode.ucar.edu
> request NEXRAD|FNEXRAD ".*" sunshine.ssec.wisc.edu
> request MCIDAS ".*" unidata.ssec.wisc.edu
> request NLDN ".*" striker.atmos.albany.edu
> request NEXRD2 ".*" 129.15.193.80
> 
> 

I have a few more tests to propose.  First, I'm interested to know about
the impact of the NMC2|SPARE feed.  Although it would mean missing
data, could you turn off that feed to ldm.meteo and turn back on the
WMO|FSL2|DIFAX feed from motherlode?  I wonder if the problem would
still appear.  And, Jeff said that nutone is now getting the NMC2|SPARE
feed - are you getting latencies from there?  I have a notifyme running
to there now, but no NMC2 products are coming in.

Anne
-- 
***************************************************
Anne Wilson                     UCAR Unidata Program            
address@hidden                 P.O. Box 3000
                                  Boulder, CO  80307
----------------------------------------------------
Unidata WWW server       http://www.unidata.ucar.edu/
****************************************************