
Re: 20010813: LDM latency issues



Unidata Support wrote:
> 
> ------- Forwarded Message
> 
> >To: <address@hidden>
> >cc: Atmos Support <address@hidden>
> >From: "Kevin R. Tyle" <address@hidden>
> >Subject: LDM latency issues
> >Organization: SUNY Albany
> >Keywords: 200108131833.f7DIXr109370 LDM latency
> 
> Hi,
> 
> In the last couple of weeks we have seen latency crop
> up when feeding from snow.cit.cornell.edu, our primary site.
> The following entries from our ldmd.conf show what we request from
> Cornell:
> 
> request DDPLUS|IDS|HDS|FSL2  ^[^a-z]  snow.cit.cornell.edu
> request MCIDAS          ".*"     132.236.186.15
> 
> Typically, data comes in with close to no latency for many hours,
> but then latency fairly quickly pops up to a 40-60 minute
> delay.
> 
> Our failover site, ldm.meteo.psu.edu, does not exhibit this problem.
> However, our connection to Cornell "should" be faster than the failover
> since we both use Applied Theory as our ISP:
> 
> traceroute to snow.cit.cornell.edu (132.236.186.15), 30 hops max, 40 byte packets
>  1  be102 (169.226.4.1)  2 ms  1 ms  1 ms
>  2  169.226.13.1 (169.226.13.1)  2 ms  2 ms  2 ms
>  3  at-gw3-alb-1-0-T3.appliedtheory.net (169.130.23.5)  3 ms  3 ms  3 ms
>  4  at-gsr1-alb-0-0-OC3.appliedtheory.net (169.130.3.37)  2 ms  3 ms  3 ms
>  5  at-gsr1-syr-3-0-OC12.appliedtheory.net (169.130.3.42)  5 ms  5 ms  6 ms
>  6  at-gsr2-syr-1-2-cornelluniv-1.appliedtheory.net (169.130.253.6)  6 ms  6 ms  7 ms
>  7  ccc1-8540-vl7.cit.cornell.edu (128.253.222.137)  7 ms  8 ms  7 ms
>  8  snow.cit.cornell.edu (132.236.186.15)  7 ms  10 ms  8 ms
> 
> Pings and traceroutes to Cornell show no noticeable difference when
> we experience latency--packet loss is virtually nil and ping times
> are on the order of 8-10 ms.
> 
> Based on notifyme output and a look at the FOS routing web page at Unidata,
> it looks like other sites that feed from Cornell are not experiencing
> the problems that we are.  What I am doing now is to split off the
> HDS feed, getting that from PSU.  Once I do that, latency for DDPLUS/IDS
> drops back to near zero.
> 
> It is interesting that this problem has cropped up in the last couple of
> weeks.  Prior to this, the last time we resorted to the split
> DDPLUS-IDS/HDS feeds was early in 2001, when the University was
> experiencing near-saturation of its T3 line.
> 
> I am just wondering if there is anything further we can do to
> investigate what causes the bottleneck between UAlbany and Cornell.
> Students return here in less than two weeks, so I expect things will get
> worse before they get better.
> 
> Thanks,
> 
> Kevin
> 
> ______________________________________________________________________
> Kevin Tyle, Systems Administrator               **********************
> Dept. of Earth & Atmospheric Sciences           address@hidden
> University at Albany, ES-235                    518-442-4571 (voice)
> 1400 Washington Avenue                          518-442-5825 (fax)
> Albany, NY 12222                                **********************
> ______________________________________________________________________
> 
> ------- End of Forwarded Message

Hello Kevin,

First, an apology.  I'm sorry for the delay in responding.  I'm afraid I
misplaced your email in my mailbox.

Network congestion in the northeast is a known problem.  Although your
traceroute looks good, a single traceroute is not necessarily
representative of network quality over time, nor does it reflect what
happens when a large product is sent over the network.
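One simple way to get a longer view is to sample round-trip time
periodically and log it with timestamps, then compare the log against your
latency episodes.  A rough sketch (the rtt_log helper is hypothetical, not
an LDM tool; host, interval, and count are placeholders to adjust):

```shell
# rtt_log: print COUNT timestamped round-trip-time samples, one every
# INTERVAL seconds.  Usage: rtt_log HOST INTERVAL COUNT
# The PING variable lets you override the ping command (e.g. for testing).
rtt_log() {
    host=$1; interval=$2; count=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        # keep only ping's final min/avg/max summary line
        echo "$(date -u '+%Y-%m-%d %H:%M:%S') $(${PING:-ping -c 10} "$host" | tail -1)"
        i=$((i + 1))
        [ "$i" -lt "$count" ] && sleep "$interval"
    done
}

# e.g. one sample every 5 minutes for 24 hours, appended to a log:
# rtt_log snow.cit.cornell.edu 300 288 >> $HOME/rtt.log
```

Running that across one of the bad periods would show whether round-trip
time actually degrades when the LDM latency does.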

We have a web page that discusses network problems - see
http://www.unidata.ucar.edu/packages/ldm/troubleshooting/networkTrouble.html.
It gives some tips for investigating these problems.
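As a point of reference, the latency in question is just the difference
between a product's creation timestamp and its local arrival time.  A toy
helper for same-day UTC times (hypothetical, not an LDM tool; the times
shown are made up):

```shell
# secs: convert an HH:MM:SS UTC time to seconds since midnight
secs() {
    echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'
}

# latency: seconds between product creation time ($1) and arrival time ($2),
# assuming both fall on the same UTC day
latency() {
    echo $(( $(secs "$2") - $(secs "$1") ))
}

latency 18:33:40 19:15:02    # -> 2482 (about 41 minutes)
```

A product created at 18:33:40Z that arrives at 19:15:02Z is 2482 seconds,
or about 41 minutes, late -- in line with the 40-60 minute delays you
describe.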

Unfortunately, once it's been determined that there is a bottleneck,
there's not a lot we can do about it.  We can try feeding your site from
somewhere else.  If your failover site is consistently better, perhaps
we should use that site.  For that matter, if things are working for you
now with the split feed, go with that.  If the bottleneck is actually on
your campus, the split feed may be the best solution anyway, since the
data is spread across two routes instead of one.
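For instance, the split you describe could be expressed in ldmd.conf along
these lines (a sketch based on the request lines quoted above; adjust the
feed sets and patterns as needed):

```
# DDPLUS/IDS (and FSL2) stay with Cornell; HDS moves to the failover
request DDPLUS|IDS|FSL2  ^[^a-z]  snow.cit.cornell.edu
request HDS              ^[^a-z]  ldm.meteo.psu.edu
request MCIDAS           ".*"     132.236.186.15
```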

I'm cc'ing this message to Jeff Weber, who oversees the IDD topology.
He may have better ideas about which site would be best to feed you.

I hope this is helpful.  Again, my apologies for not responding sooner.

Anne
-- 
***************************************************
Anne Wilson                     UCAR Unidata Program            
address@hidden                 P.O. Box 3000
                                  Boulder, CO  80307
----------------------------------------------------
Unidata WWW server       http://www.unidata.ucar.edu/
****************************************************