
20000825: ldm feed



Art and Bill,

Wednesday and Thursday of this week, the route out of UCAR (motherlode)
was going over the commodity Internet instead of vBNS/Abilene.  That
router problem was corrected yesterday afternoon.

This agrees with what Art mentioned.
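
(If you want to confirm which path is in use from your end, a
traceroute toward the feed host will show it; something like

    traceroute motherlode.ucar.edu

run from your LDM machine -- hops with vbns or abilene in their names
indicate the research network, while commercial carrier names indicate
the commodity route.  The host name here is just illustrative.)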

Steve Chiswell




>From: "Arthur A. Person" <address@hidden>
>Organization: UCAR/Unidata
>Keywords: 200008251906.e7PJ6DN13646

>Bill,
>
>On Fri, 25 Aug 2000 address@hidden wrote:
>
>> Thank you very much.  Pardon me, as I am not very good either with network
>> problems or operational met. jargon, but, FWIW, the last surface data we
>> got today was 13Z.  Yesterday we got up to 15Z, then nothing 'til I went
>> home at 5:30.
>
>I think yesterday's problem was some sort of trouble coming out of NCAR.
>Today looks pretty snappy into our system (only a minute or two delay).
>
>> The MCIDAS feed seems to come in all right... we have the 18Z radar
>> composite, for example, 17Z VIS and IR.
>
>Yesterday's problem only affected DDPLUS and HRS type data, so
>MCIDAS, etc., was okay.  That's a bit odd for today, however, since we
>seem to be getting things fine (unless it just improved in the last
>few hours).
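>
>(If you want to watch those feed types directly, the LDM notifyme
>utility will show you what your upstream host has; a rough sketch,
>with the host name only illustrative:
>
>    notifyme -vl- -h motherlode.ucar.edu -f "DDPLUS|HRS"
>
>Comparing the product times it logs against your system clock gives
>the feed delay.)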
>
>> Oh yeah, we're back in school, and that frequently impacts our data
>> response.  In the past, though, outgoing traceroutes have disclosed the
>> problem.  Of course, there's always something new.
>
>I checked with our network folks... they say there are no network
>problems here at the moment.  They tracerouted your site and say the
>responses on this end are nominal, but they noticed a slight tendency
>toward longer times at more.net (your ISP?).  Could you check with
>your network people or ISP and see if they are having any problems?
>I did a bunch of pings to nimus and cirrus, and the lost packets vary
>a lot, ranging from 0% to about 25%.  It may turn out simply to be a
>slow pipe somewhere between here and there, which will come down to
>beating on an ISP somewhere to fix it.
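>
>(For reference, the sort of check I mean is just something like
>
>    ping -s cirrus.smsu.edu 56 20        (Solaris-style ping)
>    ping -c 20 cirrus.smsu.edu           (most other unixes)
>
>and then reading the "% packet loss" figure off the summary line.)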
>
>FYI, here's the traceroute I did:
>
>navier# traceroute cirrus.smsu.edu
>traceroute to cirrus.smsu.edu (146.7.97.225), 30 hops max, 40 byte packets
> 1  128.118.28.1 (128.118.28.1)  1.982 ms  1.787 ms  1.722 ms
> 2  dkbln-1-fre2-rtvlan.ems.psu.edu (128.118.64.1)  1.052 ms  0.586 ms  0.556 ms
> 3  Willard1-ATM8-0.2.gw.psu.edu (172.28.20.1)  0.942 ms  1.141 ms  0.866 ms
> 4  Telecom2-ATM5-0-0.1.gw.psu.edu (128.118.44.5)  3.376 ms  2.597 ms  3.096 ms
> 5  nss5-penn-state-h10-0.psc.net (192.88.115.89)  6.445 ms  6.477 ms  6.031 ms
> 6  12.127.244.61 (12.127.244.61)  36.457 ms  22.728 ms  23.611 ms
> 7  gbr2-a30s1.n54ny.ip.att.net (12.127.0.10)  21.554 ms *  21.765 ms
> 8  gbr3-p00.n54ny.ip.att.net (12.122.5.246)  19.399 ms  23.064 ms  21.629 ms
> 9  gbr3-p30.wswdc.ip.att.net (12.122.2.166)  30.971 ms * *
>10  gbr3-p40.sl9mo.ip.att.net (12.122.2.82)  53.122 ms  53.576 ms *
>11  gbr2-p60.sl9mo.ip.att.net (12.122.1.245)  45.871 ms  45.968 ms  46.810 ms
>12  ar1-a3120s2.sl9mo.ip.att.net (12.127.4.9)  47.081 ms  50.221 ms  47.150 ms
>13  12.127.112.202 (12.127.112.202)  68.209 ms  61.941 ms  65.426 ms
>14  sp-r12-01-atm0-0-103.mo.more.net (150.199.7.97)  59.658 ms  59.172 ms  63.572 ms
>15  SMSU-ATM2-0.gw.more.net (150.199.235.250)  66.520 ms  59.433 ms  53.289 ms
>16  London.smsu.edu (146.7.4.6)  54.785 ms  52.749 ms  59.511 ms
>17  * cirrus.smsu.edu (146.7.97.225)  62.154 ms  71.193 ms
>
>
>I'll continue to monitor and see if I can come up with any other ideas.
>
>Now that I'm thinking about it... have you checked your machine to make
>sure there are no run-away processes on it that might be slowing it down
>and preventing it from keeping up with the data?  That's worth checking if
>you haven't.
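>
>(A quick way to check, for example:
>
>    ps -eo pcpu,pid,user,args | sort -rn | head
>
>lists the heaviest CPU users first; watching "top" for anything pinned
>near 100% works too, if you have it installed.)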
>
>Let me know if I can help further.
>
>
>                                           Art.
>
>Arthur A. Person
>Research Assistant, System Administrator
>Penn State Department of Meteorology
>email:  address@hidden, phone:  814-863-1563
>