Reference: network issues on CONDUIT feed
There is the distinct possibility that, if your queue isn't large enough,
you could have a problem processing all the data before it gets
overwritten in your queue, especially if you didn't take the increased
data volume into account. The queue for that machine is 1.8 GB, which
from my understanding 'should' be large enough.
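For reference, it's easy to check whether the queue is actually keeping
up: pqmon reports, among other statistics, the age in seconds of the
oldest product still in the queue. If that age ever drops near or below
the feed's latency, products are being overwritten before they can be
processed. A quick sketch (the queue path below is just an example;
adjust for your installation):

    # print product-queue statistics; the "age" column is the age,
    # in seconds, of the oldest product still in the queue
    pqmon -q /usr/local/ldm/data/ldm.pq

As long as that age stays comfortably above the observed latency, the
1.8 GB queue really should be in the clear.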
The machine that was receiving CONDUIT (I have since
canceled the feed due to the latencies) reported
CPU usage at the time of the extreme latency as:
ldm@unidata3:~/util/models/mrf201$ uptime
16:57:33 up 3 days, 23:22, 2 users, load average: 0.08, 0.14, 0.10
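The load averages above show the box essentially idle. For completeness,
something like the following would also rule out the machine being
I/O-bound rather than CPU-bound (just a sketch; not something I captured
at the time):

    # memory, swap, block-I/O, and CPU summary every 5 seconds, 12 samples
    vmstat 5 12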
Your HRS latency plot shows the hit starting today.
The interesting thing about the latency is that the IDD topology
map showed very high latencies for both myself and PA,
whose feeds run through Abilene (at least mine does):
http://weather.bgsu.edu/files/jweber/oct_10_2005_2054z_IDD.gif
Also note that the feed from TAMU had lower latencies,
but they were still 'up there'.
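As an aside, the latency can also be watched directly from the command
line instead of waiting on the rtstats plots. Something like the
following (the upstream hostname is just a placeholder) prints each
product's origination time as it arrives, so the lag is visible
immediately:

    # request notification of CONDUIT products from the upstream host;
    # -o 3600 asks for anything created within the last hour
    notifyme -vl- -f CONDUIT -h upstream.example.edu -o 3600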
Additionally, if you compare the latencies from all
three sources together:
http://weather.bgsu.edu/files/jweber/conduit.gif
you can see that TAMU was fairly consistent yet slow,
while IDD and ATM had huge spikes.
This would seem to indicate that the
downstream machine (unidata3) was
not the problem...
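Along the same lines, ldmping gives a rough round-trip time to each
upstream LDM, so the three sources can be compared head to head. A
sketch (the hostnames are placeholders, not the actual upstreams):

    # one round-trip measurement to each of the three upstream LDMs;
    # with -i 0, ldmping should exit after a single pass
    for h in idd.example.edu atm.example.edu tamu.example.edu; do
        ldmping -i 0 "$h"
    done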
With that in mind, I examined the
'Abilene weather map' to see what it
reported, and as shown below,
it was very busy:
http://weather.bgsu.edu/files/jweber/abilene.png
So... is it possible that the hops through
Abilene caused the majority of the problem,
and that the route is in fact too overloaded
to handle the additional large feeds?
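For anyone who wants to confirm which path their feed actually takes, a
plain traceroute from the downstream box toward the upstream host
(hostname again a placeholder) should show whether it crosses Abilene
and roughly where the delay accumulates:

    # list each hop toward the upstream LDM; hops with 'abilene'
    # in their reverse-DNS names indicate the Abilene backbone
    traceroute upstream.example.edu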
To sum up: unidata3 was basically sitting there
waiting for the data with a queue that should be
large enough... the routes from Boulder
to both Ohio and PA, according to the
topology map, were slow... and extreme
differences showed up between
the IDD, ATM, and TAMU feeds.
I'm certainly not a network expert, but there
does seem to be some logic there.