Greetings,
I see that CONDUIT latencies are spiking at the moment. I assume it is related
to the firewall upgrade in progress? Does hope spring eternal that this
firewall upgrade will help with the CONDUIT struggles?
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
daryl
--
/**
* daryl herzmann
* Systems Analyst III -- Iowa Environmental Mesonet
* https://mesonet.agron.iastate.edu
*/
________________________________________
From: conduit <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt
<poker@xxxxxxxxxxxx>
Sent: Tuesday, September 3, 2019 1:47 PM
To: Person, Arthur A.; Derek VanPelt - NOAA Affiliate
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike
Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Conduit lags are larger since
~00UTC 8/25
I wonder if there is any possibility of building in a small delay between
ingesting each forecast hour's data into the CONDUIT data stream. I don't know
what the ingest process looks like, but if we could spread out the time over
which the data goes out, so it doesn't all get dumped in at once, the LDM might
be able to keep up and the lags would be smaller. While this would delay the
reception of the later forecast hours, my suspicion is that people are most
urgently interested in the shorter-range forecasts (96 h or less?), and getting
the later forecast times a bit later, but complete, would be preferable to
getting them sooner but incomplete.
Perhaps a 10-second delay between each forecast hour of the GFS and GEFS might
do it?
Just a thought.
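For concreteness, here's a minimal sketch of what a staggered ingest could look
like - purely hypothetical, since I don't know NCEP's actual process - assuming
the per-forecast-hour GRIB2 files get inserted with LDM's pqinsert (the
directory, file pattern, and feedtype argument here are just placeholders):

#!/usr/bin/env python3
# Illustrative only: pause between forecast hours so a whole model run
# isn't dumped into the CONDUIT queue all at once.
import re
import subprocess
import time
from collections import defaultdict
from pathlib import Path

DELAY_SECONDS = 10                         # pause between forecast hours
GRIB_DIR = Path("/data/gfs/incoming")      # hypothetical staging directory

def forecast_hour(path: Path) -> int:
    # Pull the forecast hour out of names like gfs.t12z.pgrb2.1p00.f252
    m = re.search(r"\.f(\d{3})$", path.name)
    return int(m.group(1)) if m else -1

def main() -> None:
    # Group files by forecast hour, then insert one hour at a time.
    by_hour = defaultdict(list)
    for f in sorted(GRIB_DIR.glob("gfs.t12z.pgrb2.1p00.f*")):
        by_hour[forecast_hour(f)].append(f)
    for hour in sorted(by_hour):
        for f in by_hour[hour]:
            # pqinsert is LDM's standard insertion tool; -f sets the feedtype.
            subprocess.run(["pqinsert", "-f", "CONDUIT", str(f)], check=True)
        time.sleep(DELAY_SECONDS)  # let downstream LDMs drain before the next hour

if __name__ == "__main__":
    main()

The same effect could presumably be had by adding a sleep to whatever script
already drives the insertion.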
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Tuesday, September 3, 2019 12:33 PM
To: Pete Pokrandt <poker@xxxxxxxxxxxx>; Derek VanPelt - NOAA Affiliate
<derek.vanpelt@xxxxxxxx>
Cc: Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx
<conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow
<ncep.list.pmb-dataflow@xxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>;
Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>;
support-conduit@xxxxxxxxxxxxxxxx <support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Conduit lags are larger since
~00UTC 8/25
Derek,
Prior to when these latency issues first started, maybe this past February, I
used to run no split, just one connection to NCEP, and had superb throughput.
Then something changed, and the network's ability to pass high-volume data
without the use of parallel connections degraded; it has not been as good
since. With the use of multiple connections, throughput recently has generally
been pretty good, at least until the past week or so. I've been using a
20-connection split for a while now to ensure data throughput, but I agree with
Pete that we're at the point of diminishing returns on splits. This morning's
CONDUIT data is running about 2000 seconds behind with a 20-way split... what
else can we do? Something is causing the network to no longer be able to handle
the volume of data available in the CONDUIT feed.
Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: conduit <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt
<poker@xxxxxxxxxxxx>
Sent: Tuesday, September 3, 2019 12:17 PM
To: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Cc: Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx
<conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow
<ncep.list.pmb-dataflow@xxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>;
Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>;
support-conduit@xxxxxxxxxxxxxxxx <support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Conduit lags are larger since
~00UTC 8/25
Derek,
This is similar to the results that have previously been found - splitting the
feed further generally results in lower latencies.
I believe it was Art Person who noted a huge change back in April 2019 when
going from a 2-way split feed to a 20-way split.
I suspect we're going to hit a point of diminishing returns eventually, but
that's just a guess, and I don't know where that point is.
The latencies are larger for today's 12 UTC cycle, but manageable so far -
under 1000 seconds, though rising. I will continue to monitor.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Tuesday, September 3, 2019 9:10 AM
To: Pete Pokrandt <poker@xxxxxxxxxxxx>
Cc: Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; Mike Zuranski
<zuranski@xxxxxxxxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx
<conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow
<ncep.list.pmb-dataflow@xxxxxxxx>; Dustin Sheffler - NOAA Federal
<dustin.sheffler@xxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx
<support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Conduit lags are larger since
~00UTC 8/25
Hi Pete,
I am glad you got positive results by splitting your pull further. I was
looking at your chart before I saw this email and was trying to figure out what
had changed.
Is this different from the results you had previously found? I know that there
had been experimentation with the number of splits before, so I am curious
whether this is new behaviour or reconfirms your understanding.
Thanks,
Derek
On Tue, Sep 3, 2019 at 1:33 AM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:
Judging from tonight's 00 UTC run, it looks like the 20-way split does make a
big difference. Latencies were < 200-300 s with no lost data. I'll keep it like
that and see how tomorrow's 12 UTC run suite works out.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Monday, September 2, 2019 9:27 PM
To: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Cc: Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx <conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx>; Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx <support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Conduit lags are larger since
~00UTC 8/25
Just an FYI - CONDUIT latencies have been much larger starting with the
2019/09/01 12 UTC run. We lost data for the 12 UTC and 18 UTC runs on the 2nd,
and I expect we will continue to lose data if the latencies stay this large.
I can try a 20-way split feed and see if that helps - I'm currently on a 10-way
split feed from conduit.ncep.noaa.gov.
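For reference, and as an assumption about the exact patterns rather than a copy
of my config: an N-way CONDUIT split is normally just multiple REQUEST lines in
ldmd.conf whose extended regular expressions partition the sequence number at
the end of each CONDUIT product ID, something like this for a 10-way split:

REQUEST CONDUIT "0$" conduit.ncep.noaa.gov
REQUEST CONDUIT "1$" conduit.ncep.noaa.gov
REQUEST CONDUIT "2$" conduit.ncep.noaa.gov
REQUEST CONDUIT "3$" conduit.ncep.noaa.gov
REQUEST CONDUIT "4$" conduit.ncep.noaa.gov
REQUEST CONDUIT "5$" conduit.ncep.noaa.gov
REQUEST CONDUIT "6$" conduit.ncep.noaa.gov
REQUEST CONDUIT "7$" conduit.ncep.noaa.gov
REQUEST CONDUIT "8$" conduit.ncep.noaa.gov
REQUEST CONDUIT "9$" conduit.ncep.noaa.gov

Each REQUEST line becomes its own connection to the upstream host; a 20-way
split subdivides further, e.g. on the last two digits of the sequence number.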
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Tuesday, August 27, 2019 3:50 PM
To: Pete Pokrandt <poker@xxxxxxxxxxxx>
Cc: Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>; Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx <conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx>; Tyle, Kevin R <ktyle@xxxxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx <support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [Ncep.list.pmb-dataflow] Conduit lags are larger since ~00UTC 8/25
Hi Pete,
The increased latencies you saw resulted from some rebalancing of applications
between Boulder and College Park yesterday afternoon, in anticipation of
increased load during CWD. We have also seen some spikes in throughput during
high-volume periods since the change, and are taking action to try to minimize
them.
We have just rebalanced again - moving one of our applications back to Boulder
- which should give more bandwidth to Conduit (and MRMS), both of which run in
College Park.
Please let us know if you see better performance overnight; we will also check
your metrics on the graph you provided.
Thank you,
Derek
On Tue, Aug 27, 2019 at 1:20 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx> wrote:
Not sure if something changed in the last day or two, but the peak lags on the
CONDUIT data feed have been somewhat larger than usual. I noticed it starting
with the 12 UTC August 25 run cycle, when the peak latencies rose to ~1000
seconds. Over the past few run cycles the peak lags have been even higher, up
over 2500 s for the 12 UTC runs on the 26th and 27th.
I don't think these are actually causing us to lose data yet - that happens
when the latencies exceed 3600 s - but they're close, and they might be
impacting downstream sites.
Mainly just an FYI to let you know. Latencies had been peaking closer to
300-400 s over the past several weeks.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>
Sent: Friday, July 19, 2019 2:17 PM
To: Pete Pokrandt <poker@xxxxxxxxxxxx>
Cc: Tony Salemi - NOAA Federal <tony.salemi@xxxxxxxx>; Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx <conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx>; Tyle, Kevin R <ktyle@xxxxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx <support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [Ncep.list.pmb-dataflow] [conduit] Large CONDUIT lags starting
with 18 UTC July 1 2019 cycle
Thanks for letting us know, Pete. Yes, most apps, with the exception of NOMADS,
are still in Boulder and will remain that way for at least the next week. We're
hoping to be much less impactful to CONDUIT the next time we bring them all
back to College Park, by balancing the load better than we did this time.
-Dustin
On Fri, Jul 19, 2019 at 6:39 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx> wrote:
Just to follow up - so far, latencies have been good. Since the glitch around
18 UTC yesterday, no more than 300 or 400 s, which is pretty typical for the
CONDUIT feed on a normal day.
I am assuming that most services are still failed over to Boulder? If so, it
will be interesting to see what happens when they are all moved back to College
Park.
Thanks, and have a good weekend all!
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Tony Salemi - NOAA Federal <tony.salemi@xxxxxxxx>
Sent: Friday, July 19, 2019 5:17 AM
To: Pete Pokrandt <poker@xxxxxxxxxxxx>
Cc: Tyle, Kevin R <ktyle@xxxxxxxxxx>; Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>; Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx <conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx <support-conduit@xxxxxxxxxxxxxxxx>
Subject: Re: [Ncep.list.pmb-dataflow] [conduit] Large CONDUIT lags starting
with 18 UTC July 1 2019 cycle
Pete,
Glad to hear the changes we made yesterday helped restore your ability to
acquire data.
On Thu, Jul 18, 2019 at 6:50 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx> wrote:
One more update. Shortly after I sent this email, CONDUIT data started flowing
again. 15 UTC SREF and some NDFD products are coming in now. Lags seem to be
dropping quickly to acceptable values.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Thursday, July 18, 2019 1:28 PM
To: Tyle, Kevin R; Anne Myckow - NOAA Affiliate
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting
with 18 UTC July 1 2019 cycle
Our CONDUIT feed hasn't seen any data at all for the last hour or so. The last
product that came in was from the 252h forecast of the GFS. At that time the
lag was up around 3600s.
20190718T173422.935745Z pqutil[18536] pqutil.c:display_watch:1189
INFO 24479 20190718163423.193677 CONDUIT 531
data/nccf/com/gfs/prod/gfs.20190718/12/gfs.t12z.pgrb2.1p00.f252
!grib2/ncep/GFS/#000/201907181200F252/TMPK/100 m HGHT! 000531
FYI.
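As a sanity check on that number: the lag is just the difference between the
receipt timestamp in the log prefix and the product's origin timestamp in the
entry above. A quick illustrative calculation (timestamps copied from that
line; Python used only for the arithmetic):

from datetime import datetime

received = datetime.strptime("20190718T173422.935745", "%Y%m%dT%H%M%S.%f")
created = datetime.strptime("20190718163423.193677", "%Y%m%d%H%M%S.%f")
# ~3599.7 s, i.e. right at the 3600 s cutoff mentioned earlier in this thread
print((received - created).total_seconds())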
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Tyle, Kevin R <ktyle@xxxxxxxxxx>
Sent: Thursday, July 18, 2019 1:24 PM
To: Pete Pokrandt; Anne Myckow - NOAA Affiliate
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: RE: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting
with 18 UTC July 1 2019 cycle
I can confirm that our 12Z GFS receipt via CONDUIT was affected by the lags; we
ended up with a lot of missing grids beginning with forecast hour 120. Since we
feed from Pete at UWisc-MSN, not too surprising there.
First time we've missed grids in a few weeks.
--Kevin
_____________________________________________
Kevin Tyle, M.S., Manager of Departmental Computing
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 228, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________
From: conduit [mailto:conduit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pete Pokrandt
Sent: Thursday, July 18, 2019 12:29 PM
To: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Cc: Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx>; Mike Zuranski <zuranski@xxxxxxxxxxxxxxx>; Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting
with 18 UTC July 1 2019 cycle
Just an update -
CONDUIT latencies decreased starting with the 12 UTC run yesterday,
corresponding to the move of many other services to Boulder.
However, the 12 UTC run today (7/18) is showing much larger CONDUIT latencies
(1800s at present)
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Sent: Tuesday, July 9, 2019 8:00 AM
To: Pete Pokrandt
Cc: Dustin Sheffler - NOAA Federal; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Gilbert Sebenste; Person, Arthur A.; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC
July 1 2019 cycle
Hi everyone,
During our troubleshooting yesterday we found another network issue that I
believe is causing the problem with CONDUIT when most things are hosted out of
College Park. Our networking team is pushing to have it remedied this week, and
I'm hopeful it will fix the CONDUIT latency permanently. If it does not, we
will re-engage our networking group to look into it actively again.
Thanks for your patience with this; more to come.
Anne
On Mon, Jul 8, 2019 at 11:02 PM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:
Thanks, Dustin. It was definitely better for the 18 UTC suite of runs.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>
Sent: Monday, July 8, 2019 12:16 PM
To: Pete Pokrandt
Cc: Anne Myckow - NOAA Affiliate; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Gilbert Sebenste; Person, Arthur A.; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC
July 1 2019 cycle
Hi Pete,
We had to shift NOMADS temporarily to our College Park data center for work in
our Boulder data center. While doing so, our network team was collecting data
to remedy the slowness we've seen with HTTPS, FTP, and LDM when all of our
applications are in the College Park data center. You should start seeing
relief, as we've switched NOMADS back to the other data center now.
-Dustin
On Mon, Jul 8, 2019 at 4:55 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx> wrote:
FYI latencies are much larger with today's 12 UTC model suite. They had been
peaking around 500-800s for the past week. Today they are up over 2000 with the
12 UTC suite.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Sent: Wednesday, July 3, 2019 8:17 AM
To: Pete Pokrandt
Cc: Derek VanPelt - NOAA Affiliate; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Gilbert Sebenste; Person, Arthur A.; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC
July 1 2019 cycle
Pete et al,
Can you tell us how the latency looks this morning and overnight?
Thanks,
Anne
On Tue, Jul 2, 2019 at 9:23 PM Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx> wrote:
Hi Pete,
We've been able to re-create the CONDUIT LDM issues with other LDMs within NCO.
We do not know the root cause yet, but we are failing some services out of
College Park now to alleviate the traffic. You may experience slowness again
tomorrow while we troubleshoot with the whole team in the office, but overnight
(Eastern Time, anyway) should be better.
I'm adding you and the other people with actual email addresses (rather than
the lists) to the email chain where we are keeping everyone apprised, so don't
be surprised to get another email with OPEN: TID <lots of other text> in the
subject line - that's about this slowness.
Thanks,
Anne
On Tue, Jul 2, 2019 at 11:49 AM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:
Thanks, Anne.
Lag is still there on the current 12 UTC cycle FYI
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
Pete
Sent from my iPhone
On Jul 2, 2019, at 10:18 AM, Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx> wrote:
Hi Pete,
We (NCO) have fully loaded our College Park site again, where conduit lives.
I'll see if I can get the attention of our networking folks today about this
since they just installed new hardware that we believe should have increased
our network capacity.
Thanks,
Anne
On Tue, Jul 2, 2019 at 1:25 AM 'Pete Pokrandt' via _NCEP list.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx> wrote:
All,
Something happened in the past day or two that has resulted in large lags (and
data loss) between conduit.ncep.noaa.gov and idd.aos.wisc.edu (and Unidata
too).
Based on these IDD stats, there was a bit of a lag increase with the 06 UTC
July 1 runs, a little larger with the 12 UTC runs, and then much bigger for the
18 UTC July 1 and 00 UTC July 2 runs. Any idea what might have happened or
changed? The fact that Unidata's and UW-AOS's graphs look so similar suggests
that it's something upstream of us.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
<iddstats_conduit_idd_aos_wisc_edu_20190702.gif>
Here's Unidata's graph:
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+lead.unidata.ucar.edu
<iddstats_conduit_lead_unidata_ucar_edu_20190702.gif>
Thanks,
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Tuesday, April 23, 2019 3:40 PM
To: Pete Pokrandt
Cc: Person, Arthur A.; Gilbert Sebenste; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Hi All,
There are a few things going on here.
The strongest driver of your download speeds is the presence or absence of
NOMADS in College Park. When NOMADS is in CPRK, dissemination from the entire
data center (including our Conduit servers, which only exist in College Park)
can be affected at peak model download times. Adding to this are new rules that
require all NOMADS users to follow the top-level VIP. Previously some of our
users would pull from Boulder even when the VIP pointed to College Park; that
is no longer regularly possible, as the backup server is intentionally being
blocked to traffic.
I have been asked to go back and, using internal metrics and the download
speeds provided in this thread (thanks!), firmly establish the timeline. I hope
to do so in the next few days, but I believe the answer will be as stated
above.
As for splitting the request into many smaller requests: it is clearly having a
positive effect. As long as you don't (and we don't) hit an upper
connection-count limit, this appears to be the best way to minimize latency
during peak download times.
More to come. Thanks for keeping this discussion alive - it has shed light on
the Conduit download speeds and also provides context for some of our
wide-ranging issues.
Thank you,
Derek
On Tue, Apr 23, 2019 at 3:07 PM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:
I'm still on the 10-way split that I've been on for quite some time, and
without my changing anything, our lags got much, much better starting with the
12 UTC model sequence on Friday, 4/19. I don't know whether this correlated
with Unidata switching to a 20-way split, but that happened around the same
time.
Here are my lag plots: the first ends at 04 UTC 4/20, and the second just now
at 19 UTC 4/23. Note that the Y axis on the first plot goes to ~3600 seconds,
but on the second only to ~100 seconds.
<iddstats_CONDUIT_idd_aos_wisc_edu_ending_20190423_1900UTC.gif>
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Tuesday, April 23, 2019 1:49 PM
To: Pete Pokrandt; Gilbert Sebenste
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
I switched our test system iddrs2a feeding from conduit.ncep.noaa.gov back to
a 2-way split (from a 20-way split) yesterday to see how it would hold up:
<pastedImage.png>
While not as good as prior to February, it wasn't terrible, at least until this
morning. Looks like the 20-way split may be the solution going forward if this
is the "new normal" for network performance.
Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Saturday, April 20, 2019 12:29 AM
To: Person, Arthur A.; Gilbert Sebenste
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Well, I haven't changed anything in the past few days, but my lags dropped back
to pretty much pre-February-10 levels starting with today's (20190419) 12 UTC
run. I know Unidata switched to a 20-way split feed around that same time... I
am still running a 10-way split. I didn't change anything between today's 06
UTC run and the 12 UTC run, but the lags dropped considerably and look like
they used to.
I wonder if some bad piece of hardware got swapped out somewhere, or if some
change was made internally at NCEP that fixed whatever was going on. Or perhaps
the Unidata switch to a 20-way feed somehow reduced the load on a router
somewhere and data is getting through more easily?
Strange.
Pete
<conduit_lag_idd.aos.wisc.edu_20180420_0409UTC.gif>
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Thursday, April 18, 2019 2:20 PM
To: Gilbert Sebenste; Pete Pokrandt
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
All --
I switched our test system, iddrs2a, feeding from conduit.ncep.noaa.gov from a
2-way split to a 20-way split yesterday, and the results are dramatic:
<pastedImage.png>
Although conduit feed performance at other sites improved a little last night
with the MRMS feed failure, that doesn't explain this improvement entirely.
This leads me to ponder the causes of such an improvement:
1) The network path does not appear to be bandwidth constrained, otherwise
there would be no improvement no matter how many pipes were used;
2) The problem, therefore, would appear to be packet oriented, either path
packet saturation or packet shaping.
I'm not a networking expert, so maybe I'm missing another possibility here, but
I'm curious whether packet shaping could account for some of the throughput
issues. I've also been having trouble getting timely delivery of our Unidata
IDD satellite feed, and discovered that switching that to a 10-way split feed
(from a 2-way split) has reduced the latencies from 2000-3000 seconds down to
less than 300 seconds. Interestingly, the peak satellite feed latencies (see
below) occur at the same time as the peak conduit latencies, but this path is
unrelated to NCEP (as far as I know). Is it possible that Internet2 could be
packet-shaping their traffic, and that this could be part of the cause for the
packet latencies we're seeing?
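One back-of-the-envelope way to see how shaping or per-flow limits and
connection splitting interact: steady-state TCP throughput per connection is
roughly window/RTT, so if something holds the effective window down, N parallel
requests scale the aggregate roughly N-fold until the path itself saturates.
A small illustrative calculation - the 256 KiB window is an assumption, not a
measurement, and the RTTs are simply the ~25 ms mid-path and ~110 ms last-hop
values from the traceroutes further down this thread:

def throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    # Rough steady-state throughput of one TCP connection, in Mbit/s.
    return window_bytes * 8 / rtt_seconds / 1e6

WINDOW = 256 * 1024  # assumed effective window per connection (bytes)
for rtt_ms in (25.0, 110.0):      # ~mid-path RTT vs ~last-hop RTT seen below
    per_conn = throughput_mbps(WINDOW, rtt_ms / 1000.0)
    for n in (1, 2, 10, 20):      # split counts discussed in this thread
        print(f"RTT {rtt_ms:5.1f} ms, {n:2d} connection(s): "
              f"~{per_conn * n:6.1f} Mbit/s aggregate (until the path saturates)")

At 110 ms a single 256 KiB window tops out around 19 Mbit/s, which is why
adding connections helps so much; at 25 ms the same window gives roughly
84 Mbit/s per connection.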
Art
<pastedImage.png>
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Gilbert Sebenste <gilbert@xxxxxxxxxxxxxxxx>
Sent: Thursday, April 18, 2019 2:29 AM
To: Pete Pokrandt
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
FYI: all evening and into the overnight, MRMS data has been missing, QC BR has
been down for the last 40 minutes, but smaller products are coming through
somewhat more reliably as of 6Z. CONDUIT was still substantially delayed around
4Z with the GFS.
Gilbert
On Apr 16, 2019, at 5:43 PM, Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:
Here are a few traceroutes from just now, from idd-agg.aos.wisc.edu to
conduit.ncep.noaa.gov. The lags are up and running around 600-800 seconds right
now. I'm not including all of the * * * lines after 140.90.76.65, which is
presumably behind a firewall.
2209 UTC Tuesday Apr 16
traceroute -p 388
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454722842&sdata=G%2BCJ8J%2FgavrXJV0ZRp%2Fa7d3L3%2Fvhs%2F3SQ7y1OKsKRJY%3D&reserved=0>
traceroute to
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454722842&sdata=G%2BCJ8J%2FgavrXJV0ZRp%2Fa7d3L3%2Fvhs%2F3SQ7y1OKsKRJY%3D&reserved=0>
(140.90.101.42), 30 hops max, 60 byte packets
1
vlan-510-cssc-gw.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fvlan-510-cssc-gw.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454732836&sdata=6Ysx0MXm%2BraXgZmTH6iXc7DBqdzrcTbVe%2BGd44gckAE%3D&reserved=0>
(144.92.130.1) 0.906 ms 0.701 ms 0.981 ms
2 128.104.4.129 (128.104.4.129) 1.700 ms 1.737 ms 1.772 ms
3
rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454732836&sdata=PhKOR0ILwAUMRj06o%2B0fP7FQ4q24vQXVHMP%2FYlyH%2FaQ%3D&reserved=0>
(146.151.168.4) 1.740 ms 3.343 ms 3.336 ms
4
rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454742826&sdata=98hp9rdk%2Bn8QpqOBjQUnNTH7Nby112QCtYLVkB4xSYQ%3D&reserved=0>
(146.151.166.122) 2.043 ms 2.034 ms 1.796 ms
5 144.92.254.229 (144.92.254.229) 11.530 ms 11.472 ms 11.535 ms
6
et-1-1-5.4079.rtsw.ashb.net.internet2.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtsw.ashb.net.internet2.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454752830&sdata=XBW843Rv0gP6WzVFFLxmwUK%2FIyvbk1O5GRss87tfbwY%3D&reserved=0>
(162.252.70.60) 22.813 ms 22.899 ms 22.886 ms
7
et-11-3-0-1275.clpk-core.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fet-11-3-0-1275.clpk-core.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454752830&sdata=B5EN4OJqFqM9kQ1WjkkCCA6zLd5BvVUW2K8bxKnbKaI%3D&reserved=0>
(206.196.177.2) 24.248 ms 24.195 ms 24.172 ms
8
nwave-clpk-re.demarc.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fnwave-clpk-re.demarc.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454762816&sdata=dTNhbkypMhMcOxKNhArELiW8e5Qy7e1zk8idCHWcVQA%3D&reserved=0>
(206.196.177.189) 24.244 ms 24.196 ms 24.183 ms
9
ae-2.666.rtr.clpk.nwave.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtr.clpk.nwave.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454762816&sdata=uWJm%2Bz%2FjOQCAK6CBYhNpECkpHswK3JdAaGBBNFNzkTQ%3D&reserved=0>
(137.75.68.4) 24.937 ms 24.884 ms 24.878 ms
10 140.208.63.30 (140.208.63.30) 134.030 ms 126.195 ms 126.305 ms
11 140.90.76.65 (140.90.76.65) 106.810 ms 104.553 ms 104.603 ms
2230 UTC Tuesday Apr 16
traceroute -p 388
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454772812&sdata=a6LTr4tSqvP3RSjBikxWOXJrSWwYVT3dbbOHti5oPHs%3D&reserved=0>
traceroute to
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454772812&sdata=a6LTr4tSqvP3RSjBikxWOXJrSWwYVT3dbbOHti5oPHs%3D&reserved=0>
(140.90.101.42), 30 hops max, 60 byte packets
1
vlan-510-cssc-gw.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fvlan-510-cssc-gw.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454782808&sdata=m%2B63O5PCMkCFvOeWCAOKehD4l4yVBXngAzdXRhuf4Kg%3D&reserved=0>
(144.92.130.1) 1.391 ms 1.154 ms 5.902 ms
2 128.104.4.129 (128.104.4.129) 6.917 ms 6.895 ms 2.004 ms
3
rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454782808&sdata=%2BVUzCjATI8YhwVdYzrj6MyCkuvztRsvDHrz6LM%2BoLgM%3D&reserved=0>
(146.151.168.4) 3.158 ms 3.293 ms 3.251 ms
4
rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454792800&sdata=aQKlKqvCIM4bpFbhzghAOSUjSmHVm8fv59yord7tsVQ%3D&reserved=0>
(146.151.166.122) 6.185 ms 2.278 ms 2.425 ms
5 144.92.254.229 (144.92.254.229) 6.909 ms 13.255 ms 6.863 ms
6
et-1-1-5.4079.rtsw.ashb.net.internet2.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtsw.ashb.net.internet2.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454792800&sdata=hAJ5CJxbmIt7M7jDwtwvFvXIUE8BAhGcCkP8QK%2B7hlM%3D&reserved=0>
(162.252.70.60) 23.328 ms 23.244 ms 28.845 ms
7
et-11-3-0-1275.clpk-core.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fet-11-3-0-1275.clpk-core.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454802794&sdata=FEywp46EAsiSmiq8qHBdzUUhvasZtnFhz%2BUvcZOmUrA%3D&reserved=0>
(206.196.177.2) 30.308 ms 24.575 ms 24.536 ms
8
nwave-clpk-re.demarc.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fnwave-clpk-re.demarc.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454802794&sdata=6wHUV4dcDg9Va2TNGzIiK4oVJKnWgFxWRuan0K2fbMg%3D&reserved=0>
(206.196.177.189) 29.594 ms 24.624 ms 24.618 ms
9
ae-2.666.rtr.clpk.nwave.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtr.clpk.nwave.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454812786&sdata=6WRWX%2BfbUdmvdB9ivtYp9ok2d830pYu%2BvOXecHfGGYw%3D&reserved=0>
(137.75.68.4) 24.581 ms 30.164 ms 24.627 ms
10 140.208.63.30 (140.208.63.30) 25.677 ms 25.767 ms 29.543 ms
11 140.90.76.65 (140.90.76.65) 105.812 ms 105.345 ms 108.857
2232 UTC Tuesday Apr 16
traceroute -p 388
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454812786&sdata=V7N1McVUOmsVnCPX%2F%2BQN6s9gNeLPNRgH9xK5G2r57Fo%3D&reserved=0>
traceroute to
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454822779&sdata=QGD4UC4JHHK%2F3IyGg6bKvBsa3XbHgfw06qM3H2Pn0Ok%3D&reserved=0>
(140.90.101.42), 30 hops max, 60 byte packets
1
vlan-510-cssc-gw.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fvlan-510-cssc-gw.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454822779&sdata=2rLQZx1Dnn3UuuZZFWw2%2BTyOlrJRuMRHQPYv8ZzPrJw%3D&reserved=0>
(144.92.130.1) 1.266 ms 1.070 ms 1.226 ms
2 128.104.4.129 (128.104.4.129) 1.915 ms 2.652 ms 2.775 ms
3
rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454832780&sdata=jGi0Sho9vYNXbl5xxLbDQyp4bMr4gZmbCH9jOGGb%2F4M%3D&reserved=0>
(146.151.168.4) 2.353 ms 2.129 ms 2.314 ms
4
rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454832780&sdata=wUPGJQtEmHuDk9gbvRQr0uZFYbLVz9bJ6l%2FB3PjKqsI%3D&reserved=0>
(146.151.166.122) 2.114 ms 2.111 ms 2.163 ms
5 144.92.254.229 (144.92.254.229) 6.891 ms 6.838 ms 6.840 ms
6
et-1-1-5.4079.rtsw.ashb.net.internet2.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtsw.ashb.net.internet2.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454842768&sdata=eHEg2BetPcqJHznAzfxGi5%2BuaKpljsQALQ0GeNfIAjs%3D&reserved=0>
(162.252.70.60) 23.336 ms 23.283 ms 23.364 ms
7
et-11-3-0-1275.clpk-core.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fet-11-3-0-1275.clpk-core.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454842768&sdata=rTX1dyklpG%2BEqoVk%2Bja4a886F7bzYLEQmUX3v0j8q7w%3D&reserved=0>
(206.196.177.2) 24.493 ms 24.136 ms 24.152 ms
8
nwave-clpk-re.demarc.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fnwave-clpk-re.demarc.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454852762&sdata=mCTnu8CMvtGDJZi%2FUgP2etdHFhwqlGohzrX6p05rEpc%3D&reserved=0>
(206.196.177.189) 24.161 ms 24.173 ms 24.176 ms
9
ae-2.666.rtr.clpk.nwave.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtr.clpk.nwave.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454852762&sdata=CTdK%2B49bEPWGRRM6JbYSowIaq5PhxCHwZ4CNhyn4v0s%3D&reserved=0>
(137.75.68.4) 24.165 ms 24.331 ms 24.201 ms
10 140.208.63.30 (140.208.63.30) 25.361 ms 25.427 ms 25.240 ms
11 140.90.76.65 (140.90.76.65) 113.194 ms 115.553 ms 115.543 ms
2234 UTC Tuesday Apr 16
traceroute -p 388
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454862755&sdata=L4bxETo4QjG%2BLCHy%2B8n7lusSsryLc9WIzv%2F12LiNNGc%3D&reserved=0>
traceroute to
conduit.ncep.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fconduit.ncep.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454862755&sdata=L4bxETo4QjG%2BLCHy%2B8n7lusSsryLc9WIzv%2F12LiNNGc%3D&reserved=0>
(140.90.101.42), 30 hops max, 60 byte packets
1
vlan-510-cssc-gw.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fvlan-510-cssc-gw.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454872749&sdata=qNw6ZUFNiJwWZXJ3d4WmrGWmcuHVqdPFrhOr440vUsw%3D&reserved=0>
(144.92.130.1) 0.901 ms 0.663 ms 0.826 ms
2 128.104.4.129 (128.104.4.129) 1.645 ms 1.948 ms 1.729 ms
3
rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454872749&sdata=ptOFBaW7UAMas8wIIhePt%2BdUx0cSMmHrbYv1MGhBkVQ%3D&reserved=0>
(146.151.168.4) 1.804 ms 1.788 ms 1.849 ms
4
rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454882745&sdata=NhjwBGepK3kPSGs4oC%2F37g7CUaaZOIcgJM%2BLHP6cVE8%3D&reserved=0>
(146.151.166.122) 2.011 ms 2.004 ms 1.982 ms
5 144.92.254.229 (144.92.254.229) 6.241 ms 6.240 ms 6.220 ms
6
et-1-1-5.4079.rtsw.ashb.net.internet2.edu<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtsw.ashb.net.internet2.edu&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454882745&sdata=o7ORk6FJx4j8plKwgLd1nbxaKa9nj%2F3nInfidlBw%2FY0%3D&reserved=0>
(162.252.70.60) 23.042 ms 23.072 ms 23.033 ms
7
et-11-3-0-1275.clpk-core.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fet-11-3-0-1275.clpk-core.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454892740&sdata=FO%2Fd8iKJVh590vWVhV%2BJhL0bJXhjjwtBXBiAsQGZ87Q%3D&reserved=0>
(206.196.177.2) 24.094 ms 24.398 ms 24.370 ms
8
nwave-clpk-re.demarc.maxgigapop.net<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fnwave-clpk-re.demarc.maxgigapop.net&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454892740&sdata=2IlpoVFYxdg7XZcLzYQ%2F9DiUz9gt4gRdztzITgqx5uQ%3D&reserved=0>
(206.196.177.189) 24.166 ms 24.166 ms 24.108 ms
9
ae-2.666.rtr.clpk.nwave.noaa.gov<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Frtr.clpk.nwave.noaa.gov&data=02%7C01%7Caap1%40psu.edu%7C62b58eeff98148d04e0108d7308a6c44%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C637031243454892740&sdata=Ws4Wpz5sGLQ2UGwsedmfimSBsbwqSvrZYfZZUf4PziM%3D&reserved=0>
(137.75.68.4) 24.056 ms 24.306 ms 24.215 ms
10 140.208.63.30 (140.208.63.30) 25.199 ms 25.284 ms 25.351 ms
11 140.90.76.65 (140.90.76.65) 118.314 ms 118.707 ms 118.768 ms
2236 UTC Tuesday Apr 16
traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
1 vlan-510-cssc-gw.net.wisc.edu (144.92.130.1) 0.918 ms 0.736 ms 0.864 ms
2 128.104.4.129 (128.104.4.129) 1.517 ms 1.630 ms 1.734 ms
3 rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4) 1.998 ms 3.437 ms 3.437 ms
4 rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122) 1.899 ms 1.896 ms 1.867 ms
5 144.92.254.229 (144.92.254.229) 6.384 ms 6.317 ms 6.314 ms
6 et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60) 22.980 ms 23.167 ms 23.078 ms
7 et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2) 24.181 ms 24.152 ms 24.121 ms
8 nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189) 48.556 ms 47.824 ms 47.799 ms
9 ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4) 24.166 ms 24.154 ms 24.214 ms
10 140.208.63.30 (140.208.63.30) 25.310 ms 25.268 ms 25.401 ms
11 140.90.76.65 (140.90.76.65) 118.299 ms 123.763 ms 122.207 ms
2242 UTC
traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
1 vlan-510-cssc-gw.net.wisc.edu (144.92.130.1) 1.337 ms 1.106 ms 1.285 ms
2 128.104.4.129 (128.104.4.129) 6.039 ms 5.778 ms 1.813 ms
3 rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4) 2.275 ms 2.464 ms 2.517 ms
4 rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122) 2.288 ms 6.978 ms 3.506 ms
5 144.92.254.229 (144.92.254.229) 10.369 ms 6.626 ms 10.281 ms
6 et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60) 23.513 ms 23.297 ms 23.295 ms
7 et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2) 27.938 ms 24.589 ms 28.783 ms
8 nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189) 28.796 ms 24.630 ms 28.793 ms
9 ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4) 24.576 ms 24.545 ms 24.587 ms
10 140.208.63.30 (140.208.63.30) 85.763 ms 85.768 ms 83.623 ms
11 140.90.76.65 (140.90.76.65) 131.912 ms 132.662 ms 132.340 ms
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Tuesday, April 16, 2019 3:04 PM
To: Gilbert Sebenste; Tyle, Kevin R
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
Derek VanPelt - NOAA Affiliate; Mike Zuranski; Dustin Sheffler - NOAA Federal;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
At UW-Madison, we had incomplete 12 UTC GFS data starting with the 177h
forecast. Lags exceeded 3600s.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Gilbert Sebenste <gilbert@xxxxxxxxxxxxxxxx>
Sent: Tuesday, April 16, 2019 2:44 PM
To: Tyle, Kevin R
Cc: Pete Pokrandt; Dustin Sheffler - NOAA Federal; Mike Zuranski;
Derek VanPelt - NOAA Affiliate; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx;
_NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Yes, here at AllisonHouse too... we can feed from a number of sites, and all of
them were dropping GFS and were delayed by an hour.
Gilbert
On Apr 16, 2019, at 2:39 PM, Tyle, Kevin R
<ktyle@xxxxxxxxxx> wrote:
For what it's worth, our 12Z GFS data ingest was quite bad today... many lost
products beyond F168 (we feed from UWisc-MSN primary and PSU secondary).
_____________________________________________
Kevin Tyle, M.S.; Manager of Departmental Computing
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 235, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Tuesday, April 16, 2019 12:00 PM
To: Dustin Sheffler - NOAA Federal; Mike Zuranski
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; Derek VanPelt - NOAA Affiliate;
_NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
All,
Just keeping this in the foreground.
CONDUIT lags continue to be very large compared to what they were before
whatever changed back in February. Prior to that, we rarely saw lags of more
than ~300s. Now they are routinely 1500-2000s at UW-Madison and Penn State, and
over 3000s at Unidata - they appear to be on the edge of losing data. This does
not bode well with all of the IDP applications failing back over to College
Park today.
Could we send you some traceroutes, and you send some back to us, to try to
isolate where in the network this is happening? It feels like congestion or a
bad route somewhere - the lags seem to be worse on weekdays than on weekends,
if that helps at all.
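For reference, the traceroutes we capture on our end are just run against the
LDM port (388) on the CONDUIT host, along the lines of:

    traceroute -p 388 conduit.ncep.noaa.gov

and we can grab a few of those at different times of day (quiet hours vs. the
12 UTC suite) if a comparison with what you see from your side would help.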
Here are the current CONDUIT lags to UW-Madison, Penn State and Unidata.
<iddstats_CONDUIT_idd_aos_wisc_edu_ending_20190416_1600UTC.gif>
<iddstats_CONDUIT_idd_meteo_psu_edu_ending_20190416_1600UTC.gif>
<iddstats_CONDUIT_conduit_unidata_ucar_edu_ending_20190416_1600UTC.gif>
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>
Sent: Tuesday, April 9, 2019 12:52 PM
To: Mike Zuranski
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
Derek VanPelt - NOAA Affiliate; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Hi Mike,
Thanks for the feedback on NOMADS. We recently found a slowness issue when
NOMADS runs out of our Boulder data center; our teams are working on it now
that NOMADS is live out of the College Park data center. When slowness is
reported by only a handful of users, it is sometimes hard to quantify whether
it is the result of something wrong in our data center, a bad network path
between a customer (possibly just from a particular region of the country) and
our data center, a local issue on the customer's end, or some other cause.
Conduit is only ever run from our College Park data center. Its slowness is
not tied to the Boulder NOMADS issue, but it does seem to be at least a little
bit tied to which of our data centers NOMADS is running out of. When NOMADS is
in Boulder along with the majority of our other NCEP applications, the strain
on the College Park data center is minimal and Conduit appears to run better as
a result. When NOMADS runs in College Park (as it has since late yesterday),
there is more strain on that data center and Conduit appears (based on the
user-provided graphs) to run a bit worse around peak model times. These are
just my observations; we are still investigating what may have changed to cause
the Conduit latencies to appear in the first place so that we can resolve this
potential constraint.
-Dustin
On Tue, Apr 9, 2019 at 4:28 PM Mike Zuranski
<zuranski@xxxxxxxxxxxxxxx> wrote:
Hi everyone,
I've avoided jumping into this conversation since I don't deal much with
Conduit these days, but Derek just mentioned something that I do have some
applicable feedback on...
> Two items happened last night. 1. NOMADS was moved back to College Park...
We get nearly all of our model data via NOMADS. When it switched to Boulder
last week, we saw a significant drop in download speeds, down to a couple
hundred KB/s or slower. Starting last night, we're back to speeds on the order
of MB/s or tens of MB/s. Switching back to College Park seems to confirm for me
that something about routing from Boulder was responsible. But again, this was
all on NOMADS; I'm not sure whether it's related to what's happening on Conduit.
When I noticed this last week I sent an email to sdm@xxxxxxxx including a
traceroute taken at the time; let me know if you'd like me to find that and
pass it along here or someplace else.
-Mike
======================
Mike Zuranski
Meteorology Support Analyst
College of DuPage - Nexlab
Weather.cod.edu
======================
On Tue, Apr 9, 2019 at 10:51 AM Person, Arthur A.
<aap1@xxxxxxx> wrote:
Derek,
Do we know what change might have been made around February 10th when the
CONDUIT problems first started happening? Prior to that time, the CONDUIT feed
had been very crisp for a long period of time.
Thanks... Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Tuesday, April 9, 2019 11:34 AM
To: Holly Uhlenhake - NOAA Federal
Cc: Carissa Klemmer - NOAA Federal; Person, Arthur A.; Pete Pokrandt;
_NCEP.List.pmb-dataflow; conduit@xxxxxxxxxxxxxxxx;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Hi all,
Two items happened last night.
1. NOMADS was moved back to College Park, which means there is a lot more
traffic going out, and that will have an effect on the Conduit latencies. We do
not have a full load on the College Park servers, as many of the other
applications are still running from Boulder, but NOMADS will certainly increase
the overall load.
2. As Holly said, there were further issues delaying and changing the timing
of the model output yesterday afternoon/evening. I will be watching from our
end and monitoring the Unidata 48-hour graph (thank you for the link)
throughout the day.
Please let us know if you have questions or more information to help us analyse
what you are seeing.
Thank you,
Derek
On Tue, Apr 9, 2019 at 6:50 AM Holly Uhlenhake - NOAA Federal
<holly.uhlenhake@xxxxxxxx> wrote:
Hi Pete,
We also had an issue on the supercomputer yesterday where several models going
to conduit would have been stacked on top of each other instead of coming out
in a more spread-out fashion. It's not inconceivable that conduit could have
backed up while working through the abnormally large glut of GRIB messages. Are
things any better this morning?
Thanks,
Holly
On Tue, Apr 9, 2019 at 12:37 AM Pete Pokrandt
<poker@xxxxxxxxxxxx> wrote:
Something changed starting with today's 18 UTC model cycle, and our lags shot
up to over 3600 seconds, where we started losing data. They are growing again
now with the 00 UTC cycle as well. PSU and Unidata CONDUIT stats show similar
abnormally large lags.
FYI.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Friday, April 5, 2019 2:10 PM
To: Carissa Klemmer - NOAA Federal
Cc: Pete Pokrandt; Derek VanPelt - NOAA Affiliate; Gilbert Sebenste;
conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: Large lags on CONDUIT feed - started a week or so ago
Carissa,
The Boulder connection is definitely performing very well for CONDUIT.
Although there have been a couple of little blips (~120 seconds) since
yesterday, overall the performance is superb. I don't think it's quite as clean
as it was prior to ~February 10th, when the D.C. connection went bad, but it's
still excellent. Here's our graph now with a single connection (no splits):
<pastedImage.png>
My next question is: Will CONDUIT stay pointing at Boulder until D.C. is
fixed, or might you be required to switch back to D.C. at some point before
that?
Thanks... Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx>
Sent: Thursday, April 4, 2019 6:22 PM
To: Person, Arthur A.
Cc: Pete Pokrandt; Derek VanPelt - NOAA Affiliate; Gilbert Sebenste;
conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: Large lags on CONDUIT feed - started a week or so ago
Catching up here.
Derek,
Do we have traceroutes from all users? Does anything in vCenter show any system
resource constraints?
On Thursday, April 4, 2019, Person, Arthur A.
<aap1@xxxxxxx> wrote:
Yeah, it definitely looks "blippier" starting around 07Z this morning, but
nothing like it was before. And all last night was clean. Here's our graph with
a 2-way split, a huge improvement over what it was before the switch to Boulder:
I agree with Pete that this morning's data probably isn't a good test since
there were other factors. Since this seems so much better, I'm going to try
switching to no split as an experiment and see how it holds up.
Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Thursday, April 4, 2019 1:51 PM
To: Derek VanPelt - NOAA Affiliate
Cc: Person, Arthur A.; Gilbert Sebenste; Anne Myckow - NOAA Affiliate;
conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] [conduit] Large lags on CONDUIT feed -
started a week or so ago
Ah, so perhaps not a good test.. I'll set it back to a 5-way split and see how
it looks tomorrow.
Thanks for the info,
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Thursday, April 4, 2019 12:38 PM
To: Pete Pokrandt
Cc: Person, Arthur A.; Gilbert Sebenste; Anne Myckow - NOAA Affiliate;
conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] [conduit] Large lags on CONDUIT feed -
started a week or so ago
Hi Pete -- we did have a separate issue hit the CONDUIT feed today. We should
be recovering now, but the backlog was sizeable. If these numbers are not back
to the baseline in the next hour or so, please let us know. We are also
watching our queues; they are decreasing, but not as quickly as we had hoped.
Thank you,
Derek
On Thu, Apr 4, 2019 at 1:26 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow
<ncep.list.pmb-dataflow@xxxxxxxx> wrote:
FYI - there is still a much larger lag for the 12 UTC run with a 5-way split
compared to a 10-way split. It's better since everything else failed over to
Boulder, but I'd venture to guess that's not the root of the problem.
Prior to whatever is going on to cause this, I don't recall ever seeing lags
this large with a 5-way split. It looked much more like the left-hand side of
this graph, with small increases in lag with each 6-hourly model run cycle, but
more like 100 seconds vs the ~900 that I got this morning.
FYI, I am going to change back to a 10-way split for now.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, April 3, 2019 4:57 PM
To: Person, Arthur A.; Gilbert Sebenste; Anne Myckow - NOAA Affiliate
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Sorry, was out this morning and just had a chance to look into this. I concur
with Art and Gilbert that things appear to have gotten better starting with the
failover of everything else to Boulder yesterday. I will also reconfigure to go
back to a 5-way split (as opposed to the 10-way split that I've been using
since this issue began) and keep an eye on tomorrow's 12 UTC model run cycle -
if the lags go up, they are usually at their worst during that cycle, shortly
before 18 UTC each day.
I'll report back tomorrow how it looks, or you can see at
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
Thanks,
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Person, Arthur A. <aap1@xxxxxxx>
Sent: Wednesday, April 3, 2019 4:04 PM
To: Gilbert Sebenste; Anne Myckow - NOAA Affiliate
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Anne,
I'll hop back in the loop here... for some reason these replies started going
into my junk file (bleh). Anyway, I agree with Gilbert's assessment. Things
turned really clean around 12Z yesterday, looking at the graphs. I usually look
at flood.atmos.uiuc.edu when there are problems, as their connection always
seems to be the cleanest. If there are even small blips or ups and downs in
their latencies, that usually means there's a network aberration somewhere that
amplifies into hundreds or thousands of seconds at our site and elsewhere.
Looking at their graph now, you can see the blippiness up until 12Z yesterday,
and then it's flat (except for the one spike around 16Z today, which I would
ignore):
<pastedImage.png>
Our direct-connected site, which is using a 10-way split right now, also shows
a return to calmness in the latencies:
Prior to the recent latency jump, I did not use split requests and the
reception had been stellar for quite some time. It's my suspicion that this is
a network congestion issue somewhere close to the source, since it seems to
affect all downstream sites. For that reason, I don't think solving this
problem should necessarily involve upgrading your server software, but rather
identifying what's jamming up the network near D.C., and testing this by
switching to Boulder was an excellent idea. I will now try switching our
system to a two-way split to see if this performance holds up with fewer pipes.
Thanks for your help and I'll let you know what I find out.
Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Gilbert Sebenste <gilbert@xxxxxxxxxxxxxxxx>
Sent: Wednesday, April 3, 2019 4:07 PM
To: Anne Myckow - NOAA Affiliate
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed -
started a week or so ago
Hello Anne,
I'll jump in here as well. Consider the CONDUIT delays at Unidata:
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
And now, Wisconsin:
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
And finally, the University of Washington:
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+freshair1.atmos.washington.edu
All three of these sites have direct feeds from you. Flipping over to Boulder
definitely caused a major improvement. There was still a brief spike in delay,
but it was much shorter and minimal compared to what it was.
Gilbert
On Wed, Apr 3, 2019 at 10:03 AM Anne Myckow - NOAA Affiliate
<anne.myckow@xxxxxxxx> wrote:
Hi Pete,
As of yesterday we failed almost all of our applications to our site in Boulder
(meaning away from CONDUIT). Have you noticed an improvement in your speeds
since yesterday afternoon? If so, this will give us a clue that maybe there's
something interfering on our side that isn't specifically CONDUIT, but another
app that might be causing congestion. (And if it's the same, then that's a clue
in the other direction.)
Thanks,
Anne
On Mon, Apr 1, 2019 at 3:24 PM Pete Pokrandt
<poker@xxxxxxxxxxxx> wrote:
The lag here at UW-Madison was up to 1200 seconds today, and that's with a
10-way split feed. Whatever is causing the issue has definitely not been
resolved, and historically it is worse during the work week than on the
weekends, if that helps at all.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Sent: Thursday, March 28, 2019 4:28 PM
To: Person, Arthur A.
Cc: Carissa Klemmer - NOAA Federal; Pete Pokrandt; _NCEP.List.pmb-dataflow;
conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a
week or so ago
Hello Art,
We will not be upgrading to version 6.13 on these systems as they are not
robust enough to support the local logging inherent in the new version.
I will check with my team on whether there are any further actions we can take
to troubleshoot this issue, but I fear we may be at the limit of our ability to
make this better.
I'll let you know tomorrow where we stand. Thanks.
Anne
On Mon, Mar 25, 2019 at 3:00 PM Person, Arthur A.
<aap1@xxxxxxx> wrote:
Carissa,
Can you report any status on this inquiry?
Thanks... Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx>
Sent: Tuesday, March 12, 2019 8:30 AM
To: Pete Pokrandt
Cc: Person, Arthur A.; conduit@xxxxxxxxxxxxxxxx;
support-conduit@xxxxxxxxxxxxxxxx;
_NCEP.List.pmb-dataflow
Subject: Re: Large lags on CONDUIT feed - started a week or so ago
Hi Everyone
I've added the Dataflow team email to the thread. I haven't heard that any
changes were made or that any issues were found, but the team can look today
and see if we have any indications of overall slowness anywhere.
Dataflow, please take a look at the new Citrix or VM troubleshooting tools to
see if there are any abnormal signatures that may explain this.
On Monday, March 11, 2019, Pete Pokrandt
<poker@xxxxxxxxxxxx> wrote:
Art,
I don't know if NCEP ever figured anything out, but I've been able to keep my
latencies reasonable (300-600s max, mostly during the 12 UTC model suite) by
splitting my CONDUIT request 10 ways, instead of the 5-way split (or single
request) that I had been using. Maybe give that a try and see if it helps at
all.
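In case it helps, the split is nothing fancy - just multiple REQUEST lines in
ldmd.conf, one per trailing digit of the product sequence number. A rough
sketch (adjust the upstream host to whatever you normally request from):

    # 10-way CONDUIT split: one connection per final digit of the sequence number
    REQUEST CONDUIT "[0]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[1]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[2]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[3]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[4]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[5]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[6]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[7]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[8]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[9]$" conduit.ncep.noaa.gov

Each REQUEST line gets its own connection, so the volume is spread over ten
streams; a 5-way split just pairs the digits (e.g. "[09]$", "[18]$", and so on).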
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Monday, March 11, 2019 3:45 PM
To: Holly Uhlenhake - NOAA Federal; Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Holly,
Was there any resolution to this on the NCEP end? I'm still seeing terrible
delays (1000-4000 seconds) receiving data from conduit.ncep.noaa.gov.
It would be helpful to know if things are resolved at NCEP's end so I know
whether to look further down the line.
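(One low-bandwidth check I can run from here - just a sketch, and the feed,
host and look-back window are adjustable - is to watch what the top-level
server is offering using LDM's notifyme:

    notifyme -vl- -f CONDUIT -h conduit.ncep.noaa.gov -o 3600

If the notifications stream in promptly, the server is reachable and serving
the feed, and the delays are more likely in moving the data volume itself.)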
Thanks... Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Holly Uhlenhake - NOAA Federal <holly.uhlenhake@xxxxxxxx>
Sent: Thursday, February 21, 2019 12:05 PM
To: Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Hi Pete,
We'll take a look and see if we can figure out what might be going on. We
haven't done anything to try and address this yet, but based on your analysis
I'm suspicious that it might be tied to a resource constraint on the VM or the
blade it resides on.
Thanks,
Holly Uhlenhake
Acting Dataflow Team Lead
On Thu, Feb 21, 2019 at 11:32 AM Pete Pokrandt
<poker@xxxxxxxxxxxx> wrote:
Just FYI, data is flowing, but the large lags continue.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 12:07 PM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Data is flowing again - it picked up somewhere in the GEFS. Maybe the CONDUIT
server was restarted, or the LDM on it? Lags are large (3000s+) but dropping
slowly.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 11:56 AM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Just a quick follow-up - we started falling far enough behind (3600+ sec) that
we are losing data. We got short files starting at 174h into the GFS run, and
only got (incomplete) data through 207h.
As of now, we have not received any data on CONDUIT since 11:27 AM CST
(1727 UTC) today (Wed Feb 20).
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 11:28 AM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: [conduit] Large lags on CONDUIT feed - started a week or so ago
Carissa,
We have been feeding CONDUIT using a 5-way split feed direct from
conduit.ncep.noaa.gov, and it had been really good for some time, with lags of
30-60 seconds or less. However, for the past week or so, we've been seeing some
very large lags during each 6-hour model suite - Unidata is also seeing these,
and they also feed direct from conduit.ncep.noaa.gov.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
Any idea what's going on, or how we can find out?
Thanks!
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
_______________________________________________
NOTE: All exchanges posted to Unidata maintained email lists are
recorded in the Unidata inquiry tracking system and made publicly
available through the web. Users who post to any of the lists we
maintain are reminded to remove any personal information that they
do not want to be made public.
conduit mailing list
conduit@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit:
http://www.unidata.ucar.edu/mailing_lists/
--
Carissa Klemmer
NCEP Central Operations
IDSB Branch Chief
301-683-3835
_______________________________________________
Ncep.list.pmb-dataflow mailing list
Ncep.list.pmb-dataflow@xxxxxxxxxxxxxxxxxxxx
https://www.lstsrv.ncep.noaa.gov/mailman/listinfo/ncep.list.pmb-dataflow
--
Anne Myckow
Lead Dataflow Analyst
NOAA/NCEP/NCO
301-683-3825
--
----
Gilbert Sebenste
Consulting Meteorologist
AllisonHouse, LLC
--
Derek Van Pelt
DataFlow Analyst
NOAA/NCEP/NCO
--
Misspelled straight from Derek's phone.
--
Dustin Sheffler
NCEP Central Operations - Dataflow
5830 University Research Court, Rm 1030
College Park, Maryland 20740
Office: (301) 683-3827
--
Tony Salemi - IT Specialist
NCEP Central Operations
Dataflow Analyst
Contracting Officer Technical Representative
5830 University Research Ct. Suite 1028
College Park, MD 20740
301-683-3908
--
Derek Van Pelt
WCCIS Data Management
NOAA Affiliate
301.683.3832
derek.vanpelt@xxxxxxxx