With the CONDUIT request split into 10 threads for this morning's 12Z data, I
think there was some improvement, though it's not conclusive. Our delay peaked
at about 2200 seconds, which was much better than yesterday but about the same
as on the 10th. However, I think the delays today at other sites were worse
than the recent average, which is why I read this as an improvement.
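For reference, the 10-way split is just ten REQUEST lines in ldmd.conf keyed
on the sequence number at the end of each CONDUIT product ID -- a minimal
sketch (the upstream host is whatever you actually feed from):

    REQUEST CONDUIT "0$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "1$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "2$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "3$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "4$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "5$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "6$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "7$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "8$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "9$" conduit.ncep.noaa.gov

Each REQUEST line opens its own connection to the upstream LDM, so the feed is
spread across ten streams instead of one.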
A traceroute shows our connection going through Chicago
(rtsw.chic.net.internet2.edu) with an RTT of about 16 ms and then through
Ashburn (rtsw.ashb.net.internet2.edu) with an RTT of about 32 ms, and finally
into the Max Gigapop (clpk-core.maxgigapop.net) with an RTT of about 34 ms.
Does anyone follow a similar route who could share some timings?
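For anyone who wants to compare, this is just a plain traceroute to the
CONDUIT host; mtr, if you have it, gives per-hop loss and RTT in a single
report:

    traceroute conduit.ncep.noaa.gov
    mtr -rwc 10 conduit.ncep.noaa.gov    # report mode, wide output, 10 cycles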
Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Person, Arthur A. <aap1@xxxxxxx>
Sent: Tuesday, March 12, 2019 10:22 AM
To: Carissa Klemmer - NOAA Federal; Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow;
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Thanks Carissa.
Pete, I was running a single request until recently, but split it into 5 a
couple of weeks ago, with the results I reported below. I'll try 10 and see if
that helps further. It's been my experience that when the latencies go bad,
they are usually worse getting to us than to other sites, though I'm not sure
why. Perhaps the route, or proximity to the I2 backbone...? Things had been
great for months until approximately February 10th, give or take a few days.
Since then, the latencies have been terrible.
Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx>
Sent: Tuesday, March 12, 2019 8:30 AM
To: Pete Pokrandt
Cc: Person, Arthur A.; conduit@xxxxxxxxxxxxxxxx;
support-conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow
Subject: Re: Large lags on CONDUIT feed - started a week or so ago
Hi Everyone
I’ve added the Dataflow team email to the thread. I haven’t heard that any
changes were made or that any issues were found, but the team can look today
and see whether we have any signs of overall slowness anywhere.
Dataflow, please take a look at the new Citrix or VM troubleshooting tools to
see if there are any abnormal signatures that might explain this.
On Monday, March 11, 2019, Pete Pokrandt
<poker@xxxxxxxxxxxx> wrote:
Art,
I don't know if NCEP ever figured anything out, but I've been able to keep my
latencies reasonable (300-600 s max, mostly during the 12 UTC model suite) by
splitting my CONDUIT request 10 ways instead of the 5-way split I had been
doing, or a single request. Maybe give that a try and see if it helps at all.
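You can also sanity-check whether products are at least leaving the top of
the feed with notifyme (a quick sketch, assuming a stock LDM install); it asks
the upstream what it has without actually requesting the data:

    notifyme -v -l- -h conduit.ncep.noaa.gov -f CONDUIT -o 3600

The -o 3600 looks back an hour, so you can see whether the upstream has recent
products even while your own feed is lagging.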
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Monday, March 11, 2019 3:45 PM
To: Holly Uhlenhake - NOAA Federal; Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Holly,
Was there any resolution to this on the NCEP end? I'm still seeing terrible
delays (1000-4000 seconds) receiving data from
conduit.ncep.noaa.gov.
It would be helpful to know if things are resolved at NCEP's end so I know
whether to look further down the line.
Thanks... Art
Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Holly Uhlenhake - NOAA Federal <holly.uhlenhake@xxxxxxxx>
Sent: Thursday, February 21, 2019 12:05 PM
To: Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Hi Pete,
We'll take a look and see if we can figure out what might be going on. We
haven't done anything to try and address this yet, but based on your analysis
I'm suspicious that it might be tied to a resource constraint on the VM or the
blade it resides on.
Thanks,
Holly Uhlenhake
Acting Dataflow Team Lead
On Thu, Feb 21, 2019 at 11:32 AM Pete Pokrandt
<poker@xxxxxxxxxxxx> wrote:
Just FYI, data is flowing, but the large lags continue.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 12:07 PM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Data is flowing again - it picked up somewhere in the GEFS. Maybe the CONDUIT
server was restarted, or the LDM on it? Lags are large (3000+ s) but dropping
slowly.
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 11:56 AM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
Just a quick follow-up - we have fallen far enough behind (3600+ sec) that we
are losing data. We got short files starting at 174h into the GFS run, and
only got (incomplete) data through 207h.
We have not received any data on CONDUIT since 11:27 AM CST (1727 UTC) today
(Wed Feb 20).
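That 3600-second threshold is presumably the LDM maximum-latency cutoff:
products older than the configured max latency are never requested, so once
the lag passes an hour the data is simply gone. A quick way to check the
setting on the receiving host, assuming the default registry layout:

    regutil /server/max-latency    # defaults to 3600 seconds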
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
________________________________
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 11:28 AM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: [conduit] Large lags on CONDUIT feed - started a week or so ago
Carissa,
We have been feeding CONDUIT using a 5-way split feed directly from
conduit.ncep.noaa.gov, and it had been really good for some time, with lags of
30-60 seconds or less. However, for the past week or so, we've been seeing
some very large lags during each 6-hour model suite - Unidata is also seeing
these, and they are also feeding directly from conduit.ncep.noaa.gov.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
Any idea what's going on, or how we can find out?
Thanks!
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
_______________________________________________
NOTE: All exchanges posted to Unidata maintained email lists are
recorded in the Unidata inquiry tracking system and made publicly
available through the web. Users who post to any of the lists we
maintain are reminded to remove any personal information that they
do not want to be made public.
conduit mailing list
conduit@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit:
http://www.unidata.ucar.edu/mailing_lists/
--
Carissa Klemmer
NCEP Central Operations
IDSB Branch Chief
301-683-3835