
20190804: Re: New Client Reply - [IDD #ISJ-658184]: NIMAGE feed - big lags from idd.unidata starting ~16 UTC Monday August 5?



Hi Pete,

Thanks for sending along the feed REQUEST(s) that you are now
using.

Three things:

- I was hoping to see the feed REQUESTs that were in place before
  you switched your REQUEST for NIMAGE from idd to iddb

  Why?  It was interesting to see very low latencies in the SATELLITE
  feed but not in the NIMAGE feed.  This is surprising because:

  - the SATELLITE feed has over twice the volume of the NIMAGE feed

  - the SATELLITE feed has about 1000 more products/hour than the
    NIMAGE feed

  Given these, it did not make sense to me, at least, that the
  latencies in the NIMAGE feed were going non-linear while those
  for the SATELLITE feed stayed low.  The fact that hera.aos.wisc.edu
  is only REQUESTing the SATELLITE feed might be the reason for its
  low latencies, but that still leaves the question of why the
  NIMAGE latencies were very high on occasion.  The one thing that
  might matter on the NIMAGE side is whether it was part of a
  compound feed REQUEST (see the sketch after this list).

- we do not relay the FSL3, FSL4 or FSL5 feeds, so REQUESTing them
  from us will not get you any of the data that NOAA makes available
  in them

- your primary ingest for "everything else", idd-agg.aos.wisc.edu, is
  not reporting real-time stats

  It is always good to be able to see what the latencies to a machine
  are, especially if it is then relaying to other machines; a sample
  EXEC entry for enabling stats reporting is sketched below.
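
  To illustrate the compound REQUEST point -- a minimal sketch, not
  your actual configuration -- a compound REQUEST bundles multiple
  feed types into a single upstream connection, so a backlog in one
  feed can delay products in the others:

    REQUEST NIMAGE|NOTHER ".*" idd.unidata.ucar.edu

  whereas separate REQUESTs give each feed type its own connection:

    REQUEST NIMAGE ".*" idd.unidata.ucar.edu
    REQUEST NOTHER ".*" idd.unidata.ucar.edu

  As for the real-time stats, reporting is normally turned on with an
  EXEC entry in ldmd.conf along the lines of:

    EXEC "rtstats -h rtstats.unidata.ucar.edu"

  which sends latency and volume statistics to our rtstats server so
  that they show up on the real-time stats web pages.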

Cheers,

Tom


On 8/5/19 3:41 PM, Pete Pokrandt wrote:
New Client Reply: NIMAGE feed - big lags from idd.unidata starting ~16 UTC 
Monday August 5?

Tom,

Sure. As of this afternoon, and per a suggestion by Gilbert Sebenste regarding 
a syntax for redundant feeds, I'm set up like this:

On hera.aos.wisc.edu (my primary ingest for SATELLITE)

REQUEST SATELLITE "(.*)" iddc.unidata.ucar.edu
REQUEST SATELLITE ".*" iddb.unidata.ucar.edu

On idd-agg.aos.wisc.edu (my primary ingest for everything else - my cluster 
nodes idd1.aos.wisc.edu and idd2.aos.wisc.edu both pull data from idd-agg and 
hera) I've currently got

REQUEST         UNIDATA "(.*)" idd.ssec.wisc.edu
REQUEST         UNIDATA ".*" iddc.unidata.ucar.edu

REQUEST         FNEXRAD|NEXRAD3|FSL5|FSL4|FSL3  "(.*)" idd.ssec.wisc.edu
REQUEST         FNEXRAD|NEXRAD3|FSL5|FSL4|FSL3  ".*" iddc.unidata.ucar.edu

REQUEST         NIMAGE  "(.*)"  iddc.unidata.ucar.edu
REQUEST         NIMAGE  ".*"    iddb.unidata.ucar.edu

REQUEST         NOTHER  "(.*)"  idd.ssec.wisc.edu
REQUEST         NOTHER  ".*"    iddb.unidata.ucar.edu

REQUEST         NGRID   "(.*)" iddc.unidata.ucar.edu
REQUEST         NGRID   ".*" idd.ssec.wisc.edu

REQUEST         NEXRAD2 "(.*)"  iddb.unidata.ucar.edu
REQUEST         NEXRAD2 ".*"    flood.atmos.uiuc.edu

REQUEST         FNMOC   "(.*)"  iddc.unidata.ucar.edu
REQUEST         FNMOC   ".*"    iddb.unidata.ucar.edu


Then for CONDUIT (also ingesting to idd-agg.aos.wisc.edu) I have

# 10-way Split Feed
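# (Each CONDUIT product ID ends in a sequence number; the "[N]$"
# patterns below match the last digit of that number, splitting the
# feed volume across ten independent upstream connections.)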
REQUEST         CONDUIT "[0]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[1]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[2]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[3]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[4]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[5]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[6]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[7]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[8]$"  conduit.ncep.noaa.gov
REQUEST         CONDUIT "[9]$"  conduit.ncep.noaa.gov

Let me know if I should change something, or if you see something amiss.

Pete



--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - address@hidden

________________________________
From: Unidata IDD Support <address@hidden>
Sent: Monday, August 5, 2019 4:24 PM
To: Pete Pokrandt <address@hidden>
Subject: [IDD #ISJ-658184]: NIMAGE feed - big lags from idd.unidata starting 
~16 UTC Monday August 5?

Hi Pete,

On 8/5/19 2:48 PM, Pete Pokrandt wrote:
New Client Reply: NIMAGE feed - big lags from idd.unidata starting ~16 UTC 
Monday August 5?

Thanks for the info, Tom. I'll move my stuff from idd to iddc for now.

Sounds good.

re:
Will there be advance warning when iddc becomes the primary idd?

Yes.

re:
Or will iddc remain in use as an alias for idd?

I believe that Mike wants to transition the name and stop using iddc.
What actually happens when the time comes is still a question in my
mind.

re:
I just don't want to get caught off guard when the switch from iddc
to primary idd happens.

Our discussions have started from the need to let users know well
in advance of the switch.  The biggest problems will be the time it
takes for DNS changes to propagate, and getting users to restart
their LDMs in a timely manner (we may simply shut off the LDMs on
the existing idd backends to force the issue).

re:
Thanks!

No worries.

Quick request:

Can you send us the set of REQUESTs you are using in your setup?
I could go out to the various nodes to figure out which machine is
servicing you and comb the LDM log files for the REQUESTs, but,
since you are "on the line", I figured it would be easier to simply
ask.

Cheers,

Tom

______________________________
From: Unidata IDD Support <address@hidden>
Sent: Monday, August 5, 2019 3:30 PM
To: Pete Pokrandt <address@hidden>
Subject: [IDD #ISJ-658184]: NIMAGE feed - big lags from idd.unidata starting 
~16 UTC Monday August 5?

Hi Pete,

On 8/5/19 1:46 PM, Pete Pokrandt wrote:
New Ticket: NIMAGE feed - big lags from idd.unidata starting ~16 UTC Monday 
August 5?

Looks like our lag pulling NIMAGE from idd.unidata.ucar.edu went way up starting
around 16 UTC today, to the point where we were not getting any data on NIMAGE.
I failed over to iddb and we're getting data from that.  It is taking a while
for the lag to subside.  Any idea why we might have stopped getting data from idd?
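
(For reference, the failover was just a matter of repointing the
REQUEST in ldmd.conf and restarting the LDM -- a minimal sketch,
assuming a single NIMAGE REQUEST line:

# was: REQUEST NIMAGE ".*" idd.unidata.ucar.edu
REQUEST NIMAGE ".*" iddb.unidata.ucar.edu

followed by "ldmadmin restart" to pick up the change.)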

We are experiencing a number of unexpected latency-related issues
on the more heavily loaded real-server backends of our top-level
IDD relay, idd.unidata.ucar.edu.  The exact cause of the increased
latencies, a number of which appear to be feed-type related, is not
known, but we have been speculating that it may have something to
do with the overall performance drop on the backends after we
installed the latest firmware, which should have fixes for the
Intel CPU bugs.  Some of the comments we have seen about the impact
of these firmware updates suggest performance hits of as much as
30%.  If those figures are accurate, we may need to add more
real-server backends to the idd.unidata.ucar.edu cluster so that
the number of downstream feeds serviced by any individual backend
machine is decreased.

This past weekend, Mike sent a note out to you and the other
ldm-users list subscribers advising a switch to the backup cluster,
iddb.unidata.ucar.edu, or to the cluster that will eventually become
idd.unidata.ucar.edu, iddc.unidata.ucar.edu.

I hope that this helps!

Cheers,

Tom
--
+----------------------------------------------------------------------+
* Tom Yoksas                                      UCAR Unidata Program *
* (303) 497-8642 (last resort)                           P.O. Box 3000 *
* address@hidden                                    Boulder, CO 80307 *
* Unidata WWW Service                     http://www.unidata.ucar.edu/ *
+----------------------------------------------------------------------+



Ticket Details
===================
Ticket ID: ISJ-658184
Department: Support IDD
Priority: Normal
Status: Open
Link:
https://andy.unidata.ucar.edu/esupport/staff/index.php?_m=tickets&_a=viewticket&ticketid=30611
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata
inquiry tracking system and then made publicly available through the web.  If
you do not want to have your interactions made available in this way, you must
let us know in each email you send to us.