If you have 'mtr' on the NCEP network, run it against the LDM downstream feed hosts to
see if there are any spikes. A basic ping / ldmping test can rule network latency in or
out.
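For example, something along these lines from the NCEP side (the hostname below is just a
placeholder for one of the downstream LDM hosts):

    [ldm@upstream ~]$ mtr --report --report-cycles 60 downstream.example.edu
    [ldm@upstream ~]$ ldmping downstream.example.edu

The mtr report shows per-hop loss and jitter over a minute or so, and ldmping confirms the
remote LDM answers and gives a rough round-trip time.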
Also, if you (or they) can set up iperf (super simple) and have the other site
run a bandwidth test, you can measure the throughput to rule that in or out as an
issue.
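A rough sketch, with made-up hostnames and assuming iperf3 is installed on both ends:

    # at the receiving (downstream) site:
    [ldm@downstream ~]$ iperf3 -s

    # from the sending side, a 30-second test with 4 parallel streams:
    [ldm@upstream ~]$ iperf3 -c downstream.example.edu -t 30 -P 4

If a single stream tops out well below the parallel-stream total, that points more at
per-connection limits (latency/window) than at raw capacity.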
If the network checks come back clean, I would look closer at the LDM config, especially
the queues. Unidata can speak more to that, and also to splitting the downstream
feeds using regex patterns.
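As a sketch of the kind of split Unidata usually suggests (exact patterns would need tuning;
shown against the conduit.ncep.noaa.gov hostname used elsewhere in this thread), the
downstream ldmd.conf can issue several REQUEST lines keyed on the trailing digit of the
CONDUIT product sequence number, so each request runs as its own connection:

    REQUEST CONDUIT "[09]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[18]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[27]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[36]$" conduit.ncep.noaa.gov
    REQUEST CONDUIT "[45]$" conduit.ncep.noaa.gov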
- Mike
Sent from Outlook
On Fri, Nov 13, 2015 at 11:53 AM -0800, "Michael Shedlock"
<michael.shedlock@xxxxxxxx> wrote:
All,
NCEP is indeed on internet2, which I presume would apply here.
A couple of noteworthy things... I see some latency, but not for
everyone, and it doesn't seem to matter which conduit machine a
client is connected to. For example, with today's and yesterday's
gfs.t12z.pgrb2.0p25.f096 (hour 96) file, here are the latencies per
client that I see:
11/12:
Wisconsin: A few seconds
Unidata/UCAR: A few seconds
UIUC: 13 minutes
PSU: 27 minutes

11/13:
Wisconsin: A few seconds
Unidata/UCAR: A few seconds
UIUC: 2.33 minutes
PSU: 2.75 minutes
Another correlation is that UIUC and PSU (the ones with latency) are
only using one thread to connect to our conduit, whereas Wisc. and
Unidata use multiple threads.
At the moment this sort of has the appearance of a bottleneck
outside of NCEP. It might also be useful to see traceroutes from
UIUC and PSU to NCEP's CONDUIT. I know I saw some traceroutes
below. Can you try that and share with us?
Mike Shedlock
NCEP Central Operations
Dataflow Team
301.683.3834
On 11/13/2015 11:42 AM, Mike Dross wrote:
My $0.02 from having worked with LDM since the mid-90's.
I assume NCEP is now on Internet2?
If so, bandwidth shouldn't be an issue. Regardless, I would check
the traceroutes to ensure a good path, high bandwidth, and low
latency. Basic network topology check. I am sure you have done
this.
An iperf test is a simple way to measure the maximum throughput and
see whether bandwidth is an issue. If that's not it, high latency or
the way LDM is set up on the upstream side is likely the culprit.
Mike
Sent from my iPad
On Nov 13, 2015, at 10:05 AM, Arthur A Person <aap1@xxxxxxx> wrote:
Carissa,
Yes, still issues. There was a period several weeks ago when
throughput was clean, but recently we've seen delays to varying
degrees.
Based on the Unidata latency chart from our reported statistics
(http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+iddrs2a.meteo.psu.edu),
we've seen delays during 0.25 degree gfs transmission that range from
500 seconds to 3500 seconds over the past couple of days. Also,
comparison with charts from other schools seems to show better
reception when feeding from "conduit1" rather than "conduit2". Does
that mean anything to you, or is it purely coincidental?
Thanks for any insights you can provide.
Art
From: "Carissa
Klemmer - NOAA Federal" <carissa.l.klemmer@xxxxxxxx>
To: "Arthur A Person" <aap1@xxxxxxx>,
"_NCEP.List.pmb-dataflow" <ncep.list.pmb-dataflow@xxxxxxxx>
Cc: "support-conduit@xxxxxxxxxxxxxxxx"
<conduit@xxxxxxxxxxxxxxxx>,
"Pete Pokrandt" <poker@xxxxxxxxxxxx>,
"Michael Schmidt" <mschmidt@xxxxxxxx>,
"Bentley, Alicia M" <ambentley@xxxxxxxxxx>,
"Daes Support" <daessupport@xxxxxxxxxx>
Sent: Friday, November 13, 2015 9:26:28 AM
Subject: Re: [conduit] How's your GFS?
Art,
I am going to add our team to this thread. Are you still seeing
issues? If so, we will take a look and see if we can tell whether
anything on our side is happening around FH 96.
Carissa Klemmer
NCEP Central Operations
Dataflow Team Lead
301-683-3835
On Thu, Nov 5, 2015 at 4:23 PM, Arthur A Person <aap1@xxxxxxx> wrote:
Hi all...
Conduit latencies have crept upward again for the past few weeks...
not unbearable, but still significant. At first it seemed to affect
only us, but it now looks like it's affecting UIUC as well, though
not so much Wisconsin. Inspecting our logs, we've noticed that
there's no delay out to about 90 hours of gfs transmission, but
starting at 96 hours the delays ramp up steadily. I'm not sure how
to explain that unless something else starts transmitting during
that time and competes for bandwidth. Also, I notice that sites
receiving data from "conduit1" seem to be faring better than
"conduit2". Is there any difference between these two originating
systems, or is that just coincidental? Anyone have anything new to
report on this issue?
Thanks... Art
From: "Pete Pokrandt"
<poker@xxxxxxxxxxxx>
To: "Carissa Klemmer - NOAA
Federal" <carissa.l.klemmer@xxxxxxxx>,
mschmidt@xxxxxxxx
Cc: "Bentley, Alicia M"
<ambentley@xxxxxxxxxx>,
"Daes Support" <daessupport@xxxxxxxxxx>,
"support-conduit@xxxxxxxxxxxxxxxx"
<conduit@xxxxxxxxxxxxxxxx>
Sent: Thursday, September
24, 2015 1:29:59 PM
Subject: Re: [conduit] How's
your GFS?
Here are traceroutes from idd.aos.wisc.edu to conduit.ncep.noaa.gov and
ncepldm4.woc.noaa.gov, taken at 17:17 UTC, right in the middle of the
18 UTC GFS lag spike today.
[ldm@idd ~/etc]$ traceroute conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  r-cssc-b280c-1-core-vlan-510-primary.net.wisc.edu (144.92.130.3)  0.833 ms  0.819 ms  0.855 ms
 2  internet2-ord-600w-100G.net.wisc.edu (144.92.254.229)  18.077 ms  18.095 ms  18.067 ms
 3  et-7-0-0.115.rtr.wash.net.internet2.edu (198.71.45.57)  35.125 ms  35.278 ms  35.261 ms
 4  198.71.45.228 (198.71.45.228)  35.378 ms  35.368 ms  35.335 ms
 5  ae0.clpk-core.maxgigapop.net (206.196.178.81)  36.401 ms  36.408 ms  36.284 ms
 6  noaa-rtr.maxgigapop.net (206.196.177.118)  36.523 ms  36.640 ms  36.411 ms
 7  140.90.111.36 (140.90.111.36)  68.769 ms  52.236 ms  52.210 ms
 8  140.90.76.69 (140.90.76.69)  36.602 ms  36.503 ms  36.827 ms
 9  * * *
10  * * *
...
[ldm@idd ~/etc]$ traceroute ncepldm4.woc.noaa.gov
traceroute to ncepldm4.woc.noaa.gov (140.172.17.205), 30 hops max, 60 byte packets
 1  r-cssc-b280c-1-core-vlan-510-primary.net.wisc.edu (144.92.130.3)  0.838 ms  0.847 ms  0.822 ms
 2  internet2-ord-600w-100G.net.wisc.edu (144.92.254.229)  18.513 ms  18.506 ms  18.484 ms
 3  ae0.3454.core-l3.frgp.net (192.43.217.223)  40.245 ms  40.204 ms  40.123 ms
 4  noaa-i2.frgp.net (128.117.243.11)  43.617 ms  43.544 ms  43.699 ms
 5  2001-mlx8-eth-1-2.boulder.noaa.gov (140.172.2.18)  40.960 ms  40.951 ms  41.058 ms
 6  mdf-rtr-6.boulder.noaa.gov (140.172.6.251)  46.516 ms  40.962 ms  40.876 ms
 7  * * *
 8  * * *
...
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx>
Sent: Thursday, September 24, 2015 10:36 AM
To: mschmidt@xxxxxxxx
Cc: Bentley, Alicia M; Daes Support; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] How's your GFS?
Mike,
Can you provide the server you are coming from? I know your range, but I
need to give the helpdesk the current primary so they can trace back.
Carissa Klemmer
NCEP Central Operations
Dataflow Team Lead
301-683-3835
On Thu, Sep 24, 2015 at 9:57 AM, Mike Schmidt <mschmidt@xxxxxxxx> wrote:
Hi Carissa,
We've seen the same jump in latencies:
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+daffy.unidata.ucar.edu
Here's our traceroute:
# traceroute conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  flra-n156.unidata.ucar.edu (128.117.156.253)  0.352 ms  0.344 ms  0.325 ms
 2  tcom-gs-1-n243-80.ucar.edu (128.117.243.85)  0.558 ms  0.584 ms  0.662 ms
 3  xe-0-1-2.873.core-l3.frgp.net (128.117.243.9)  1.138 ms  1.126 ms  1.107 ms
 4  v3454.rtr-chic.frgp.net (192.43.217.222)  23.227 ms  23.296 ms  23.278 ms
 5  et-7-0-0.115.rtr.wash.net.internet2.edu (198.71.45.57)  40.421 ms  40.408 ms  40.340 ms
 6  198.71.45.228 (198.71.45.228)  40.488 ms  40.649 ms  40.624 ms
 7  ae0.clpk-core.maxgigapop.net (206.196.178.81)  41.545 ms  41.602 ms  41.170 ms
 8  noaa-rtr.maxgigapop.net (206.196.177.118)  41.796 ms  41.507 ms  41.592 ms
 9  140.90.111.36 (140.90.111.36)  41.419 ms  41.496 ms  41.623 ms
10  140.90.76.69 (140.90.76.69)  41.900 ms  41.728 ms  41.956 ms
mike
On Thu, Sep 24, 2015 at 7:49 AM, Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx> wrote:
Hi all,
I have opened a ticket with our helpdesk and included the PSU
traceroute. But can I get a better handle on all the paths that are
seeing latencies to conduit.ncep.noaa.gov? Are both PSU and WISC
seeing spikes? Can I get a WISC traceroute also, please?
Thanks,
Carissa Klemmer
NCEP Central Operations
Dataflow Team Lead
301-683-3835
On Thu, Sep 24, 2015 at 8:19 AM, Arthur A Person <aap1@xxxxxxx> wrote:
Pete,
I was thinking that too! If only I hadn't sent that email... :)
Anyway, the delays aren't as bad as they were (at least here), but
they are still indicative of a lurking problem. It almost seems as
though some packet shaping is going on, as Tom suggested previously.
Maybe paths get overloaded and something kicks in and meters out
usage??? Just speculating. I've asked our network folks here to see
if they can investigate our path to NCEP, but that may take a while.
Our traceroute from this morning at 1113Z is:
[ldm@iddrs1a ~]$ traceroute conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  172.29.0.66 (172.29.0.66)  0.882 ms 192.5.158.1 (192.5.158.1)  0.278 ms  0.264 ms
 2  Blue1-ethernet3-1.gw.psu.edu (172.30.5.178)  0.220 ms White1-ethernet3-1.gw.psu.edu (172.30.5.177)  0.530 ms  0.526 ms
 3  Windstream1-ethernet2-1.gw.psu.edu (172.30.5.106)  0.385 ms Telecom5-ethernet2-2.gw.psu.edu (172.30.5.102)  0.370 ms Windstream1-ethernet3-2.gw.psu.edu (172.30.5.114)  0.391 ms
 4  Telecom5-ethernet2-1.gw.psu.edu (172.30.8.10)  0.391 ms  0.408 ms et-8-0-0.2364.rtr.chic.net.internet2.edu (64.57.30.2)  15.149 ms
 5  et-7-0-0.115.rtr.wash.net.internet2.edu (198.71.45.57)  32.276 ms et-8-0-0.2364.rtr.chic.net.internet2.edu (64.57.30.2)  15.301 ms et-7-0-0.115.rtr.wash.net.internet2.edu (198.71.45.57)  32.594 ms
 6  198.71.45.228 (198.71.45.228)  32.423 ms et-7-0-0.115.rtr.wash.net.internet2.edu (198.71.45.57)  32.431 ms 198.71.45.228 (198.71.45.228)  32.843 ms
 7  198.71.45.228 (198.71.45.228)  32.853 ms ae0.clpk-core.maxgigapop.net (206.196.178.81)  33.407 ms  33.401 ms
 8  ae0.clpk-core.maxgigapop.net (206.196.178.81)  33.858 ms noaa-rtr.maxgigapop.net (206.196.177.118)  33.483 ms ae0.clpk-core.maxgigapop.net (206.196.178.81)  33.515 ms
 9  140.90.111.36 (140.90.111.36)  33.574 ms  33.545 ms noaa-rtr.maxgigapop.net (206.196.177.118)  33.907 ms
10  140.90.76.69 (140.90.76.69)  34.220 ms  34.012 ms  33.901 ms
The above was taken while we were running about 1000 seconds behind.
A recent change here (9/16/2015) was to direct our first hop to
Chicago instead of Pittsburgh (3rox), which is now a 100 Gbit link.
Tests to UCAR at that time were showing 1.38 Gbps throughput with
reduced latencies. Since our delays now are not as bad as they were
previously, perhaps this has helped. However, there may still be a
choke point further down the line at maxgigapop or internal to NCEP
itself. perfSONAR monitoring to NCEP would be useful... does anyone
have any tests to NCEP currently running? Can we identify any common
path segments in the traceroute above?
Art
From: "Pete
Pokrandt"
<poker@xxxxxxxxxxxx>
To: "Arthur
A Person"
<aap1@xxxxxxx>,
"support-conduit@xxxxxxxxxxxxxxxx"
<conduit@xxxxxxxxxxxxxxxx>
Cc: "Bentley,
Alicia M"
<ambentley@xxxxxxxxxx>,
"Daes Support"
<daessupport@xxxxxxxxxx>,
"Carissa
Klemmer - NOAA
Federal"
<carissa.l.klemmer@xxxxxxxx>,
"Tyle, Kevin
R" <ktyle@xxxxxxxxxx>
Sent: Thursday,
September 24,
2015 1:25:31
AM
Subject: Re:
[conduit]
How's your
GFS?
Art,
Looks like you spoke too soon. Big lags (~1000 secs) started up again
with today's 12 UTC cycle. Very mysterious... They are showing up on
our feed and, consequently, downstream from us at Albany.
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+install.atmos.albany.edu
<sigh..>
Pete
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
From: Arthur A Person <aap1@xxxxxxx>
Sent: Monday, September 21, 2015 10:14 AM
To: support-conduit@xxxxxxxxxxxxxxxx
Cc: Bentley, Alicia M; Daes Support; Carissa Klemmer - NOAA Federal; Pete Pokrandt; Tyle, Kevin R
Subject: Re: [conduit] How's your GFS?
Folks,
Looks like something changed late on Friday in the network paths
affecting Penn State and the other universities feeding CONDUIT from
NCEP... delays have dropped to a crisp 30 seconds or less. Does
anyone know if a problem was found and fixed? I know some issues were
addressed at Penn State, with others still being worked on. Back in
the first week of September the feeds were good and then degraded...
I just want to make sure that doesn't happen again before I re-enable
ingest of the gfs 0.25 degree data.
Art
From: "Arthur
A Person"
<aap1@xxxxxxx>
To:
"support-conduit@xxxxxxxxxxxxxxxx"
<conduit@xxxxxxxxxxxxxxxx>
Cc: "Bentley,
Alicia M"
<ambentley@xxxxxxxxxx>,
"Daes Support"
<daessupport@xxxxxxxxxx>,
"Carissa
Klemmer - NOAA
Federal"
<carissa.l.klemmer@xxxxxxxx>
Sent: Wednesday,
September 9,
2015 4:27:26
PM
Subject: Re:
[conduit]
How's your
GFS?
Just a heads up... I've reconfigured our IDD relay to distribute
conduit without the gfs 0.25 degree data until we get our latencies
under control. We've got some issues internal to Penn State creating
problems on top of any external issues, and our conduit feed is
useless the way it is at the moment. By reverting to conduit without
gfs 0.25, hopefully we'll maintain a useful stream. As soon as the
latencies are addressed, I will reintroduce the gfs 0.25.
Art
From: "Arthur
A Person"
<aap1@xxxxxxx>
To: "Carissa
Klemmer - NOAA
Federal"
<carissa.l.klemmer@xxxxxxxx>
Cc: "Bentley,
Alicia M"
<ambentley@xxxxxxxxxx>,
"Daes Support"
<daessupport@xxxxxxxxxx>,
"support-conduit@xxxxxxxxxxxxxxxx"
<conduit@xxxxxxxxxxxxxxxx>
Sent: Wednesday,
September 9,
2015 8:00:19
AM
Subject: Re:
[conduit]
How's your
GFS?
All,
From: "Carissa
Klemmer - NOAA
Federal"
<carissa.l.klemmer@xxxxxxxx>
To: "Arthur
A Person"
<aap1@xxxxxxx>
Cc: "Tyle,
Kevin R"
<ktyle@xxxxxxxxxx>,
"Bentley,
Alicia M"
<ambentley@xxxxxxxxxx>,
"Daes Support"
<daessupport@xxxxxxxxxx>,
"support-conduit@xxxxxxxxxxxxxxxx"
<conduit@xxxxxxxxxxxxxxxx>
Sent: Tuesday,
September 8,
2015 10:19:06
PM
Subject: Re:
[conduit]
How's your
GFS?
All,
NCEP is not making any active changes to our networks that should
affect your latencies, especially not over a weekend. I am not aware
of any changes that occurred over the holiday that would have
impacted these networks. This is likely downstream of NCEP control,
which is why you see the latencies come and go.
Okay... I guess my interpretation was wrong, then. My apologies.
There does seem to be a problem pretty close to NCEP, however, since
the latencies seem to come and go at all top-tier sites... although
not all sites are the same (ours seems to be the highest). Maybe
we're pushing the long-haul connectivity to the limit and multiple
choke points are showing up? Time to get our networking folks more
involved...
Art
Carissa Klemmer
NCEP Central Operations
Production Management Branch
Dataflow Team
301-683-3835
On Tue, Sep 8, 2015 at 1:21 PM, Arthur A Person <aap1@xxxxxxx> wrote:
We appear to have had gfs reception problems with the 0Z and 6Z runs
last night. After implementation of the 0.25 degree gfs, CONDUIT
latencies were very large across all sites during 0.25 degree data
transmission, but a week or so ago they dropped to negligible levels.
Over the weekend they jumped back up again. I interpret this to mean
NCEP is tinkering with network paths, trying to find an effective way
to get these huge bursts of data out to the downstream sites. The gfs
data loss last night may have been from the large latencies or from
other unrelated delivery problems... dunno...
Art
From: "Tyle,
Kevin R"
<ktyle@xxxxxxxxxx>
To: "Pete
Pokrandt"
<poker@xxxxxxxxxxxx>
Cc: "Bentley,
Alicia M"
<ambentley@xxxxxxxxxx>,
"Daes Support"
<daessupport@xxxxxxxxxx>,
conduit@xxxxxxxxxxxxxxxx
Sent: Tuesday,
September 8,
2015 1:00:10
PM
Subject: Re:
[conduit]
How's your
GFS?
Hi Pete, et al.:
We here at UAlbany continue to get spotty reception of the GFS since
00Z today … anyone else having issues? We feed from Madison and
State College.
Earlier thread below:
-------------------------
Yeah, I’m not surprised that the addition of the ¼ deg GFS is causing
the need for a bigger queue (and likely a burlier machine). That’s
the main reason I have resisted requesting it.
I’ll fix the issue that makes ldmstats show “install.atmos…” instead
of “cascade.atmos…”
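(One possible way, purely a sketch and assuming the name rtstats reports comes from the LDM
registry on that box, would be something like:

    [ldm@cascade ~]$ regutil -s cascade.atmos.albany.edu /hostname
    [ldm@cascade ~]$ ldmadmin restart

so the stats get reported under the cascade name.)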
Something else must be at play, since the ¼ GFS has been flowing for
several weeks now without incident; it's likely tied to the increased
latency you started seeing.
Looks like we only got the GFS through 60 hours today with the 12Z
run, so something definitely appears to be amiss … I’ll cc: the
conduit list to see if anyone else is noticing problems.
_____________________________________________
Kevin Tyle, Systems Administrator
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 235, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________
From: Pete Pokrandt [mailto:poker@xxxxxxxxxxxx]
Sent: Tuesday, September 08, 2015 12:17 PM
To: Tyle, Kevin R <ktyle@xxxxxxxxxx>
Cc: Daes Support <daessupport@xxxxxxxxxx>
Subject: Re: How's your GFS?
My GFS appears to be complete, but I do see that something's going on
with our feed - the latencies jumped way up somewhere over the
weekend:
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
You're seeing the same, and even higher latencies on your feed from
Penn State (at least to the machine 'install.atmos.albany.edu' - I
don't see any stats reported from cascade, which is what it looks
like you are feeding from me on):
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+install.atmos.albany.edu
I think I need to buy more memory and keep a larger queue on
idd.aos.wisc.edu with the 0.25 deg GFS coming in. There are times
when my queue only holds about 20 minutes of data, which is likely
contributing to your incomplete GFS files...
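(For anyone wanting to try the same, a sketch of the usual steps on an LDM host; the 8000M
figure is only a placeholder and should be sized to the RAM actually available:

    [ldm@idd ~]$ ldmadmin stop
    [ldm@idd ~]$ regutil -s 8000M /queue/size
    [ldm@idd ~]$ ldmadmin delqueue
    [ldm@idd ~]$ ldmadmin mkqueue
    [ldm@idd ~]$ ldmadmin start

A bigger queue keeps more product history available for slow downstreams to catch up on.)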
Here's what my 0.5 deg (the gblav2.* files) and 0.25 deg (gblav0p25,
out to 87 h) files look like for the 00 and 06 UTC runs today.

0.5 deg:
-rw-r--r--. 1
ldm ldm
60953687 Sep
7 22:25
/data/grib2/gblav2.15090800_F000
-rw-r--r--. 1
ldm ldm
66996066 Sep
7 22:28
/data/grib2/gblav2.15090800_F003
-rw-r--r--. 1
ldm ldm
67902041 Sep
7 22:30
/data/grib2/gblav2.15090800_F006
-rw-r--r--. 1
ldm ldm
67961293 Sep
7 22:32
/data/grib2/gblav2.15090800_F009
-rw-r--r--. 1
ldm ldm
68081826 Sep
7 22:35
/data/grib2/gblav2.15090800_F012
-rw-r--r--. 1
ldm ldm
68710398 Sep
7 22:35
/data/grib2/gblav2.15090800_F015
-rw-r--r--. 1
ldm ldm
69664268 Sep
7 22:36
/data/grib2/gblav2.15090800_F018
-rw-r--r--. 1
ldm ldm
69177180 Sep
7 22:38
/data/grib2/gblav2.15090800_F021
-rw-r--r--. 1
ldm ldm
69816235 Sep
7 22:38
/data/grib2/gblav2.15090800_F024
-rw-r--r--. 1
ldm ldm
69010253 Sep
7 22:39
/data/grib2/gblav2.15090800_F027
-rw-r--r--. 1
ldm ldm
69786985 Sep
7 22:40
/data/grib2/gblav2.15090800_F030
-rw-r--r--. 1
ldm ldm
68876266 Sep
7 22:41
/data/grib2/gblav2.15090800_F033
-rw-r--r--. 1
ldm ldm
69376601 Sep
7 22:42
/data/grib2/gblav2.15090800_F036
-rw-r--r--. 1
ldm ldm
69029846 Sep
7 22:43
/data/grib2/gblav2.15090800_F039
-rw-r--r--. 1
ldm ldm
69142392 Sep
7 22:44
/data/grib2/gblav2.15090800_F042
-rw-r--r--. 1
ldm ldm
68990399 Sep
7 22:45
/data/grib2/gblav2.15090800_F045
-rw-r--r--. 1
ldm ldm
69343366 Sep
7 22:46
/data/grib2/gblav2.15090800_F048
-rw-r--r--. 1
ldm ldm
69150894 Sep
7 22:47
/data/grib2/gblav2.15090800_F051
-rw-r--r--. 1
ldm ldm
69504675 Sep
7 22:47
/data/grib2/gblav2.15090800_F054
-rw-r--r--. 1
ldm ldm
69196832 Sep
7 22:48
/data/grib2/gblav2.15090800_F057
-rw-r--r--. 1
ldm ldm
69335487 Sep
7 22:50
/data/grib2/gblav2.15090800_F060
-rw-r--r--. 1
ldm ldm
69261676 Sep
7 22:50
/data/grib2/gblav2.15090800_F063
-rw-r--r--. 1
ldm ldm
69166068 Sep
7 22:51
/data/grib2/gblav2.15090800_F066
-rw-r--r--. 1
ldm ldm
69054105 Sep
7 22:53
/data/grib2/gblav2.15090800_F069
-rw-r--r--. 1
ldm ldm
68895264 Sep
7 22:54
/data/grib2/gblav2.15090800_F072
-rw-r--r--. 1
ldm ldm
69202038 Sep
7 22:56
/data/grib2/gblav2.15090800_F075
-rw-r--r--. 1
ldm ldm
69339334 Sep
7 22:56
/data/grib2/gblav2.15090800_F078
-rw-r--r--. 1
ldm ldm
69181930 Sep
7 22:57
/data/grib2/gblav2.15090800_F081
-rw-r--r--. 1
ldm ldm
69674148 Sep
7 22:58
/data/grib2/gblav2.15090800_F084
-rw-r--r--. 1
ldm ldm
69383769 Sep
7 22:58
/data/grib2/gblav2.15090800_F087
-rw-r--r--. 1
ldm ldm
69645526 Sep
7 22:59
/data/grib2/gblav2.15090800_F090
-rw-r--r--. 1
ldm ldm
69119323 Sep
7 23:00
/data/grib2/gblav2.15090800_F093
-rw-r--r--. 1
ldm ldm
69363296 Sep
7 23:01
/data/grib2/gblav2.15090800_F096
-rw-r--r--. 1
ldm ldm
69030287 Sep
7 23:03
/data/grib2/gblav2.15090800_F099
-rw-r--r--. 1
ldm ldm
69819322 Sep
7 23:03
/data/grib2/gblav2.15090800_F102
-rw-r--r--. 1
ldm ldm
69498561 Sep
7 23:04
/data/grib2/gblav2.15090800_F105
-rw-r--r--. 1
ldm ldm
69690447 Sep
7 23:05
/data/grib2/gblav2.15090800_F108
-rw-r--r--. 1
ldm ldm
69274213 Sep
7 23:06
/data/grib2/gblav2.15090800_F111
-rw-r--r--. 1
ldm ldm
70089206 Sep
7 23:07
/data/grib2/gblav2.15090800_F114
-rw-r--r--. 1
ldm ldm
70007688 Sep
7 23:08
/data/grib2/gblav2.15090800_F117
-rw-r--r--. 1
ldm ldm
70237308 Sep
7 23:08
/data/grib2/gblav2.15090800_F120
-rw-r--r--. 1
ldm ldm
69849708 Sep
7 23:09
/data/grib2/gblav2.15090800_F123
-rw-r--r--. 1
ldm ldm
69883550 Sep
7 23:11
/data/grib2/gblav2.15090800_F126
-rw-r--r--. 1
ldm ldm
69586365 Sep
7 23:11
/data/grib2/gblav2.15090800_F129
-rw-r--r--. 1
ldm ldm
70110782 Sep
7 23:12
/data/grib2/gblav2.15090800_F132
-rw-r--r--. 1
ldm ldm
69430545 Sep
7 23:13
/data/grib2/gblav2.15090800_F135
-rw-r--r--. 1
ldm ldm
69461630 Sep
7 23:14
/data/grib2/gblav2.15090800_F138
-rw-r--r--. 1
ldm ldm
69264487 Sep
7 23:15
/data/grib2/gblav2.15090800_F141
-rw-r--r--. 1
ldm ldm
69553206 Sep
7 23:16
/data/grib2/gblav2.15090800_F144
-rw-r--r--. 1
ldm ldm
68924371 Sep
7 23:17
/data/grib2/gblav2.15090800_F147
-rw-r--r--. 1
ldm ldm
69191965 Sep
7 23:17
/data/grib2/gblav2.15090800_F150
-rw-r--r--. 1
ldm ldm
68639462 Sep
7 23:19
/data/grib2/gblav2.15090800_F153
-rw-r--r--. 1
ldm ldm
69035706 Sep
7 23:22
/data/grib2/gblav2.15090800_F156
-rw-r--r--. 1
ldm ldm
68831618 Sep
7 23:25
/data/grib2/gblav2.15090800_F159
-rw-r--r--. 1
ldm ldm
69428952 Sep
7 23:27
/data/grib2/gblav2.15090800_F162
-rw-r--r--. 1
ldm ldm
69514672 Sep
7 23:28
/data/grib2/gblav2.15090800_F165
-rw-r--r--. 1
ldm ldm
69614097 Sep
7 23:29
/data/grib2/gblav2.15090800_F168
-rw-r--r--. 1
ldm ldm
69404524 Sep
7 23:29
/data/grib2/gblav2.15090800_F171
-rw-r--r--. 1
ldm ldm
69534566 Sep
7 23:30
/data/grib2/gblav2.15090800_F174
-rw-r--r--. 1
ldm ldm
69528455 Sep
7 23:31
/data/grib2/gblav2.15090800_F177
-rw-r--r--. 1
ldm ldm
69747643 Sep
7 23:31
/data/grib2/gblav2.15090800_F180
-rw-r--r--. 1
ldm ldm
69397125 Sep
7 23:32
/data/grib2/gblav2.15090800_F183
-rw-r--r--. 1
ldm ldm
69973323 Sep
7 23:32
/data/grib2/gblav2.15090800_F186
-rw-r--r--. 1
ldm ldm
69070113 Sep
7 23:33
/data/grib2/gblav2.15090800_F189
-rw-r--r--. 1
ldm ldm
69586837 Sep
7 23:34
/data/grib2/gblav2.15090800_F192
-rw-r--r--. 1
ldm ldm
69202267 Sep
7 23:34
/data/grib2/gblav2.15090800_F195
-rw-r--r--. 1
ldm ldm
69169373 Sep
7 23:35
/data/grib2/gblav2.15090800_F198
-rw-r--r--. 1
ldm ldm
68193948 Sep
7 23:36
/data/grib2/gblav2.15090800_F201
-rw-r--r--. 1
ldm ldm
67963148 Sep
7 23:38
/data/grib2/gblav2.15090800_F204
-rw-r--r--. 1
ldm ldm
67689203 Sep
7 23:39
/data/grib2/gblav2.15090800_F207
-rw-r--r--. 1
ldm ldm
68079977 Sep
7 23:41
/data/grib2/gblav2.15090800_F210
-rw-r--r--. 1
ldm ldm
68931672 Sep
7 23:43
/data/grib2/gblav2.15090800_F213
-rw-r--r--. 1
ldm ldm
68749459 Sep
7 23:46
/data/grib2/gblav2.15090800_F216
-rw-r--r--. 1
ldm ldm
68739072 Sep
7 23:46
/data/grib2/gblav2.15090800_F219
-rw-r--r--. 1
ldm ldm
68789427 Sep
7 23:47
/data/grib2/gblav2.15090800_F222
-rw-r--r--. 1
ldm ldm
68031035 Sep
7 23:48
/data/grib2/gblav2.15090800_F225
-rw-r--r--. 1
ldm ldm
68735199 Sep
7 23:48
/data/grib2/gblav2.15090800_F228
-rw-r--r--. 1
ldm ldm
65347330 Sep
7 23:49
/data/grib2/gblav2.15090800_F231
-rw-r--r--. 1
ldm ldm
65891902 Sep
7 23:49
/data/grib2/gblav2.15090800_F234
-rw-r--r--. 1
ldm ldm
65383729 Sep
7 23:50
/data/grib2/gblav2.15090800_F237
-rw-r--r--. 1
ldm ldm
66299227 Sep
7 23:50
/data/grib2/gblav2.15090800_F240
-rw-r--r--. 1
ldm ldm
64525715 Sep
7 23:52
/data/grib2/gblav2.15090800_F252
-rw-r--r--. 1
ldm ldm
64515690 Sep
7 23:53
/data/grib2/gblav2.15090800_F264
-rw-r--r--. 1
ldm ldm
63803271 Sep
7 23:53
/data/grib2/gblav2.15090800_F276
-rw-r--r--. 1
ldm ldm
63261621 Sep
7 23:54
/data/grib2/gblav2.15090800_F288
-rw-r--r--. 1
ldm ldm
64171542 Sep
7 23:54
/data/grib2/gblav2.15090800_F300
-rw-r--r--. 1
ldm ldm
64308576 Sep
7 23:56
/data/grib2/gblav2.15090800_F312
-rw-r--r--. 1
ldm ldm
64334459 Sep
7 23:58
/data/grib2/gblav2.15090800_F324
-rw-r--r--. 1
ldm ldm
64189700 Sep
7 23:59
/data/grib2/gblav2.15090800_F336
-rw-r--r--. 1
ldm ldm
63829248 Sep
7 23:59
/data/grib2/gblav2.15090800_F348
-rw-r--r--. 1
ldm ldm
64655803 Sep
8 00:00
/data/grib2/gblav2.15090800_F360
-rw-r--r--. 1
ldm ldm
64436657 Sep
8 00:07
/data/grib2/gblav2.15090800_F372
-rw-r--r--. 1
ldm ldm
64546095 Sep
8 00:12
/data/grib2/gblav2.15090800_F384
-rw-r--r--. 1
ldm ldm
61169101 Sep
8 04:26
/data/grib2/gblav2.15090806_F000
-rw-r--r--. 1
ldm ldm
67422108 Sep
8 04:28
/data/grib2/gblav2.15090806_F003
-rw-r--r--. 1
ldm ldm
68374534 Sep
8 04:31
/data/grib2/gblav2.15090806_F006
-rw-r--r--. 1
ldm ldm
68543418 Sep
8 04:33
/data/grib2/gblav2.15090806_F009
-rw-r--r--. 1
ldm ldm
69298218 Sep
8 04:35
/data/grib2/gblav2.15090806_F012
-rw-r--r--. 1
ldm ldm
69188133 Sep
8 04:36
/data/grib2/gblav2.15090806_F015
-rw-r--r--. 1
ldm ldm
69917655 Sep
8 04:37
/data/grib2/gblav2.15090806_F018
-rw-r--r--. 1
ldm ldm
69558566 Sep
8 04:38
/data/grib2/gblav2.15090806_F021
-rw-r--r--. 1
ldm ldm
69659459 Sep
8 04:38
/data/grib2/gblav2.15090806_F024
-rw-r--r--. 1
ldm ldm
69288102 Sep
8 04:40
/data/grib2/gblav2.15090806_F027
-rw-r--r--. 1
ldm ldm
68686968 Sep
8 04:40
/data/grib2/gblav2.15090806_F030
-rw-r--r--. 1
ldm ldm
68640234 Sep
8 04:42
/data/grib2/gblav2.15090806_F033
-rw-r--r--. 1
ldm ldm
69544506 Sep
8 04:42
/data/grib2/gblav2.15090806_F036
-rw-r--r--. 1
ldm ldm
68462036 Sep
8 04:43
/data/grib2/gblav2.15090806_F039
-rw-r--r--. 1
ldm ldm
69287354 Sep
8 04:44
/data/grib2/gblav2.15090806_F042
-rw-r--r--. 1
ldm ldm
69228412 Sep
8 04:45
/data/grib2/gblav2.15090806_F045
-rw-r--r--. 1
ldm ldm
69444769 Sep
8 04:46
/data/grib2/gblav2.15090806_F048
-rw-r--r--. 1
ldm ldm
69089036 Sep
8 04:47
/data/grib2/gblav2.15090806_F051
-rw-r--r--. 1
ldm ldm
69542812 Sep
8 04:48
/data/grib2/gblav2.15090806_F054
-rw-r--r--. 1
ldm ldm
69377775 Sep
8 04:49
/data/grib2/gblav2.15090806_F057
-rw-r--r--. 1
ldm ldm
69324867 Sep
8 04:50
/data/grib2/gblav2.15090806_F060
-rw-r--r--. 1
ldm ldm
69313464 Sep
8 04:51
/data/grib2/gblav2.15090806_F063
-rw-r--r--. 1
ldm ldm
69820155 Sep
8 04:52
/data/grib2/gblav2.15090806_F066
-rw-r--r--. 1
ldm ldm
69484687 Sep
8 04:52
/data/grib2/gblav2.15090806_F069
-rw-r--r--. 1
ldm ldm
69581997 Sep
8 04:53
/data/grib2/gblav2.15090806_F072
-rw-r--r--. 1
ldm ldm
69189693 Sep
8 04:54
/data/grib2/gblav2.15090806_F075
-rw-r--r--. 1
ldm ldm
69751906 Sep
8 04:55
/data/grib2/gblav2.15090806_F078
-rw-r--r--. 1
ldm ldm
69558875 Sep
8 04:56
/data/grib2/gblav2.15090806_F081
-rw-r--r--. 1
ldm ldm
69903084 Sep
8 04:58
/data/grib2/gblav2.15090806_F084
-rw-r--r--. 1
ldm ldm
69627748 Sep
8 04:59
/data/grib2/gblav2.15090806_F087
-rw-r--r--. 1
ldm ldm
69678696 Sep
8 04:59
/data/grib2/gblav2.15090806_F090
-rw-r--r--. 1
ldm ldm
69497446 Sep
8 05:00
/data/grib2/gblav2.15090806_F093
-rw-r--r--. 1
ldm ldm
69735442 Sep
8 05:01
/data/grib2/gblav2.15090806_F096
-rw-r--r--. 1
ldm ldm
69767861 Sep
8 05:02
/data/grib2/gblav2.15090806_F099
-rw-r--r--. 1
ldm ldm
70169785 Sep
8 05:03
/data/grib2/gblav2.15090806_F102
-rw-r--r--. 1
ldm ldm
69625644 Sep
8 05:04
/data/grib2/gblav2.15090806_F105
-rw-r--r--. 1
ldm ldm
69954293 Sep
8 05:05
/data/grib2/gblav2.15090806_F108
-rw-r--r--. 1
ldm ldm
69996186 Sep
8 05:06
/data/grib2/gblav2.15090806_F111
-rw-r--r--. 1
ldm ldm
70297897 Sep
8 05:06
/data/grib2/gblav2.15090806_F114
-rw-r--r--. 1
ldm ldm
70037957 Sep
8 05:08
/data/grib2/gblav2.15090806_F117
-rw-r--r--. 1
ldm ldm
69968183 Sep
8 05:08
/data/grib2/gblav2.15090806_F120
-rw-r--r--. 1
ldm ldm
69564905 Sep
8 05:10
/data/grib2/gblav2.15090806_F123
-rw-r--r--. 1
ldm ldm
69725865 Sep
8 05:11
/data/grib2/gblav2.15090806_F126
-rw-r--r--. 1
ldm ldm
69349475 Sep
8 05:11
/data/grib2/gblav2.15090806_F129
-rw-r--r--. 1
ldm ldm
69625604 Sep
8 05:12
/data/grib2/gblav2.15090806_F132
-rw-r--r--. 1
ldm ldm
69392152 Sep
8 05:15
/data/grib2/gblav2.15090806_F135
-rw-r--r--. 1
ldm ldm
69551134 Sep
8 05:18
/data/grib2/gblav2.15090806_F138
-rw-r--r--. 1
ldm ldm
69108820 Sep
8 05:19
/data/grib2/gblav2.15090806_F141
-rw-r--r--. 1
ldm ldm
69469618 Sep
8 05:19
/data/grib2/gblav2.15090806_F144
-rw-r--r--. 1
ldm ldm
68774645 Sep
8 05:20
/data/grib2/gblav2.15090806_F147
-rw-r--r--. 1
ldm ldm
69135260 Sep
8 05:20
/data/grib2/gblav2.15090806_F150
-rw-r--r--. 1
ldm ldm
69009857 Sep
8 05:21
/data/grib2/gblav2.15090806_F153
-rw-r--r--. 1
ldm ldm
69647753 Sep
8 05:21
/data/grib2/gblav2.15090806_F156
-rw-r--r--. 1
ldm ldm
69604259 Sep
8 05:22
/data/grib2/gblav2.15090806_F159
-rw-r--r--. 1
ldm ldm
69851358 Sep
8 05:22
/data/grib2/gblav2.15090806_F162
-rw-r--r--. 1
ldm ldm
69621423 Sep
8 05:23
/data/grib2/gblav2.15090806_F165
-rw-r--r--. 1
ldm ldm
69987289 Sep
8 05:24
/data/grib2/gblav2.15090806_F168
-rw-r--r--. 1
ldm ldm
70009168 Sep
8 05:24
/data/grib2/gblav2.15090806_F171
-rw-r--r--. 1
ldm ldm
70272431 Sep
8 05:25
/data/grib2/gblav2.15090806_F174
-rw-r--r--. 1
ldm ldm
69951044 Sep
8 05:26
/data/grib2/gblav2.15090806_F177
-rw-r--r--. 1
ldm ldm
70294466 Sep
8 05:28
/data/grib2/gblav2.15090806_F180
-rw-r--r--. 1
ldm ldm
69693077 Sep
8 05:31
/data/grib2/gblav2.15090806_F183
-rw-r--r--. 1
ldm ldm
70277595 Sep
8 05:35
/data/grib2/gblav2.15090806_F186
-rw-r--r--. 1
ldm ldm
70161497 Sep
8 05:36
/data/grib2/gblav2.15090806_F189
-rw-r--r--. 1
ldm ldm
70075264 Sep
8 05:37
/data/grib2/gblav2.15090806_F192
-rw-r--r--. 1
ldm ldm
69929971 Sep
8 05:37
/data/grib2/gblav2.15090806_F195
-rw-r--r--. 1
ldm ldm
69879151 Sep
8 05:38
/data/grib2/gblav2.15090806_F198
-rw-r--r--. 1
ldm ldm
69726455 Sep
8 05:38
/data/grib2/gblav2.15090806_F201
-rw-r--r--. 1
ldm ldm
70186834 Sep
8 05:38
/data/grib2/gblav2.15090806_F204
-rw-r--r--. 1
ldm ldm
69735649 Sep
8 05:39
/data/grib2/gblav2.15090806_F207
-rw-r--r--. 1
ldm ldm
70062469 Sep
8 05:40
/data/grib2/gblav2.15090806_F210
-rw-r--r--. 1
ldm ldm
69475211 Sep
8 05:40
/data/grib2/gblav2.15090806_F213
-rw-r--r--. 1
ldm ldm
69688060 Sep
8 05:41
/data/grib2/gblav2.15090806_F216
-rw-r--r--. 1
ldm ldm
69169089 Sep
8 05:42
/data/grib2/gblav2.15090806_F219
-rw-r--r--. 1
ldm ldm
69623322 Sep
8 05:42
/data/grib2/gblav2.15090806_F222
-rw-r--r--. 1
ldm ldm
69434126 Sep
8 05:43
/data/grib2/gblav2.15090806_F225
-rw-r--r--. 1
ldm ldm
69447710 Sep
8 05:44
/data/grib2/gblav2.15090806_F228
-rw-r--r--. 1
ldm ldm
69232930 Sep
8 05:44
/data/grib2/gblav2.15090806_F231
-rw-r--r--. 1
ldm ldm
69688395 Sep
8 05:45
/data/grib2/gblav2.15090806_F234
-rw-r--r--. 1
ldm ldm
69476983 Sep
8 05:47
/data/grib2/gblav2.15090806_F237
-rw-r--r--. 1
ldm ldm
70027781 Sep
8 05:50
/data/grib2/gblav2.15090806_F240
-rw-r--r--. 1
ldm ldm
64748968 Sep
8 05:52
/data/grib2/gblav2.15090806_F252
-rw-r--r--. 1
ldm ldm
64729059 Sep
8 05:53
/data/grib2/gblav2.15090806_F264
-rw-r--r--. 1
ldm ldm
64211460 Sep
8 05:53
/data/grib2/gblav2.15090806_F276
-rw-r--r--. 1
ldm ldm
64117374 Sep
8 05:54
/data/grib2/gblav2.15090806_F288
-rw-r--r--. 1
ldm ldm
64123032 Sep
8 05:54
/data/grib2/gblav2.15090806_F300
-rw-r--r--. 1
ldm ldm
64714736 Sep
8 05:56
/data/grib2/gblav2.15090806_F312
-rw-r--r--. 1
ldm ldm
65052210 Sep
8 05:56
/data/grib2/gblav2.15090806_F324
-rw-r--r--. 1
ldm ldm
65123631 Sep
8 05:57
/data/grib2/gblav2.15090806_F336
-rw-r--r--. 1
ldm ldm
64903451 Sep
8 05:58
/data/grib2/gblav2.15090806_F348
-rw-r--r--. 1
ldm ldm
64423290 Sep
8 05:59
/data/grib2/gblav2.15090806_F360
-rw-r--r--. 1
ldm ldm
64365594 Sep
8 05:59
/data/grib2/gblav2.15090806_F372
-rw-r--r--. 1
ldm ldm
63855749 Sep
8 06:07
/data/grib2/gblav2.15090806_F384
0.25 deg:
-rw-r--r--. 1
ldm ldm
180752345 Sep
7 22:25
/data/grib2/gblav0p25.15090800_Fanl
-rw-r--r--. 1
ldm ldm
197882387 Sep
7 22:26
/data/grib2/gblav0p25.15090800_F000
-rw-r--r--. 1
ldm ldm
217304897 Sep
7 22:27
/data/grib2/gblav0p25.15090800_F003
-rw-r--r--. 1
ldm ldm
221447144 Sep
7 22:30
/data/grib2/gblav0p25.15090800_F006
-rw-r--r--. 1
ldm ldm
221383770 Sep
7 22:32
/data/grib2/gblav0p25.15090800_F009
-rw-r--r--. 1
ldm ldm
222748480 Sep
7 22:34
/data/grib2/gblav0p25.15090800_F012
-rw-r--r--. 1
ldm ldm
224209489 Sep
7 22:35
/data/grib2/gblav0p25.15090800_F015
-rw-r--r--. 1
ldm ldm
226360332 Sep
7 22:36
/data/grib2/gblav0p25.15090800_F018
-rw-r--r--. 1
ldm ldm
225185199 Sep
7 22:37
/data/grib2/gblav0p25.15090800_F021
-rw-r--r--. 1
ldm ldm
226720828 Sep
7 22:38
/data/grib2/gblav0p25.15090800_F024
-rw-r--r--. 1
ldm ldm
224211990 Sep
7 22:38
/data/grib2/gblav0p25.15090800_F027
-rw-r--r--. 1
ldm ldm
226623368 Sep
7 22:40
/data/grib2/gblav0p25.15090800_F030
-rw-r--r--. 1
ldm ldm
224601041 Sep
7 22:41
/data/grib2/gblav0p25.15090800_F033
-rw-r--r--. 1
ldm ldm
225696377 Sep
7 22:41
/data/grib2/gblav0p25.15090800_F036
-rw-r--r--. 1
ldm ldm
224803488 Sep
7 22:43
/data/grib2/gblav0p25.15090800_F039
-rw-r--r--. 1
ldm ldm
225463303 Sep
7 22:44
/data/grib2/gblav0p25.15090800_F042
-rw-r--r--. 1
ldm ldm
224172234 Sep
7 22:44
/data/grib2/gblav0p25.15090800_F045
-rw-r--r--. 1
ldm ldm
225750651 Sep
7 22:45
/data/grib2/gblav0p25.15090800_F048
-rw-r--r--. 1
ldm ldm
224513834 Sep
7 22:46
/data/grib2/gblav0p25.15090800_F051
-rw-r--r--. 1
ldm ldm
225871134 Sep
7 22:47
/data/grib2/gblav0p25.15090800_F054
-rw-r--r--. 1
ldm ldm
224871484 Sep
7 22:48
/data/grib2/gblav0p25.15090800_F057
-rw-r--r--. 1
ldm ldm
225954437 Sep
7 22:50
/data/grib2/gblav0p25.15090800_F060
-rw-r--r--. 1
ldm ldm
225600052 Sep
7 22:50
/data/grib2/gblav0p25.15090800_F063
-rw-r--r--. 1
ldm ldm
225672348 Sep
7 22:51
/data/grib2/gblav0p25.15090800_F066
-rw-r--r--. 1
ldm ldm
225064451 Sep
7 22:52
/data/grib2/gblav0p25.15090800_F069
-rw-r--r--. 1
ldm ldm
225318101 Sep
7 22:54
/data/grib2/gblav0p25.15090800_F072
-rw-r--r--. 1
ldm ldm
225303961 Sep
7 22:57
/data/grib2/gblav0p25.15090800_F075
-rw-r--r--. 1
ldm ldm
226805528 Sep
7 23:03
/data/grib2/gblav0p25.15090800_F078
-rw-r--r--. 1
ldm ldm
226187062 Sep
7 23:08
/data/grib2/gblav0p25.15090800_F081
-rw-r--r--. 1
ldm ldm
227313364 Sep
7 23:09
/data/grib2/gblav0p25.15090800_F084
-rw-r--r--. 1
ldm ldm
226221831 Sep
7 23:15
/data/grib2/gblav0p25.15090800_F087
-rw-r--r--. 1
ldm ldm
197951753 Sep
8 04:25
/data/grib2/gblav0p25.15090806_F000
-rw-r--r--. 1
ldm ldm
181438882 Sep
8 04:26
/data/grib2/gblav0p25.15090806_Fanl
-rw-r--r--. 1
ldm ldm
218273142 Sep
8 04:28
/data/grib2/gblav0p25.15090806_F003
-rw-r--r--. 1
ldm ldm
222180270 Sep
8 04:30
/data/grib2/gblav0p25.15090806_F006
-rw-r--r--. 1
ldm ldm
222627637 Sep
8 04:32
/data/grib2/gblav0p25.15090806_F009
-rw-r--r--. 1
ldm ldm
225440960 Sep
8 04:34
/data/grib2/gblav0p25.15090806_F012
-rw-r--r--. 1
ldm ldm
224877734 Sep
8 04:35
/data/grib2/gblav0p25.15090806_F015
-rw-r--r--. 1
ldm ldm
226700650 Sep
8 04:36
/data/grib2/gblav0p25.15090806_F018
-rw-r--r--. 1
ldm ldm
225325799 Sep
8 04:37
/data/grib2/gblav0p25.15090806_F021
-rw-r--r--. 1
ldm ldm
226163438 Sep
8 04:38
/data/grib2/gblav0p25.15090806_F024
-rw-r--r--. 1
ldm ldm
225234793 Sep
8 04:39
/data/grib2/gblav0p25.15090806_F027
-rw-r--r--. 1
ldm ldm
224315172 Sep
8 04:40
/data/grib2/gblav0p25.15090806_F030
-rw-r--r--. 1
ldm ldm
223485303 Sep
8 04:41
/data/grib2/gblav0p25.15090806_F033
-rw-r--r--. 1
ldm ldm
226101395 Sep
8 04:42
/data/grib2/gblav0p25.15090806_F036
-rw-r--r--. 1
ldm ldm
222880336 Sep
8 04:43
/data/grib2/gblav0p25.15090806_F039
-rw-r--r--. 1
ldm ldm
225276943 Sep
8 04:44
/data/grib2/gblav0p25.15090806_F042
-rw-r--r--. 1
ldm ldm
225167793 Sep
8 04:45
/data/grib2/gblav0p25.15090806_F045
-rw-r--r--. 1
ldm ldm
225771493 Sep
8 04:46
/data/grib2/gblav0p25.15090806_F048
-rw-r--r--. 1
ldm ldm
225066649 Sep
8 04:47
/data/grib2/gblav0p25.15090806_F051
-rw-r--r--. 1
ldm ldm
225905191 Sep
8 04:47
/data/grib2/gblav0p25.15090806_F054
-rw-r--r--. 1
ldm ldm
225706912 Sep
8 04:48
/data/grib2/gblav0p25.15090806_F057
-rw-r--r--. 1
ldm ldm
225891555 Sep
8 04:49
/data/grib2/gblav0p25.15090806_F060
-rw-r--r--. 1
ldm ldm
225723607 Sep
8 04:50
/data/grib2/gblav0p25.15090806_F063
-rw-r--r--. 1
ldm ldm
227329359 Sep
8 04:51
/data/grib2/gblav0p25.15090806_F066
-rw-r--r--. 1
ldm ldm
226381130 Sep
8 04:52
/data/grib2/gblav0p25.15090806_F069
-rw-r--r--. 1
ldm ldm
227000926 Sep
8 04:53
/data/grib2/gblav0p25.15090806_F072
-rw-r--r--. 1
ldm ldm
225483067 Sep
8 04:54
/data/grib2/gblav0p25.15090806_F075
-rw-r--r--. 1
ldm ldm
227295269 Sep
8 04:55
/data/grib2/gblav0p25.15090806_F078
-rw-r--r--. 1
ldm ldm
226316715 Sep
8 04:55
/data/grib2/gblav0p25.15090806_F081
-rw-r--r--. 1
ldm ldm
227632093 Sep
8 04:57
/data/grib2/gblav0p25.15090806_F084
-rw-r--r--. 1
ldm ldm
226447758 Sep
8 05:01
/data/grib2/gblav0p25.15090806_F087
Pete
On 09/08/2015 08:43 AM, Tyle, Kevin R wrote:
Hi Pete,
We’ve had incomplete GFS the last two runs (00 and 06 UTC today) …
how did things look on your end?
Thanks,
Kevin
_____________________________________________
Kevin Tyle, Systems Administrator
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 235, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________
--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx
--
Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email: aap1@xxxxxxx, phone: 814-863-1563
_______________________________________________
conduit mailing list
conduit@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit:
http://www.unidata.ucar.edu/mailing_lists/
_______________________________________________
Ncep.list.pmb-dataflow mailing list
Ncep.list.pmb-dataflow@xxxxxxxxxxxxxxxxxxxx
https://www.lstsrv.ncep.noaa.gov/mailman/listinfo/ncep.list.pmb-dataflow