Re: [python-users] Siphon example for reanalysis

On Fri, May 12, 2017 at 7:31 AM, Scott Collis <scollis.acrf@xxxxxxxxx>
wrote:

> Hey Unidata Pythonistas
>
> Before I do my own work, I was wondering if someone had an example of
> using Siphon to extract reanalysis data. Specifically, I want to extract
> a column of values at a single lat/lon for ~20 years.
>
>
Scott,

Here's some code I had lying around for accessing ESRL's NARR archive:

from calendar import monthrange
from datetime import datetime
import itertools

from netCDF4 import date2num, num2date
import numpy as np
from pyproj import Proj
from siphon.catalog import TDSCatalog
import xarray as xr

base_catalog = ('http://www.esrl.noaa.gov/psd/thredds/catalog/Datasets/'
                'NARR/pressure/catalog.xml')
main_cat = TDSCatalog(base_catalog)

# Spatial locations (lat, lon) at which to extract columns
points = [(35.25, -97.1), (40, -105)]
lats, lons = map(np.array, zip(*points))
fields = ['air', 'hgt', 'shum', 'uwnd', 'vwnd']

for year, month, var in itertools.product((2013, 2014, 2015),
                                          range(1, 13), fields):
    # Figure out what to grab
    dsname = '{}.{:4d}{:02d}.nc'.format(var, year, month)
    print('{}: downloading...'.format(dsname), end='')

    # Grab it using opendap--manually convert to CF to work around the
    # fact that missing_value and _FillValue differ
    ds = xr.open_dataset(main_cat.datasets[dsname].access_urls['OPENDAP'],
                         decode_cf=False)
    ds = xr.conventions.decode_cf(ds, mask_and_scale=False)

    # Grab the projection variable and convert our points to its
    # x/y coordinates (probably not strictly necessary)
    proj_var = ds[ds[var].grid_mapping]
    proj = Proj(proj='lcc', lat_0=proj_var.latitude_of_projection_origin,
                lon_0=proj_var.longitude_of_central_meridian,
                lat_1=proj_var.standard_parallel[0],
                lat_2=proj_var.standard_parallel[1],
                x_0=proj_var.false_easting, y_0=proj_var.false_northing,
                ellps='sphere')
    x, y = proj(lons, lats)

    # Subset the data to the nearest grid point for each location
    print('subsetting...', end='')
    pt_ds = ds.sel_points(x=x, y=y, method='nearest')
    print('saving...', end='')

    # Save to disk
    pt_ds.to_netcdf(dsname)
    print('done.')

I found it best to use xarray to do the subsetting I want and then save a
copy of that data locally, because the remote access does take *a while*.
The problem is that nobody has good aggregated collections set up on their
THREDDS servers, so you have to do the aggregation on the client side.

Once you have all of these netCDF files downloaded and saved, you can use
xarray's open_mfdataset to combine them into a single dataset, as in the
sketch below.
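
For example, here's a minimal sketch of that last step. Assumptions: dask
is installed (open_mfdataset needs it), the monthly files for the 'air'
field were saved with the names generated above, and sel_points used its
default 'points' dimension name:

import xarray as xr

# Open all of the locally saved monthly files for one field; xarray
# concatenates them along the time dimension automatically.
combined = xr.open_mfdataset('air.*.nc')

# The full multi-year column at the first requested point, pulled
# into memory
column = combined['air'].isel(points=0).load()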

Ryan

-- 
Ryan May, Ph.D.
Software Engineer
UCAR/Unidata
Boulder, CO