The netCDF Operators NCO version 4.3.1 are ready.
http://nco.sf.net (Homepage)
http://dust.ess.uci.edu/nco (Homepage "mirror")
This release solidifies group support in ncbo and makes
group-related features consistent across ncbo, ncecat, and ncks.
These three operators fully support group hierarchies.
Work on NCO 4.3.2 is underway and includes improved netCDF4 support
for more NCO operators and better support for Windows builds
(enabling DAP and UDUnits by default).
Enjoy,
Charlie
"New stuff" in 4.3.1 summary (full details always in ChangeLog):
NEW FEATURES:
A. ncbo now supports Group Path Editing (GPE) and list unions.
These are the same options already supported by ncks and ncecat.
They are invoked with -G and with --union, respectively.
ncbo -g g1 -v v1 --union -G dude -O -p ~/nco/data in_grp.nc in_grp.nc \
  ~/foo.nc
http://nco.sf.net/nco.html#gpe
http://nco.sf.net/nco.html#union
B. ncbo now always works when input files are interchanged, i.e.,
(ncbo fl_1.nc fl_2.nc) = -(ncbo fl_2.nc fl_1.nc).
Formerly, ncbo broadcast variables in fl_2 to match the rank of
variables in fl_1 when necessary, but would fail rather than
broadcast in the opposite direction. Hence one could subtract zonal
averages, e.g., from full fields but not the reverse. Now ncbo
broadcasts variables both ways. If v1 has a larger rank in fl_1 than
in fl_2, then both of these commands work:
ncbo -v v1 fl_1.nc fl_2.nc out.nc # Now works too!
ncbo -v v1 fl_2.nc fl_1.nc out.nc # Always worked
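As an illustrative (hypothetical) workflow, one could form a zonal
mean with ncwa and then difference it against the full field in
either argument order (file names full.nc, zonal.nc, and anom.nc are
placeholders, not NCO test files):
ncwa -O -a lon full.nc zonal.nc  # Average over lon to form zonal mean
ncbo -O full.nc zonal.nc anom.nc # zonal.nc broadcast to rank of full.nc
ncbo -O zonal.nc full.nc anom.nc # Reverse order now broadcasts too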
C. ncecat now supports Record AGgregation (RAG) mode for netCDF4.
Most people have always used RAG mode, the default for ncecat.
RAG mode glues files together with a new record dimension.
Group AGgregation (GAG) mode, by contrast, is specified with --gag,
and was introduced in NCO 4.2.2 to glue files together by placing
them in distinct groups in a netCDF4 output file.
GAG mode always worked for both netCDF3 and netCDF4 input files,
but RAG mode only worked for netCDF3/classic input files.
Now RAG mode works for netCDF4 files too, and it remains the default.
Existing record dimensions in input netCDF4 files are preserved,
and a single glue record dimension is placed in the root group.
ncecat -O in_grp.nc in_grp.nc foo.nc
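For comparison, a minimal GAG invocation of the same test files uses
--gag to place each input file in its own group of the netCDF4
output instead of concatenating along a new record dimension:
ncecat --gag -O in_grp.nc in_grp.nc foo.nc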
http://nco.sf.net/nco.html#ncecat
http://nco.sf.net/nco.html#rag
http://nco.sf.net/nco.html#gag
BUG FIXES:
A. Fixed ncbo bug introduced in version 4.3.0 where some special
exceptions (a subset of http://nco.sf.net/nco.html#prc_xcp)
to variable list processing were inadvertently always turned off.
The bug could cause some grid-related variables (e.g., ntrm and
nbdate) and some non-grid variables (e.g., ORO and gw) to be
subtracted even when that makes no physical sense in the CF context.
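As a hypothetical illustration (in1.nc, in2.nc, and FSNT are
placeholder names, not NCO test files), with the exceptions restored
ncbo again copies CF special-case variables such as gw to the output
rather than differencing them:
ncbo -O -v FSNT,gw in1.nc in2.nc out.nc # FSNT differenced, gw copied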
http://nco.sf.net#bug_ncbo_ccm_ccsm_cf
B. Fixed ncatted bug where leaving the attribute field blank in an
attribute-edit specification could trigger a segfault on netCDF4
files. Bug identified and fix provided by Etienne Tourigny.
http://nco.sf.net#bug_ncatted_strcmp
C. Fixed ncrcat bug that falsely warned of non-monotonicity in input
files with multiple variables treated as record coordinates.
D. Fixed ncbo bug where history attribute was not appended.
KNOWN BUGS NOT YET FIXED:
This section of ANNOUNCE reports and reminds users of the
existence and severity of known, not yet fixed, problems.
These problems occur with NCO 4.3.1 built/tested against netCDF
4.3.0-rc4 on top of HDF5 1.8.9 with these methods:
cd ~/nco;./configure --enable-netcdf4 # Configure mechanism -or-
cd ~/nco/bld;make dir;make all;make ncap2 # Old Makefile mechanism
A. NOT YET FIXED
Correctly read netCDF4 input over DAP, write netCDF4 output, then
read resulting file.
Replacing netCDF4 with netCDF3 in either location of preceding
sentence leads to success.
DAP non-transparency: Works locally, fails through DAP server.
Unclear whether resulting file is "legal" because of dimension ID
ordering assumptions.
Demonstration:
ncks -4 -O -v three_dmn_rec_var \
  http://motherlode.ucar.edu:8080/thredds/dodsC/testdods/in_4.nc ~/foo.nc
ncks ~/foo.nc # Breaks with "NetCDF: Invalid dimension ID or name"
20120731: Unable to verify since in_4.nc no longer accessible on
Unidata DAP server
Bug report filed: netCDF #QUN-641037: dimension ID ordering assumptions
B. NOT YET FIXED
netCDF4 library fails when renaming a dimension and a variable that
uses that dimension, in either order. Works fine with netCDF3.
Problem with netCDF4 library implementation.
Demonstration:
ncks -O -4 -v lat_T42 ~/nco/data/in.nc ~/foo.nc
ncrename -O -D 2 -d lat_T42,lat -v lat_T42,lat ~/foo.nc ~/foo2.nc
# Breaks with "NetCDF: HDF error"
ncks -m ~/foo.nc
20130319: Verified problem still exists
Bug report filed: netCDF #YQN-334036: problem renaming dimension and
coordinate in netCDF4 file
C. NOT YET FIXED (requires change to DAP protocol)
Unable to retrieve contents of variables whose names include a period '.'
Periods are legal characters in netCDF variable names.
Metadata is returned successfully, data is not.
DAP non-transparency: Works locally, fails through DAP server.
Demonstration:
ncks -O -C -D 3 -v var_nm.dot \
  -p http://motherlode.ucar.edu:8080/thredds/dodsC/testdods in.nc
# Fails to find variable
20120731: Verified problem still exists.
Stopped testing because inclusion of var_nm.dot broke all test scripts.
NB: Hard to fix since DAP interprets '.' as structure delimiter in
HTTP query string.
Bug report filed: https://www.unidata.ucar.edu/jira/browse/NCF-47
D. NOT YET FIXED (requires change to DAP protocol)
Correctly read scalar characters over DAP.
DAP non-transparency: Works locally, fails through DAP server.
Problem, IMHO, is with DAP definition/protocol
Demonstration:
ncks -O -D 1 -H -C -m --md5 -v md5_a \
  -p http://motherlode.ucar.edu:8080/thredds/dodsC/testdods in.nc
20120801: Verified problem still exists
Bug report not filed
Cause: DAP translates scalar characters into 64-element,
NUL-terminated strings so MD5 agreement fails
E. NOT YET FIXED
netCDF4 library can create dimensions with duplicate IDs when a
dimension with the same name is defined in both a group and its
ancestor group.
Problem with HDF5 or with netCDF4 library implementation?
Demonstration:
ncks -O -v two_dmn_rec_var ~/nco/data/in_grp.nc ~/foo.nc
ncks -m ~/foo.nc
20130328: Verified problem still exists
(This may be fixed in the netCDF 4.3.0 release...will check again then)
Bug report filed 20120312: netCDF #SHH-257980: Re: [netcdfgroup]
Dimensions IDs
F. NOT YET FIXED
ncdump is unable to dump the NCO test file in_grp.nc
This is not an NCO problem per se, but may indicate a deeper netCDF
bug that affects NCO.
Unclear whether problem with ncdump, HDF5, or with netCDF4 library
implementation.
Demonstration:
ncdump ~/nco/data/in_grp.nc
# Breaks with "NetCDF: Invalid argument Location: file dumplib.c; line 970"
20130319: Verified
Bug report not yet filed
G. NOT YET FIXED
Auxiliary coordinates do not work in ncks versions 4.2.0-4.3.1.
This is an NCO problem discovered just before release.
Demonstration:
ncks -O -X 0.,180.,-30.,30. -v gds_3dvar ~/nco/data/in.nc ~/foo.nc
20130428: Verified
This is our top priority to fix in version 4.3.2.
The workaround is to use NCO 4.1.9 or earlier.
"Sticky" reminders:
A. Pre-built, up-to-date Debian Sid & Ubuntu packages:
http://nco.sf.net#debian
B. Pre-built Fedora and CentOS RPMs:
http://nco.sf.net#rpm
C. Pre-built Windows (native) and Cygwin binaries:
http://nco.sf.net#windows
D. Pre-built AIX binaries:
http://nco.sf.net#aix
E. Did you try SWAMP (Script Workflow Analysis for MultiProcessing)?
SWAMP efficiently schedules/executes NCO scripts on remote servers:
http://swamp.googlecode.com
SWAMP can also handle command-line analysis scripts that use
operators besides NCO.
If you must transfer lots of data from a server to your client
before you analyze it, then SWAMP will likely speed things up.
F. NCO support for netCDF4 features is tracked at
http://nco.sf.net/nco.html#nco4
NCO supports netCDF4 atomic data types, compression, chunking, and
groups.
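A minimal sketch of exercising these features with ncks (in.nc and
out.nc are placeholder file names, and a dimension named time is
assumed to exist in the input): write netCDF4 output with
deflate-level-1 compression and a user-specified chunk size:
ncks -O -4 -L 1 --cnk_dmn time,100 in.nc out.nc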
G. Reminder that ncks and ncecat work on many common HDF5 datasets, e.g.,
NASA AURA HIRDLS HDF-EOS5
NASA ICESat GLAS HDF5
NASA SBUV HDF5...
--
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(