[netcdfgroup] netCDF operators NCO version 4.2.2 are ready

The netCDF operators NCO version 4.2.2 are ready.

http://nco.sf.net (Homepage)
http://dust.ess.uci.edu/nco (Homepage "mirror")

This release finishes some older work and introduces new functionality
supported by our NASA ACCESS grant. Old stuff first:
Optional header padding and negative hyperslab indices (which count
backwards from the end of a dimension) are now supported by all
operators for which they make sense. And ncwa is now present
in the Windows tarball.

This release improves netCDF4/HDF5 support in ncks and ncecat.
ncks now supports printing and subsetting group hierarchies.
Similarly, ncecat now aggregates files into group hierarchies.
They will work as is on most netCDF4 files. They do not yet support
user-defined and compound types, which are still rarely seen.

Let us know how you like or dislike the way we have implemented the
equivalent functionality for netCDF4 that NCO already had for netCDF3.
Your feedback will help us finalize details like the printed output
format for group information and the syntax of switches.
This improved netCDF4 support will eventually come to all of NCO.

Work on NCO 4.2.3 is underway: better netCDF4 support for ncrename
and ncbo operators.

Enjoy,
Charlie

"New stuff" in 4.2.2 summary (full details always in ChangeLog):

NEW FEATURES:

A. ncks and ncecat support netCDF4 groups.
   The key new switches are -g and -G, which do what you would expect:
   extract (or exclude) members of the specified (with -g) groups and
   place them in the specified (with -G) output group. Regular
   expressions work on group names as on variable names, giving users
   precise control over subsetting and hyperslabbing netCDF4 files:
   % ncks -g grp1,grp2 in.nc out.nc
   http://nco.sf.net/nco.html#sbs
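The -g and -G switches can be combined to extract a group and rename it
on output. A minimal sketch, assuming in.nc contains a group named grp1
(the group names here are hypothetical):

```shell
# Extract members of group grp1 and place them in output group newgrp
# (grp1 and newgrp are hypothetical names, for illustration only)
ncks -g grp1 -G newgrp in.nc out.nc
```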

B. ncecat supports Group AGgregation (GAG).
   GAG means placing each input file into its own output file group.
   By default ncecat still glues input files together with a record
   dimension (whose size is the number of input files). This is called
   Record AGgregation (RAG). RAG can only be used on classic input
   files that have the same shape. GAG must be used for files of
   different shapes or files that contain groups.
   Users may choose GAG instead of RAG with the --gag switch.
   The -G switch for choosing output group names also implies GAG:
   ncecat -O -G ensemble in1.nc in2.nc in3.nc out.nc
   ncecat -O --gag       in1.nc in2.nc in3.nc out.nc
   http://nco.sf.net/nco.html#ncecat

C. ncks now prints missing values as underscores ("_") by default.
   Previously ncks printed the numeric representation, e.g., "1.0e36".
   The new behavior mimics ncdump and looks cleaner:
   % ncks -s '%+5.1f, ' -H -C -v mss_val in.nc
   +73.0, _, +73.0, _,
   To revert to the old behavior, use the new --no_blank switch.
   http://nco.sf.net/nco.html#xmp_ncks

D. ncap2 now understands that negative integers as min or max elements
   of hyperslab specifications indicate offsets from the end.
   The other NCO operators initiated this support as of NCO 4.2.1.
   NCO support for this convention is now complete.
   http://nco.sf.net/nco.html#hyp
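   A hedged sketch of the convention (the dimension and variable names
   below are hypothetical):

```shell
# Extract the last three elements of dimension time; -3 and -1 count
# backwards from the dimension's end (time is a hypothetical name)
ncks -d time,-3,-1 in.nc out.nc
# The same convention now works inside ncap2 hyperslab specifications
ncap2 -O -s 'last3=T(-3:-1);' in.nc out.nc
```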

E. Our distribution of native-Windows binaries now includes ncwa,
   making it a complete NCO distribution built with Windows Visual Studio.
   Due to limitations of the Windows programming environment, however,
   these executables lack some powerful features present in UNIX:
   regular expressions, globbing, and network capabilities.
   Download your self-extracting executables today:
   http://nco.sf.net#windows

F. The --hdr_pad option is now supported by all operators.
   This option is useful in preventing re-copying of netCDF3 files
   whose metadata may be changed after the file is first written.
   Formerly it was present only in ncatted, ncks, and ncrename.
   http://nco.sf.net/nco.html#hdr
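   For example, one might reserve extra header space when writing a file
   whose metadata will later be edited in place; the pad size below is
   an illustrative value:

```shell
# Reserve 10000 bytes of header padding so subsequent metadata edits
# (e.g., with ncatted) need not re-copy the entire file
ncks --hdr_pad=10000 in.nc out.nc
```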

G. ncap2's bilinear_interp() and bilinear_interp_wrap() now have
   semi-intelligent handling of missing values. Thanks to Henry.
   http://nco.sf.net/nco.html#bln_ntp

BUG FIXES:

A. ncks --fix_rec_dmn could misbehave since 20111130. Under some
   circumstances it would fail to convert the record dimension to a
   fixed dimension, but it would not produce any errors so the user
   was unaware that the operation had failed. This has been fixed.

B. ncks now dies when a user attempts --mk_rec_dmn with a non-existent
   dimension.

C. Fixed ncpdq, which sometimes exited instead of re-ordering dimensions
   in variables that had a record dimension on input but a fixed
   dimension on output, i.e., variables whose record dimension became
   fixed (were neutered?) in order to satisfy the re-ordering of other
   variables.

D. ncatted stopped adding trailing NULs to string (NC_CHAR) attributes
   in NCO version 4.0.2. This was inadvertent. ncatted once again
   ensures string (NC_CHAR) attributes are NUL-terminated, as per the
   netCDF Best Practices recommendation.
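   As a sketch, adding or overwriting an NC_CHAR attribute with ncatted
   will once again store it NUL-terminated (the attribute, variable, and
   value below are illustrative):

```shell
# Overwrite the units attribute of variable T with a character string;
# ncatted again appends the trailing NUL (names/values illustrative)
ncatted -O -a units,T,o,c,"kelvin" in.nc out.nc
```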

E. Attempts to reach non-existing files behind DAP servers can no
   longer lead to multiple attempts to find the files with wget.
   At most one wget will be attempted.

KNOWN BUGS NOT YET FIXED:

   This section of the ANNOUNCE file reports and reminds users of the
   existence and severity of known, not yet fixed, problems.
   These problems occur with NCO 4.2.2 built/tested with netCDF
   4.2.1 on top of HDF5 1.8.9 with these methods:

   cd ~/nco;./configure --enable-netcdf4  # Configure mechanism -or-
   cd ~/nco/bld;make dir;make all;make ncap2 # Old Makefile mechanism

A. NOT YET FIXED
   Correctly read netCDF4 input over DAP, write netCDF4 output, then
   read the resulting file.
   Replacing netCDF4 with netCDF3 in either location of the preceding
   sentence leads to success.
   DAP non-transparency: Works locally, fails through DAP server.
   Unclear whether the resulting file is "legal" because of dimension
   ID ordering assumptions

   Demonstration:
   ncks -4 -O -v three_dmn_rec_var \
     http://motherlode.ucar.edu:8080/thredds/dodsC/testdods/in_4.nc ~/foo.nc
   ncks ~/foo.nc # breaks with "NetCDF: Invalid dimension ID or name"

   20120731: Unable to verify since in_4.nc no longer accessible
   Bug report filed: netCDF #QUN-641037: dimension ID ordering assumptions

B. NOT YET FIXED
   netCDF4 library fails when renaming dimension and variable using
   that dimension, in either order. Works fine with netCDF3.
   Problem with netCDF4 library implementation.

   Demonstration:
   ncks -O -4 -v lat_T42 ~/nco/data/in.nc ~/foo.nc
   ncrename -O -D 2 -d lat_T42,lat -v lat_T42,lat ~/foo.nc ~/foo2.nc
   # Breaks with "NetCDF: HDF error"
   ncks -m ~/foo.nc

   20121025: Verified problem still exists
   Bug report filed: netCDF #YQN-334036: problem renaming dimension
   and coordinate in netCDF4 file

C. NOT YET FIXED
   Unable to retrieve contents of variables whose names include a
   period '.'. Metadata is returned successfully; data is not.
   DAP non-transparency: Works locally, fails through DAP server.

   Demonstration:
   ncks -O -C -D 3 -v var_nm.dot \
     -p http://motherlode.ucar.edu:8080/thredds/dodsC/testdods in.nc
   # Fails to find variable

   20120731: Verified problem still exists.
   Stopped testing because inclusion of var_nm.dot broke all test scripts.
   NB: Problem is hard to fix since DAP interprets '.' as the structure
   delimiter in the query string of HTTP requests.

   Bug report filed: https://www.unidata.ucar.edu/jira/browse/NCF-47

D. NOT YET FIXED
   Correctly read scalar characters over DAP.
   DAP non-transparency: Works locally, fails through DAP server.
   The problem, IMHO, is with the DAP definition/protocol.

   Demonstration:
   ncks -O -D 1 -H -C -m --md5 -v md5_a \
     -p http://motherlode.ucar.edu:8080/thredds/dodsC/testdods in.nc

   20120801: Verified problem still exists
   Bug report not filed
   Cause: DAP translates scalar characters into 64-element,
   NUL-terminated strings, so MD5 agreement fails

"Sticky" reminders:

A. Pre-built, up-to-date Debian Sid & Ubuntu packages:
   http://nco.sf.net#debian

B. Pre-built Fedora and CentOS RPMs:
   http://nco.sf.net#rpm

C. Pre-built Windows (native) and Cygwin binaries:
   http://nco.sf.net#windows

D. Pre-built AIX binaries:
   http://nco.sf.net#aix

E. Did you try SWAMP (Script Workflow Analysis for MultiProcessing)?
   SWAMP efficiently schedules/executes NCO scripts on remote servers:

   http://swamp.googlecode.com

   SWAMP can work with command-line operator analysis scripts besides NCO.
   If you must transfer lots of data from a server to your client
   before you analyze it, then SWAMP will likely speed things up.

F. NCO support for netCDF4 features is tracked at

   http://nco.sf.net/nco.html#nco4

   NCO supports netCDF4 atomic data types, compression, and chunking.

G. Have you seen the NCO logo candidates by Tony Freeman, Rich
   Signell, Rob Hetland, and Andrea Cimatoribus?
   http://nco.sf.net
   Tell us what you think...


-- 
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(


