[netcdfgroup] netCDF Operators NCO version 4.3.6 are ready

The netCDF Operators NCO version 4.3.6 are ready.

http://nco.sf.net (Homepage)
http://dust.ess.uci.edu/nco (Homepage "mirror")

The current release fixes a few problems (nice bugs, not nasty
ones) and makes NCO more precise and interoperable with CF checkers.
Versions 4.3.2-4.3.5 suffer from a number of bugs now fixed.
Anyone using those versions is encouraged to upgrade. Sigh :)

The big change is default promotion of single- to double-precision math.
Rounding errors can accumulate to worrisome levels during arithmetic
performed on large (>~10,000) arrays of single-precision floats.
The --dbl switch introduced in 4.3.5 is NCO's new default.
The manual contains a detailed discussion of the trade-offs.
NCO will now be slower and use more memory on such calculations.
But its results will now agree with those of most other (fatter, slower) tools.
Power users can still opt for speed and single-precision with --flt.

Multiple record dimensions in one variable have some interesting
properties. For now, we prefer to avoid them :) Thus we've made it
harder to inadvertently produce them until we revisit the issue later.
ncecat and ncpdq will no longer add to the number of record dimensions
of an existing record variable. Usually that's what you want.
Trust me. I'm from the government :)

Work on NCO 4.3.7 is underway and includes improved netCDF4 support
for more NCO operators (ncatted, ncrename), and improved support for
HDF4 files.

Enjoy,
Charlie

"New stuff" in 4.3.6 summary (full details always in ChangeLog):

NEW FEATURES:

A. New default: Forced conversion of float->double.
   Until 4.3.4 NCO never converted single-precision reals (i.e.,
   floats) to double-precision reals (i.e., doubles) prior to
   arithmetic unless other doubles were involved. In 4.3.5 NCO
   implemented an option, --dbl, to manually force such float->double
   promotion. As of 4.3.6 the promotion employed by --dbl is the
   default and it need not be explicitly requested. The old behavior
   of not promoting floats must now be manually selected with --flt.
   These commands are now identical:
   ncea = ncea --dbl
   ncra = ncra --dbl
   ncwa = ncwa --dbl
   The old behavior of these operators is obtained by
   ncea --flt
   ncra --flt
   ncwa --flt
   Other operators are unaffected.
   The new behavior preserves more precision.
   The old behavior is faster and uses less RAM.
   The manual contains a detailed discussion of the issues:
   http://nco.sf.net/nco.html#dbl
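
   The effect of promotion is easy to demonstrate outside NCO. The
   following pure-Python sketch (illustrative only, not NCO code)
   emulates single-precision accumulation by rounding each partial sum
   to the nearest IEEE float32 via struct, and compares it with
   double-precision accumulation of the same 100,000-element sum:

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

n = 100000      # comparable to the >~10,000-element arrays mentioned above
addend = 0.1    # not exactly representable in binary floating point

# Single-precision accumulation: round the running sum after every addition
sum32 = 0.0
for _ in range(n):
    sum32 = f32(sum32 + f32(addend))

# Double-precision accumulation (what --dbl, now the default, does)
sum64 = 0.0
for _ in range(n):
    sum64 += addend

exact = 10000.0
print("float32 sum:", sum32, "error:", abs(sum32 - exact))
print("float64 sum:", sum64, "error:", abs(sum64 - exact))
```

   On IEEE hardware the single-precision error is many orders of
   magnitude larger than the double-precision error, which is the
   trade-off behind making --dbl the default for ncea, ncra, and ncwa.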

A. ncdismember flattens and can CF-check hierarchical files.
   Some users must flatten hierarchical files to use them with tools,
   such as CF-compliance checkers, that are not "group-aware". The "brute
   force" method is slow: ncdump (or h5dump), strip out group info,
   rebuild each group as flat netCDF3, pass to a checker. ncdismember
   automates this procedure. It calls ncks to extract each leaf-group
   into a netCDF3 file, and calls cfchecker when invoked with an
   optional third argument of 'cf'.
   http://nco.sf.net/nco.html#dismember

BUG FIXES:

A. Improved ncwa scope for weights, masks

B. Fixed ncea 4.3.5 bug that caused a core dump on some files.

C. Fixed ncwa 4.3.5 bug that sometimes changed record dimensions
   into fixed dimensions for dimensions that were not averaged.

KNOWN ISSUES NOT YET FIXED:

   This section of ANNOUNCE reports and reminds users of the
   existence and severity of known, not yet fixed, problems.
   These problems occur with NCO 4.3.6 built/tested with netCDF
   4.3.1-rc3 snapshot 20130827 on top of HDF5 1.8.9 with these
   methods:

   cd ~/nco;./configure --enable-netcdf4  # Configure mechanism -or-
   cd ~/nco/bld;make dir;make all;make ncap2 # Old Makefile mechanism

A. NOT YET FIXED
   netCDF4 library fails when renaming dimension and variable using
   that dimension, in either order. Works fine with netCDF3.
   Problem with netCDF4 library implementation.

   Demonstration:
   ncks -O -4 -v lat_T42 ~/nco/data/in.nc ~/foo.nc
   ncrename -O -D 2 -d lat_T42,lat -v lat_T42,lat ~/foo.nc ~/foo2.nc # Breaks with "NetCDF: HDF error"
   ncks -m ~/foo.nc

   20130724: Verified problem still exists
   Bug report filed: netCDF #YQN-334036: problem renaming dimension and
   coordinate in netCDF4 file

B. NOT YET FIXED (would require DAP protocol change?)
   Unable to retrieve contents of variables whose names include a period '.'
   Periods are legal characters in netCDF variable names.
   Metadata are returned successfully, data are not.
   DAP non-transparency: Works locally, fails through DAP server.

   Demonstration:
   ncks -O -C -D 3 -v var_nm.dot -p http://thredds-test.ucar.edu/thredds/dodsC/testdods in.nc # Fails to find variable

   20130724: Verified problem still exists.
   Stopped testing because inclusion of var_nm.dot broke all test scripts.
   NB: Hard to fix since DAP interprets '.' as structure delimiter in
   HTTP query string.

   Bug report filed: https://www.unidata.ucar.edu/jira/browse/NCF-47

C. NOT YET FIXED (would require DAP protocol change)
   Correctly read scalar characters over DAP.
   DAP non-transparency: Works locally, fails through DAP server.
   Problem, IMHO, is with DAP definition/protocol

   Demonstration:
   ncks -O -D 1 -H -C -m --md5_dgs -v md5_a -p http://thredds-test.ucar.edu/thredds/dodsC/testdods in.nc

   20120801: Verified problem still exists
   Bug report not filed
   Cause: DAP translates scalar characters into 64-element (this
   dimension is user-configurable, but still...), NUL-terminated
   strings so MD5 agreement fails
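
   The digest mismatch is easy to reproduce without DAP. A minimal
   Python sketch (illustrative only, not NCO code) shows that the MD5
   of a lone character can never match the MD5 of that character
   embedded in a 64-byte NUL-padded buffer of the kind a DAP server
   returns:

```python
import hashlib

# Scalar character as stored in the original file
scalar = b'a'

# What DAP hands back instead: the character embedded in a fixed-length
# (here 64-byte), NUL-terminated buffer
padded = b'a' + b'\x00' * 63

digest_scalar = hashlib.md5(scalar).hexdigest()
digest_padded = hashlib.md5(padded).hexdigest()

print(digest_scalar)
print(digest_padded)
# The digests differ, so the MD5 check fails even though the
# underlying character is the same.
```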

D. NOT YET FIXED (NCO problem)
   Correctly read arrays of NC_STRING with embedded delimiters in
   ncatted arguments

   Demonstration:
   ncatted -D 5 -O -a new_string_att,att_var,c,sng,"list","of","str,ings" ~/nco/data/in_4.nc ~/foo.nc

   20130724: Verified problem still exists
   TODO nco1102
   Cause: NCO parsing of ncatted arguments is not yet sophisticated
   enough to handle arrays of NC_STRINGS with embedded delimiters.
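
   This is the classic quoting problem: a naive split on ',' cannot
   tell the delimiter between array elements from a comma inside a
   quoted element. A small Python sketch (illustrative only, not NCO's
   actual parser) contrasts the two approaches using the stdlib csv
   module on the argument from the demonstration above:

```python
import csv
import io

arg = '"list","of","str,ings"'

# Naive split: treats every comma as a delimiter, yielding 4 tokens
naive = arg.split(',')

# Quote-aware split: honors the double quotes, yielding the intended 3 strings
quote_aware = next(csv.reader(io.StringIO(arg), quotechar='"'))

print(naive)        # ['"list"', '"of"', '"str', 'ings"']
print(quote_aware)  # ['list', 'of', 'str,ings']
```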

"Sticky" reminders:

A. Pre-built, up-to-date Debian Sid & Ubuntu packages:
   http://nco.sf.net#debian

B. Pre-built Fedora and CentOS RPMs:
   http://nco.sf.net#rpm

C. Pre-built Windows (native) and Cygwin binaries:
   http://nco.sf.net#windows

D. Pre-built AIX binaries:
   http://nco.sf.net#aix

E. Did you try SWAMP (Script Workflow Analysis for MultiProcessing)?
   SWAMP efficiently schedules/executes NCO scripts on remote servers:

   http://swamp.googlecode.com

   SWAMP also works with command-line analysis scripts that use operators besides NCO's.
   If you must transfer lots of data from a server to your client
   before you analyze it, then SWAMP will likely speed things up.

F. NCO support for netCDF4 features is tracked at

   http://nco.sf.net/nco.html#nco4

   NCO supports netCDF4 atomic data types, compression, chunking, and
   groups.

G. Reminder that ncks, ncecat, ncbo, ncflint, and ncpdq work on many common
   HDF5 datasets, e.g.,
   NASA AURA HIRDLS HDF-EOS5
   NASA ICESat GLAS HDF5
   NASA SBUV HDF5...

-- 
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(


