Re: netCDF time representation

>The requirement to store time to more than 53 bits of precision is separate
>from data density over that time span.

I'm not sure what your point is, but I'll assume it has something to
do with the following paragraph.

>For our purposes where in principle a double precision value is adequate
>for storing time, the only problem is the reliable conversion to
>and from integer and double precision values on various architectures
>without losing a least significant bit.  For purposes which require greater
>precision, no alternative in the present scheme exists.

Because the double-precision variable would contain integral values
(e.g., time in milliseconds), there would be no loss of precision.
Unless I'm mistaken, IEEE arithmetic guarantees this: a double's
53-bit significand represents every integer up to 2^53 exactly.

I'm not aware of any architecture on which this would be a problem --
even the non-IEEE ones.  The above works on a Sun 3, Sun Sparc, MIPS
chip, VAX, RS-6000, Cray, IBM PC, and PS/2.  Is anyone aware of a
platform on which it doesn't work?

>If performance considerations are paramount, netCDF is already a non-optimal
>solution.  My convention has no impact on those who do not wish to write
>applications that work with base variables.

If this were adopted as a standard convention, then it would seem that
anyone wishing to write a generic program or wanting to use their
program on a foreign dataset would be required to handle base
variables.  (A "base variable" is what I have been calling a
"multi-component scalar".  I'll defer to the originator of the concept
and use the term "base variable" henceforth.)
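For concreteness, one way a base variable for extended-range time
might be laid out is sketched below in CDL.  The component names and
the `component_of` attribute are purely hypothetical on my part; they
are not an established convention, just an illustration of the extra
structure a generic program would have to recognize:

```
netcdf base_variable_sketch {
dimensions:
    time = UNLIMITED ;
variables:
    // hypothetical two-component scalar: whole days plus
    // milliseconds within the day
    long time_day(time) ;
    long time_msec(time) ;
    time_day:component_of  = "time" ;
    time_msec:component_of = "time" ;
}
```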

If your proposal is *not* intended as a standard convention, then we
are still left with the questions of how to represent extended-range,
high-resolution scalars in netCDF objects and how to automatically
handle them in generic programs.  Even if we wrote conversion routines
so that generic netCDF programs (which were written assuming scalar
values) could handle datasets containing base variables, what would we
convert the base variables to?

>...  Such
>applications already must have separate logic paths in dealing with
>the present set of primitives.  This is just an additional logic path.

Agreed.  Unfortunately, it is an additional logic path that would have
to be implemented for every data variable a generic program might
handle (and every time it handles one).  This might be reasonable
(though non-trivial) in C++, but I believe it is too much to ask of
programs written in C and Fortran.  Having written a generic,
polymorphic netCDF program in C, I have no desire whatsoever to
increase the number of datatypes it must handle.

EDITORIAL-MODE ON

To forestall complaints about wasted bandwidth from uninterested
readers, let me express my belief that these matters are important.
There is so much interest in the upcoming suite of generic netCDF
programs that it behooves us to get them right.  (After all, you'll
probably end up using them.)

If you tire of this discussion, then, by all means, delete before
reading (or, better yet, obtain procmail(1) for automatic filtering).

EDITORIAL-MODE OFF

Regards,
Steve Emmerson           <steve@xxxxxxxxxxxxxxxx>

