Greetings,
(Daniel Packman <pack@xxxxxxxxxxxx>, one of the netCDF mailing-list
subscribers, has an alternative proposal for handling time. Since this
subject is of general interest, I decided to respond to his message via
the mailing list. --Steve)
>Encoding an ASCII offset into the "units" attribute is only mildly irritating.
>As you say, you are assuming "Western Calendar notation". I would prefer
>having a standard set of possible numeric binary encodings (either a series
>of nc_short numbers or nc_long) in another attribute such as "origin".
>I believe such a binary standard would allow more straightforward programming
>and would decrease the likelihood of datasets with non-standard and poorly
>formed offsets.
Possibly, but I don't think so. It is easier for me to notice a small
mistake in a formatted string than in a binary number. Also, since
netCDFs would, for the most part, be generated by programs, it seems to
me that both forms would be subject to equal abuse.
Moreover, a binary convention would only be valid as long as the
agreed-upon origin for time was adhered to. If a netCDF didn't follow
that convention, the user might not notice. With a formatted
specification, by contrast, there is no standard time origin: each
netCDF is free to choose an appropriate one, and the choice is
documented.
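To make the contrast concrete, here is a minimal sketch of the
formatted approach, written against the netCDF C interface (the file
name, variable name, and unit string are illustrative choices, not a
mandated convention):

    #include <string.h>
    #include <netcdf.h>

    int
    main(void)
    {
        int ncid, time_dim, time_var;
        /* The origin is spelled out in the attribute itself, so a
         * reader needs no out-of-band agreement to interpret it. */
        static const char units[] =
            "seconds since 1992-10-08 00:00:00 UTC";

        nc_create("example.nc", NC_CLOBBER, &ncid);
        nc_def_dim(ncid, "time", NC_UNLIMITED, &time_dim);
        nc_def_var(ncid, "time", NC_DOUBLE, 1, &time_dim, &time_var);
        nc_put_att_text(ncid, time_var, "units", strlen(units), units);
        nc_enddef(ncid);
        nc_close(ncid);
        return 0;
    }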
>But, the fatal flaw in all this is the terribly limited singly-dimensioned
>variable allowed for time. Those of us who wish to store milliseconds
>from an epoch are limited to about a fifty-day dataset. From a strictly
>mathematical standpoint we will never need the millisecond resolution for
>long datasets. From a dataset standpoint, we need to store the bits that
>come in to do absolute time comparisons.
Do you need millisecond resolution for more than fifty days, or were
milliseconds chosen because they were convenient for some other reason?
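(For reference, the fifty-day figure follows directly from the range of
a 32-bit signed integer; a quick check, assuming a single nc_long holds
milliseconds from the epoch:)

    #include <stdio.h>

    int
    main(void)
    {
        /* A signed 32-bit integer spans -2^31 .. 2^31 - 1,
         * i.e. 2^32 distinct millisecond values. */
        const double int32_span = 4294967296.0;
        const double ms_per_day = 1000.0 * 60 * 60 * 24;

        /* Prints roughly 49.7 days, hence "about fifty days". */
        printf("%.1f days\n", int32_span / ms_per_day);
        return 0;
    }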
>With the current set of primitives, we could store our time variable as
>a double. If we strictly interpreted it as a double, then we still could
>not guarantee exact millisecond accuracy when converting to/from double
>from/to integers [it would *probably* work]. If we had an explicit 64
>bit integer type, we could use this as straight milliseconds from an
>offset but manipulations in the context of 32 bit machines would be awkward.
>Using two nc_long values fits the bill for us and at least one other site
>which uses exactly the same convention as ours (Julian Day minus 12 hours,
>millisecond of day). This line of reasoning leads directly to the idea
>of base arithmetic and a series of integer variables.
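(For concreteness, a rough sketch of how two such nc_long values might
be compared without any loss of precision; the field layout follows the
Julian-Day-minus-12-hours/millisecond-of-day convention described
above, and the names are illustrative:)

    /* Absolute time as two 32-bit integers.  Comparison is exact,
     * with no round-trip through floating point. */
    struct abs_time {
        long jday;   /* Julian Day minus 12 hours */
        long msec;   /* millisecond of day, 0 .. 86399999 */
    };

    int
    abs_time_cmp(const struct abs_time *a, const struct abs_time *b)
    {
        if (a->jday != b->jday)
            return a->jday < b->jday ? -1 : 1;
        if (a->msec != b->msec)
            return a->msec < b->msec ? -1 : 1;
        return 0;
    }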
Unfortunately, the proposal is unlikely to gain wide acceptance without
a sample implementation of a netCDF library that uses base arithmetic
and base vectors, so that all the ramifications can be seen.
Regards,
Steve Emmerson <steve@xxxxxxxxxxxxxxxx>