Hi,
I have a custom IOSP that I am trying to use with dataset aggregation. It
works when the "location" attribute points at local files, but my IOSP
can also read data via URLs: it bypasses the "location" (which I set to
/dev/null) and loads the data (e.g. from an FTP URL) into a Variable cache.
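For context, the IOSP's open() does something roughly like this. It's a
simplified sketch with placeholder names and sizes, and the actual URL
fetching is elided:

import ucar.ma2.Array;
import ucar.ma2.DataType;
import ucar.ma2.Section;
import ucar.nc2.Dimension;
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;
import ucar.nc2.iosp.AbstractIOServiceProvider;
import ucar.nc2.util.CancelTask;
import ucar.unidata.io.RandomAccessFile;

// Simplified sketch of my IOSP: the RandomAccessFile/"location" is ignored,
// the data comes from a URL instead (faked here), and everything is stored
// in each Variable's cache so readData() is never needed.
public class MyUrlIosp extends AbstractIOServiceProvider {

  public boolean isValidFile(RandomAccessFile raf) {
    return true; // the real code checks for my URL scheme / magic number
  }

  public void open(RandomAccessFile raf, NetcdfFile ncfile, CancelTask cancelTask)
      throws java.io.IOException {
    ncfile.addDimension(null, new Dimension("time", 10));

    Variable time = new Variable(ncfile, null, null, "time");
    time.setDataType(DataType.DOUBLE);
    time.setDimensions("time");
    ncfile.addVariable(null, time);

    Variable value = new Variable(ncfile, null, null, "value");
    value.setDataType(DataType.DOUBLE);
    value.setDimensions("time");
    ncfile.addVariable(null, value);

    // The real IOSP fills these arrays from an FTP URL.
    time.setCachedData(Array.factory(DataType.DOUBLE, new int[] {10}), false);
    value.setCachedData(Array.factory(DataType.DOUBLE, new int[] {10}), false);

    ncfile.finish();
  }

  public Array readData(Variable v2, Section section) {
    // Not reached for cached variables.
    throw new IllegalStateException("all data is cached in open()");
  }

  public void close() throws java.io.IOException {
  }

  public String getFileTypeId() { return "myUrlFormat"; }
  public String getFileTypeDescription() { return "illustrative URL-backed format"; }
}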
I have a trivial test that uses joinExisting with two files, each
containing two variables: time and value. The result aggregates the
values correctly, but the times from the first file are repeated for the
second half of the aggregation.
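In sketch form, the test amounts to something like the following (the
NcML, file names, and class names are placeholders rather than my exact
setup; in the real test the member locations are the /dev/null trick
described above):

import java.io.FileWriter;
import ucar.ma2.Array;
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;
import ucar.nc2.dataset.NetcdfDataset;

// Rough reconstruction of the failing test, just to show its shape.
public class JoinExistingTest {
  public static void main(String[] args) throws Exception {
    // Make the custom IOSP available to the library.
    NetcdfFile.registerIOProvider(MyUrlIosp.class);

    // joinExisting over two member datasets, each with time and value.
    String ncml =
        "<netcdf xmlns='http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2'>\n"
      + "  <aggregation dimName='time' type='joinExisting'>\n"
      + "    <netcdf location='file1.dat'/>\n"
      + "    <netcdf location='file2.dat'/>\n"
      + "  </aggregation>\n"
      + "</netcdf>\n";
    FileWriter w = new FileWriter("agg.ncml");
    w.write(ncml);
    w.close();

    NetcdfDataset ds = NetcdfDataset.openDataset("agg.ncml");
    try {
      Variable time = ds.findVariable("time");
      Array times = time.read();
      // Expected: file1's times followed by file2's times.
      // Observed: file1's times repeated for the second half.
      for (int i = 0; i < times.getSize(); i++)
        System.out.println(times.getDouble(i));
    } finally {
      ds.close();
    }
  }
}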
My first point of confusion is that open() gets called on my IOSP THREE
times. (If I don't call initNetcdfFileCache, open gets called 9 times.)
Whether the first call reads the first or second file appears to be
random, and then that same file gets read again. Even the working test
that uses real "location" files has the third read, though. That
redundant read seems indicative of bigger problems.
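For reference, the cache setup I mean is just the standard call, made
once before opening the aggregation (the numbers below are placeholders,
not necessarily what my test uses):

import ucar.nc2.dataset.NetcdfDataset;

public class CacheSetup {
  public static void main(String[] args) {
    // min elements, max elements, cleanup period in seconds (placeholder values).
    NetcdfDataset.initNetcdfFileCache(10, 100, 300);
  }
}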
Somewhere in the roughly 20 stack frames between my read call and my IOSP
being invoked (around AggregationExisting.buildNetcdfDataset), I see
something about a "typical dataset". That seems to be the source of the
extra read, and it's where I gave up so my head wouldn't explode. But I
am no closer to understanding the incorrect time values.
Am I barking up the right tree? Should I expect my custom IOSP to work
with aggregation?
Thanks,
Doug
P.S. I'm using netCDF-Java 4.2.