Pauline Mak wrote:
I'm looking at slide 6 at the moment and have a question... How does it
deal with datasets that are continually updated? For example, we update
the Argo dataset on a weekly basis through rsync. The NcML files will
need to be updated. Furthermore, this will introduce a lag between the
content of the file and the NcML file. I'm more in favour of generating
the data-dependent figures from the file itself... (bad for performance,
but at least the metadata will always be relevant.)
The approach we take on our server has two parts:
1) In the NcML aggregation you can set the recheckEvery attribute (if you
are using scan), which makes the server automatically check for newly
added files (see the sketch at the end of this message).
2) In the THREDDS timeCoverage metadata element, you can use a relative
end time such as:
<timeCoverage>
  <start>1999-11-16T12:00:00</start>
  <end>present</end>
</timeCoverage>
I'm not sure if this works with your use of generated NcML.
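
For reference, here is a minimal sketch of point 1. The location, suffix,
and time dimension name are assumptions about the Argo layout, not taken
from Pauline's setup:

<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
  <!-- Re-scan the directory periodically so files added by rsync are picked up -->
  <aggregation dimName="time" type="joinExisting" recheckEvery="1 hour">
    <!-- location and suffix are hypothetical; point them at the Argo files -->
    <scan location="/data/argo/" suffix=".nc" subdirs="true"/>
  </aggregation>
</netcdf>

With recheckEvery set, the aggregation's time coordinate stays in sync with
the files on disk without regenerating the NcML by hand.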