
Re: Dealing with large archives



Ethan Davis wrote:

Hi Tennessee,

Tennessee Leeuwenburg wrote:

Secondly:

I am trying to work out how to structure my data by date. I will have a number of data sets (NWP models) that will be updated daily, or even multiple times per day. Quite quickly I will reach the point where I have hundreds of data sets published. Even a week's worth of data at 2 per day across 3 sources is 42 data sets.

I have two tasks - one is to automate the updating of the configuration files so that new data sets get incorporated as they become available, and the other is to structure the data pages in a sensible way for users to access.


The THREDDS catalog generation tool can automate generation of catalogs, but it does not generate aggregation server config files. Actually, it can generate the parts that aren't aggregations, i.e., the plain THREDDS catalog parts of the config file. I've always wanted to extend it to handle the aggregation part of the aggServer config but have never gotten around to doing so.

We're currently working on the next release of the THREDDS server. The OPeNDAP netCDF server side of that should be quite a bit easier to configure (e.g., give it a directory and it serves all the files in that directory that match a certain pattern). The configuration for the aggregation part of the server is still up in the air, but it will very likely be different from the current configuration syntax. This should get ironed out in the next 3-6 months. In the meantime, you might take a look at the catalog generator (http://www.unidata.ucar.edu/projects/THREDDS/tech/cataloggen/index.html) and see if that helps any.
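The "give it a directory and a pattern" idea above could be sketched roughly as follows. This is only an illustrative sketch of the concept, not actual THREDDS server code or configuration; the directory name and glob pattern are made-up examples.

```python
# Sketch of pattern-matched directory scanning: list a directory and
# keep only the file names matching a glob-style pattern, as a server
# might do when told "serve everything in this directory matching *.nc".
import fnmatch
import os


def scan_directory(directory, pattern):
    """Return the sorted file names in `directory` matching `pattern`."""
    return sorted(
        name for name in os.listdir(directory)
        if fnmatch.fnmatch(name, pattern)
    )
```

A server could rebuild its list of served data sets by rerunning such a scan whenever new model runs arrive, rather than requiring each file to be listed in the config by hand.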

Do you reckon you could add "all the files in a directory listing from a web server", not just filesystems?
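By way of illustration, scanning a web server's auto-generated index page is roughly the same idea as the filesystem scan, except the "listing" is the set of links in an HTML page. The sketch below is hypothetical (it is not a THREDDS feature) and assumes an Apache-style directory index where each file appears as an href.

```python
# Hypothetical sketch: extract the <a href="..."> links from an HTML
# directory listing and filter them by a glob-style pattern, mirroring
# what a filesystem scan does with os.listdir().
import fnmatch
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect the href targets of all <a> tags in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def files_in_listing(index_html, pattern):
    """Return the hrefs in an HTML directory listing matching `pattern`."""
    parser = LinkExtractor()
    parser.feed(index_html)
    return sorted(h for h in parser.links if fnmatch.fnmatch(h, pattern))
```

In practice the index HTML would be fetched over HTTP first; the parsing and filtering step is the part shown here.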

I'll look into the generation tool - somehow I hadn't thought of that yet :) I guess it's just early days...

Cheers,
-T