Ethan,
Well, that got me just where NcML aggregation got me: an aggregation
that does not notice new timesteps added to the latest file. It also
created two new time-like variables (time_offset and time_run) and
threw away most of the metadata I had for the time variable. My only
reason for using "Latest" instead of letting it default to "Penultimate"
was in the forlorn hope of getting my second value of the attribute
time:actual_range picked up.
I am still getting the same error messages from the XML checker
that TDS runs on its configuration files. I wonder if I'm ever
going to hear back about this difference that makes a difference
between the published XSDs and the online documentation. Here are
the error messages:
[2012-03-29T19:16:15GMT]
readCatalog(): full path=/usr/share/tomcat5/content/thredds/catalog.xml;
path=catalog.xml
readCatalog(): valid catalog -- ----Catalog Validation version 1.0.01
*** XML parser error (36:14)= cvc-complex-type.2.4.a: Invalid content
was found starting with element 'filter'. One of
'{"http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0":addLatest,
"http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0":addProxies,
"http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0":addDatasetSize,
"http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0":addTimeCoverage}'
is expected.
*** XML parser error (54:50)= cvc-complex-type.2.4.a: Invalid content
was found starting with element 'update'. One of
'{"http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0":fmrcConfig,
"http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0":pointConfig,
"http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2":netcdf}' is
expected.
readCatalog(): full
path=/usr/share/tomcat5/content/thredds/enhancedCatalog.xml;
path=enhancedCatalog.xml
readCatalog(): valid catalog -- ----Catalog Validation version 1.0.01
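If I read the second error literally, the only elements the schema will
accept where my "update" sits are fmrcConfig, pointConfig, or netcdf,
which I take to mean update has to appear earlier in the sequence than
protoDataset. My best guess at a reordering of my earlier attempt
(untested) would be:
<featureCollection name="SST_NOAA_OISST_V2_HighResFC" featureType="FMRC"
                   harvest="true" path="Datasets/aggro/OISSThires.nc">
  <collection spec="/Datasets/noaa.oisst.v2.highres/sst.day.mean.#yyyy#.v2.nc$"
              name="SST_OISST_V2_HighResFC" olderThan="15 min" />
  <!-- guess: update moved ahead of protoDataset, per the parser's expected-element list -->
  <update startup="true" rescan="0 0 * * * ? *" />
  <protoDataset choice="Latest" change="0 0 7 * * ? *" />
</featureCollection>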
-Hoop
------ original message --------------
Hi Hoop,
Try adding the following to your featureCollection element
<metadata inherited="true">
<serviceName>all</serviceName>
</metadata>
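If your catalog doesn't already define a compound service named "all",
you'd need to add one at the catalog level. A rough sketch (the member
service names and base paths here are placeholders; match them to the
services your catalog actually defines):
<!-- placeholder compound service; rename members to match your setup -->
<service name="all" serviceType="compound" base="">
  <service name="odap" serviceType="OPENDAP" base="/thredds/dodsC/" />
  <service name="http" serviceType="HTTPServer" base="/thredds/fileServer/" />
  <service name="wcs" serviceType="WCS" base="/thredds/wcs/" />
</service>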
Also, since your most recent dataset is the one that is changing, you
might want to change protoDataset@choice from "Latest" to "Penultimate"
(which is the default, so you could just drop protoDataset altogether).
Since the data files in your dataset don't age off, it probably isn't too
important which dataset is used, but it is better not to use the one that
gets updated. The protoDataset is used to populate the metadata in the
feature dataset.
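In other words, something like this (same change schedule you already
have, with the default choice spelled out; dropping the element entirely
would behave the same):
<protoDataset choice="Penultimate" change="0 0 7 * * ? *" />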
Since your datasets are a simple time series rather than a full-blown
FMRC, you will probably want to add
<fmrcConfig datasetTypes="Best"/>
The fmrcConfig@datasetTypes value tells the featureCollection which
types of FMRC datasets to create. With the value "Best", the forecast
types are left off and only the "Best Time Series" dataset is created.
Not the best dataset name for a simple time series grid (it's not just
the best time series, it's the only one!) but that's what we have for the
moment. If you want to let people see the underlying files, you could
add "Files" to the fmrcConfig@datasetTypes value.
I'm including the link to the FeatureCollection tutorial [1] which I
forgot to point out in an earlier email when I gave you the link to the
reference docs [2].
Hope that helps,
Ethan
[1]
http://www.unidata.ucar.edu/projects/THREDDS/tech/tds4.2/tutorial/FeatureCollectionsTutorial.html
[2]
http://www.unidata.ucar.edu/projects/THREDDS/tech/tds4.2/reference/collections/FeatureCollections.html
On 3/26/2012 11:13 AM, Hoop wrote:
> Ethan,
>
> The catalog is attached. The filter element is in a datasetScan
> element that we use to generically wrap our NetCDF files, and
> not included within the featureCollection element or any other
> aggregation element. It is meant to generally apply throughout our
> installation.
>
> Sample files may be obtained from:
>
> ftp://ftp.cdc.noaa.gov/Datasets/noaa.oisst.v2.highres/
> The files for this year are updated on a daily basis, barring
> problems.
>
> Let me know what else I can do to help.
>
> -Hoop
>
> On 03/24/12 23:02, thredds-request@xxxxxxxxxxxxxxxx wrote:
>> Date: Sat, 24 Mar 2012 23:02:53 -0600
>> From: Ethan Davis <edavis@xxxxxxxxxxxxxxxx>
>> To: thredds@xxxxxxxxxxxxxxxx
>> Subject: Re: [thredds] Pb between OpenDap and THREDDS when netcdf file
>> are modifed
>>
>> Hi Hoop,
>>
>> Can you send us (or point us to) a few sample files and send us your
>> full catalog?
>>
>> Is the filter you mention below part of your featureCollection element?
>>
>> Ethan
>>
>> On 3/9/2012 1:59 PM, Hoop wrote:
>>> Ethan,
>>>
>>> I don't believe John ever responded as you had requested.
>>> I did my best to try "featureCollection", but I got nowhere.
>>> It doesn't help that the XSDs specify required elements
>>> (for "update" and "filter") that are not mentioned in the
>>> online documentation; the validation process that TDS runs
>>> at start-up informed me of those errors. I have no clue how
>>> to correct them. Here is the attempt I made:
>>>
>>> <featureCollection name="SST_NOAA_OISST_V2_HighResFC" featureType="FMRC"
>>>                    harvest="true" path="Datasets/aggro/OISSThires.nc">
>>>   <collection spec="/Datasets/noaa.oisst.v2.highres/sst.day.mean.#yyyy#.v2.nc$"
>>>               name="SST_OISST_V2_HighResFC" olderThan="15 min" />
>>>   <protoDataset choice="Latest" change="0 0 7 * * ? *" />
>>>   <update startup="true" rescan="0 0 * * * ? *" />
>>> </featureCollection>
>>>
>>> My use of "filter" is as follows:
>>>
>>> <filter>
>>> <include wildcard="*.nc"/>
>>> <exclude wildcard="*.data"/>
>>> <exclude wildcard="*.f"/>
>>> <exclude wildcard="*.gbx"/>
>>> <exclude wildcard="*.txt"/>
>>> <exclude wildcard="README"/>
>>> </filter>
>>>
>>> Someone want to tell me what I did wrong in each case?
>>>
>>> Thanks,
>>> -Hoop
>>>
>>>> -------- Original Message --------
>>>> Subject: Re: [thredds] Pb between OpenDap and THREDDS when netcdf file
>>>> are modifed
>>>> Date: Thu, 23 Feb 2012 22:03:38 -0700
>>>> From: Ethan Davis <edavis@xxxxxxxxxxxxxxxx>
>>>> To: thredds@xxxxxxxxxxxxxxxx
>>>>
>>>> Hi Hoop,
>>>>
>>>> The dynamic dataset handling in the NcML aggregation code was designed
>>>> to deal with the appearance of new datasets more than data being
>>>> appended to existing datasets. The NcML aggregations are also limited to
>>>> straightforward aggregations based on homogeneity of dimensions and
>>>> coordinate variables; they don't use any coordinate system or higher
>>>> level feature information that might be available. This makes straight
>>>> NcML aggregation somewhat fragile and hard to generalize to more complex
>>>> situations.
>>>>
>>>> FeatureCollections are designed to use the CDM's understanding of
>>>> coordinate systems and feature types to both simplify configuration and
>>>> make aggregations more robust and general.
>>>>
>>>> While the FMRC collection capability was designed for a time series of
>>>> forecast runs, I believe it should handle a simple time series of grids
>>>> as well. (John, can you add more information on this?)
>>>>
>>>> Ethan
>>>>
>>>> On 2/23/2012 3:21 PM, Hoop wrote:
>>>>> Ethan,
>>>>>
>>>>> This reminds me of an issue we are having, with version 4.2.7.
>>>>> Here is the relevant snippet from our config:
>>>>> <dataset name="SST NOAA OISST V2 HighRes" ID="SST_OISST_V2_HighRes"
>>>>> urlPath="Datasets/aggro/OISSThires.nc" serviceName="odap"
>>>>> dataType="grid">
>>>>> <netcdf
>>>>> xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
>>>>> <aggregation dimName="time" type="joinExisting" recheckEvery="15
>>>>> min">
>>>>> <scan location="/Projects/Datasets/noaa.oisst.v2.highres/"
>>>>> regExp="sst\.day\.mean\.....\.v2\.nc$" subdirs="false"/>
>>>>> </aggregation>
>>>>> </netcdf>
>>>>> </dataset>
>>>>>
>>>>> The behavior we are getting in our time series, which is based on
>>>>> NetCDF files with a year's worth of time steps (or less), is as follows:
>>>>> In between re-boots of Tomcat, new time steps added to the latest file
>>>>> are not added to the aggregation. However, if the calendar marches along
>>>>> and a new file for a new year is added to our archive without rebooting
>>>>> Tomcat, the timesteps for the new file are added, without the ones that
>>>>> would complete the previous year, resulting in a discontinuity along the
>>>>> time axis. And someone somewhere may e-mail us complaining that our
>>>>> OPeNDAP object is not CF-compliant because the time steps aren't all of
>>>>> the same size. %}
>>>>>
>>>>> I looked at the featureCollection documentation link you gave, but since
>>>>> our data are not forecasts, nor point data, nor in GRIB2 format, that
>>>>> didn't seem the right fit. Maybe I'm wrong; I'm severely sleep-deprived
>>>>> right now....
>>>>>
>>>>> We also have some time series in monthly files (to keep the individual
>>>>> file size under 2 Gbytes). We have not tried aggregating any of those
>>>>> time series. Could be an interesting challenge.
>>>>>
>>>>> Thanks for any help.
>>>>>
>>>>> -Hoop
>>>>>
>>>>> On 02/23/12 14:23, thredds-request@xxxxxxxxxxxxxxxx wrote:
>>>>
>>>>> Ethan Davis wrote:
>>>>>>
>>>>>> Hi Claude,
>>>>>>
>>>>>> The version of the TDS running at http://web.aria.fr:443/thredds/ is
>>>>>> several years old (Version 4.0.26 - 20090831.2140). The current stable
>>>>>> release is 4.2.9 (20111108.1758). You should probably upgrade as there
>>>>>> have been lots of improvements.
>>>>>>
>>>>>> The TDS does some dataset caching which can greatly improve the
>>>>>> performance for static datasets but causes some problems for dynamic
>>>>>> datasets. The datasetScan configuration construct you are using does not
>>>>>> deal specifically with dynamic datasets. A more recently introduced (in
>>>>>> TDS 4.2) configuration construct, featureCollection, can deal with
>>>>>> dynamic datasets. Here's a link to the featureCollection documentation:
>>>>>>
>>>>>>> http://www.unidata.ucar.edu/projects/THREDDS/tech/tds4.2/reference/collections/FeatureCollections.html
>>>>>>
>>>>>> Hope that helps,
>>>>>>
>>>>>> Ethan
>>>>>>
>>>>>> On 2/22/2012 8:20 AM, Claude DEROGNAT wrote:
>>>>>>> I have used OpenDap and the THREDDS server to provide netcdf data for
>>>>>>> a long time.
>>>>>>> I currently develop a system that will model plumes in real time.
>>>>>>> The model runs every 30 minutes. At the beginning of each day, it
>>>>>>> creates a result file then every 30 minutes the result file is
>>>>>>> overwritten with a new one containing the additional time frame.
>>>>>>>
>>>>>>> I observed a strange behavior between Opendap and THREDDS in this case:
>>>>>>> - The file is continuously updated in the OpenDap server.
>>>>>>> - The THREDDS catalog shows in the 'Dates' field that the
>>>>>>> file has been updated (modified),
>>>>>>> - but the Access/OpenDap target file is not modified and the
>>>>>>> available time frames have stayed the same since the last Tomcat
>>>>>>> reboot. Do I have to make any modification in the THREDDS
>>>>>>> configuration? I have attached my threddsConfig.xml file...
>>>>>>>
>>>>>>> Regards
>>>>>>>
>>>>>>> Ing. Claude DEROGNAT, PhD
>>>>>>
>>>>>> Claude also wrote:
>>>>>>> My IT staff said that the file I sent you is an old one. My problem
>>>>>>> is probably linked to my THREDDS catalog, so the question probably
>>>>>>> belongs on the THREDDS mailing list. I sent them my question on
>>>>>>> Monday, just after your message, and I still have no response back ...
>>>>>>>
>>>>>>> So, I am sending you my catalog; if you have a look at it you may
>>>>>>> find why there is a mismatch between
>>>>>>>
>>>>>>> http://web.aria.fr:443/thredds/dodsC/CHIMERE@CAMAC@reference@p02/CF_CHIMERE_20120115_d03.nc.html
>>>>>>> and
>>>>>>> http://web.aria.fr:443/LENVIS/CHIMERE/CAMAC/reference/p02/CF_CHIMERE_20120115_d03.nc.html
>>>>>>>
>>>>>>> For instance,
>>>>>>>
>>>>>>> why the second one is continuously updated but doesn't allow access
>>>>>>> to the whole set of variables stored in the NetCDF file,
>>>>>>>
>>>>>>> and why the first one is not continuously updated, even though the
>>>>>>> date in the THREDDS catalog presentation always shows the right
>>>>>>> modified time ...
>>>>>>
>>>>>> catalog.xml:
>>>>>>> [snip]
>>>>>>> <service name="multiService" base="" serviceType="compound">
>>>>>>> <service name="ncdods" serviceType="OPENDAP" base="/thredds/dodsC/"
>>>>>>> />
>>>>>>> <service name="httpService" serviceType="HTTPServer"
>>>>>>> base="/thredds/fileServer/" />
>>>>>>> <service name="wcsService" serviceType="WCS" base="/thredds/wcs/" />
>>>>>>> </service>
>>>>>>> [snip]
>>>>>>> <datasetScan name="CF_CHIMERE_@YYYYMMDD@_d01.nc"
>>>>>>> ID="/LENVIS/CHIMERE/CAMAC/reference/p00/dataset"
>>>>>>> path="CHIMERE@CAMAC@reference@p00"
>>>>>>> location="/data/nc/LENVIS/CHIMERE/CAMAC/reference/p00"
>>>>>>> harvest="true">
>>>>>>> <sort>
>>>>>>> <lexigraphicByName increasing="false"/>
>>>>>>> </sort>
>>>>>>> </datasetScan>
>>>>>>> [snip]