NOTE: The galeon mailing list is no longer active. The list archives are made available for historical reasons.
Hi all,

Many important topics have come up in these discussions. In this note, I'm going to try to confine my remarks to the OPeNDAP question.

Within the WCS, the JPEG2000 community has proposed both a JPEG2000 extension standard and one for JPIP. I am not an expert on JPEG or JPIP, but my understanding is that JPEG2000 is similar to our CF-netCDF extension in that it is an encoding specification, whereas JPIP is actually an access protocol and, as such, is similar to OPeNDAP. I had been thinking of that dual approach as a model for what we might eventually do for netCDF and OPeNDAP in the WCS world. But my impression is that others think we should abandon the CF-netCDF encoding specification and propose ONLY CF-OPeNDAP. Is that the heart of the suggestion on the table, as far as the OPeNDAP part of the discussion is concerned?

-- Ben

On Wed, Sep 24, 2008 at 4:57 AM, Jon Blower <j.d.blower@xxxxxxxxxxxxx> wrote:
Hi Ben,

> The idea is that, with CF-netCDF as a WCS extension standard, any group
> that wants access to our FES (or metocean) data will have a standard,
> carefully-defined interface through which to access that data. The fact
> that it won't be part of the core is irrelevant.

I don't really see the difference between using (say) OPeNDAP and using an "extended" WCS. Both are likely to be foreign to a user that only understands "core" WCS. I admit that an extended WCS is likely to be closer to core WCS than OPeNDAP is. But OPeNDAP has a massive head start in terms of tooling, so the actual effort required to talk to an OPeNDAP server is not likely to be any greater than the effort required to talk to an extended WCS (in fact, I think talking to OPeNDAP will be considerably easier). I cite as an example Roy's Environmental Data Connector, which reads data from THREDDS servers (via OPeNDAP) into ArcGIS. It works, it already exists, and it does the job (apparently, anyway - I haven't tried it myself! ;-)

> To emphasize the point, I'll call your attention to the fact that no
> encoding formats will be part of the WCS core. They will all be
> extensions.

OK - but I'd like to explore this a little. The purpose of a standard is to reduce the total amount of code that needs to be developed and tested. To get data from any kind of web service, a client has to (1) formulate a request, and (2) understand the response. If the request syntax and semantics are always the same (in WCS core and all extensions), then we can save money by reusing the code needed to formulate the request. However, if the extensions define a modified syntax or semantics, we have to develop new code for each extension. I'm ignorant here - do you think that the request syntax will be modified by WCS extensions?

In terms of understanding the response (2), the core-plus-extension model hasn't helped at all.
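[Editor's note: to make the request-formulation step (1) above concrete, here is a hedged sketch of how a client might assemble a WCS GetCoverage request as a key-value-pair URL. The endpoint URL and coverage identifier are invented for illustration; the parameter names follow common WCS 1.x KVP usage and a real client would adapt them to the specific WCS version and extension in play.]

```python
from urllib.parse import urlencode

def build_getcoverage_url(endpoint, coverage_id, bbox, fmt):
    """Assemble a WCS GetCoverage request as a key-value-pair URL.

    Parameter names follow common WCS 1.x KVP conventions; this is a
    sketch, not a complete implementation of any one WCS version.
    """
    params = {
        "service": "WCS",
        "version": "1.1.0",
        "request": "GetCoverage",
        "identifier": coverage_id,
        "BoundingBox": ",".join(str(v) for v in bbox),
        "format": fmt,
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and coverage name, for illustration only.
url = build_getcoverage_url(
    "http://example.org/wcs",
    "sea_surface_temperature",
    (-10.0, 40.0, 5.0, 60.0),
    "application/x-netcdf",
)
print(url)
```

If extensions only add new output formats (new values for the format parameter), code like this is reused unchanged; if an extension alters the request syntax or adds parameters, each extension needs new client code.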
Every client will need the means to understand the file format coming from the extension in question - we are unable to reuse any code. (Also, note that if a client has the means to understand a netCDF file, it probably also has the means to talk to OPeNDAP.)

WMS has been (relatively) successful partly because it's an easier problem, but I would argue that a large part of its success is down to the fact that there is a smallish number of very widely-used image formats (PNG, GIF, JPEG) that almost everyone can interpret. In other words, the interoperability is largely enabled by its adoption of standard output formats. The same can't be said of WCS, and this worries me a bit. Formulating the request isn't usually the hard part - in my experience, it's usually much harder to understand a new file format.

I should conclude by stating that none of this is a criticism of Galeon - in fact, without Galeon we wouldn't be able to have any kind of informed conversation about this. I am more worried about the WCS world in general.

Best wishes,
Jon
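[Editor's note: the parenthetical claim above - that a netCDF-capable client can readily talk to OPeNDAP - rests on the two sharing a data model. A DAP2 server describes its variables in a textual DDS (Dataset Descriptor Structure) whose arrays-with-named-dimensions shape mirrors netCDF. A minimal sketch, using an invented sample dataset, of what a DDS looks like and how a client might parse it:]

```python
import re

# A DAP2 Dataset Descriptor Structure (DDS), as returned by an
# OPeNDAP server's ".dds" endpoint. This sample dataset is invented
# for illustration; real servers return the same general shape.
SAMPLE_DDS = """Dataset {
    Float32 sst[time = 12][lat = 180][lon = 360];
    Float64 time[time = 12];
} sample;
"""

def parse_dds(dds_text):
    """Extract variable names and dimension sizes from a simple DDS.

    Handles only flat arrays of atomic types, not nested Grid or
    Structure declarations - enough to show the netCDF-like model.
    """
    variables = {}
    decl = re.compile(r"^\s*\w+\s+(\w+)((?:\[\w+ = \d+\])*);", re.M)
    dim = re.compile(r"\[(\w+) = (\d+)\]")
    for name, dims in decl.findall(dds_text):
        variables[name] = {d: int(n) for d, n in dim.findall(dims)}
    return variables

vars_ = parse_dds(SAMPLE_DDS)
print(vars_["sst"])  # dimension names and sizes, netCDF-style
```

A client that already models data as named variables over named, sized dimensions (as any netCDF reader does) gets this structure essentially for free, which is why the tooling overlap between the two is so large.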