NOTE: The galeon mailing list is no longer active. The list archives are made available for historical reasons.
Hi John,

I don't think the exchange boils down to "simple versus complex" standards. As I've said in other emails, it boils down to return on investment. Sure, KML is a lousy format for most real data, but it's great for geobrowsing (which is what it's designed for). The point is that we shouldn't have to adopt a huge new infrastructure to fulfil simple requirements (and there's nothing to stop you from linking your KML to your more complex system for those users who want to drill down. Revealing complexity gradually is a Good Thing).

I don't know anything about SOS, but it seems to fill a gap in the market, which is great. It means that any progress in this area is likely to be fruitful. However, the case of WCS is different, because we already have two very well-established means of sharing gridded CF-NetCDF data that are robust and backed up with tools:

1) OPeNDAP

2) Simply putting CF-NetCDF files on a website (I'm not being flippant; a large number of customers don't need any more than this)

So we need to be very clear about what wins we get from WCS that we can't get already.

Jon
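To make Jon's two options concrete, here is a minimal sketch of reading a slice of a gridded CF-NetCDF dataset over OPeNDAP using the Python netCDF4 library. The endpoint URL and the variable name "sst" are hypothetical placeholders, not a real service; any OPeNDAP-enabled build of netCDF4 should behave the same way.

    # Minimal sketch: open a remote CF-NetCDF dataset via OPeNDAP and fetch a slice.
    # The URL is a hypothetical THREDDS/OPeNDAP endpoint, used purely for illustration.
    from netCDF4 import Dataset  # requires a netCDF4 build with OPeNDAP (DAP) support

    url = "http://example.org/thredds/dodsC/ocean/sst.nc"  # hypothetical endpoint
    ds = Dataset(url)  # opens the remote dataset; data are not downloaded yet

    sst = ds.variables["sst"]                     # assumed CF variable name
    print(sst.dimensions)                         # CF metadata travels with the data
    print(getattr(sst, "units", "no units set"))  # attributes may or may not be present
    subset = sst[0, 10:20, 10:20]                 # only this slice crosses the network

    ds.close()

The same Dataset() call also works on a local copy of a CF-NetCDF file fetched from an ordinary website, which is Jon's second option: the client code is identical either way.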
On Thu, Oct 9, 2008 at 6:15 AM, John Graybeal <graybeal@xxxxxxxxx> wrote:

Maybe I'm just being silly, but this feels like a fruitless exchange at some level. (So I might as well add my 2 cents too....) It's as if we were to say "sure, XML (or ASCII, for that matter) is nice and generic, but who can write a parser that deals with all those possible XML files?" (Well, a lot of people, it turns out -- but usually we narrow the field a little bit for each application domain.)

My experience of "easy to use" standards is that they tend to be useless (by themselves, anyway) for the purposes for which I want to develop data systems (namely, so that a computer program can find many kinds of things of interest and incorporate or process those things automatically -- see the Ocean Observing Initiative's Cyberinfrastructure Concept of Operations for a detailed use case of this). DIF, FGDC, KML -- they all have a lot of uptake, and they all provide a *certain level* of interoperable value, each in their own way. But each lacked specificity in some areas that were needed to provide computability at the level I want it. Just like XML, FGDC and KML are extensible, and you can build your own solutions on top of them and propose them as the interoperability winner. This has value in each case, though the pervasive lack of controlled vocabularies in FGDC, and the proprietary implementation environment of KML, were enough to make them not a winner for me. But hey, YMMV, that's fine.

With SensorML and O&M, I found a model that matched my own, was largely internally consistent and relatively robust under new applications, was more thought out than my own in some places, and already had a fair number of people interested. Yes, some practical refinement has been needed as we go forward, but nothing like the refinement I've had to do with many other standards. The computability and interoperability of the result across a wide range of system implementations seem considerably more refined than what I experienced with other standards. (Though maybe not as high as netCDF/CF, within its niche.) These are the things that the more complex standard offers.

My point is not that SOS wins; my point is that there inevitably will be tradeoffs between simplicity and functionality. You can always take the simple solution first, but by the time you graft on the capability this person and that person and the other person want, you will have something fairly equivalent to the complex standard that this thread seems to be dismissing. So I don't see that it's a meaningfully decidable discussion.

John