
Re: Suggested LDM updates



Hi Bob,

Some of us have been discussing your requests.  Let me address them
individually and then say something about our future plans.

> 1.  Change CDC checksum method
>
> Currently the CDC checksum, which is used in part to filter duplicate
> messages, works on only the data part of the message.  Paul Hamer suggests
> that the calculation include elements of the prod_info structure.  He
> talked with Glenn Davis about this and it was thought at the time to be a
> good idea.
>
> This would allow us to relabel the product and insert it back into the
> product queue.  One example of this is our retransmission of UPS ACARS
> data that is filtered in NIMBUS, relabeled and distributed to NCAR.
> We presently need two different machines/queues to do this.
>

I'm unclear about this need.  Would you explain this further?  Is it that you
want the same data product to follow two different paths?

Have you considered modifying the product for the purpose of redistributing
it?  You need only change or add one byte of the product.  For example, if
none of your downstream sites uses the sequence number of the product,
that's a field that could safely be modified.  The sequence number consists
of three bytes somewhere in the first, oh, dozen bytes of the product.  Let
me know if you want more info about this.
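
To make that concrete, here's a minimal sketch (not LDM source; it uses
OpenSSL's MD5 as a stand-in for whatever checksum your setup actually
computes, and the sequence-number offset is a made-up placeholder).  Because
the checksum covers only the data, flipping a single byte gives the
relabeled product a new checksum, so the duplicate filter treats it as a
distinct product:

    /* sketch.c: compile with "cc sketch.c -lcrypto" */
    #include <stdio.h>
    #include <openssl/md5.h>

    #define SEQNO_OFFSET 8  /* hypothetical; "somewhere in the first dozen bytes" */

    static void show(const unsigned char *buf, size_t len, const char *label)
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        MD5(buf, len, digest);               /* checksum over the data only */
        printf("%-9s ", label);
        for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        putchar('\n');
    }

    int main(void)
    {
        unsigned char product[64] = "HDR.SEQ...example product payload";

        show(product, sizeof product, "original");
        product[SEQNO_OFFSET] ^= 0x01;       /* flip one bit of one byte */
        show(product, sizeof product, "relabeled");
        return 0;
    }

You'd do something like this once, at the relabeling step in NIMBUS, rather
than in a standalone program like this one.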

> 2.  Product-specific "allow" mechanism
>
> I know that Joan and Amenda have mentioned this one ever since v5
> eliminated this capability...
>
> We have several proprietary data sets that we distribute via LDM to
> particular users.  Unfortunately, the relatively coarse allow-by-FeedType
> in v5 makes it difficult to control the data going out.  We would much
> prefer having the option of finer control over who gets what data by
> data identifier.

We're speculating about why this capability was taken out, but we don't know
for sure.  In thinking about it, it seems there would be a significant
performance issue (if not greater problems) in ensuring that a product
matches both what the downstream site is requesting and what the upstream
site is willing to send.  Maybe it was a problem if there was no
intersection between the two sets - some old documentation alludes to this
possibility.  If that test were done on every product, but only a small
number of products actually matched both patterns, it would be costly with
respect to performance.
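
Here's a rough illustration of the double test we have in mind (not LDM
code; POSIX extended regular expressions and the product identifiers are
just stand-ins).  The patterns can be compiled once, but both matches still
run for every single product, even when almost nothing satisfies both:

    /* match.c: the per-product test a product-specific "allow" implies */
    #include <regex.h>
    #include <stdio.h>

    /* A product goes out only if its identifier matches BOTH the
     * upstream site's allow pattern and the downstream site's request
     * pattern.  Compiling happens once, but both regexec() calls run
     * for every product, matched or not. */
    static int should_send(const regex_t *allow, const regex_t *request,
                           const char *ident)
    {
        return regexec(allow, ident, 0, NULL, 0) == 0
            && regexec(request, ident, 0, NULL, 0) == 0;
    }

    int main(void)
    {
        regex_t allow, request;   /* error checks omitted for brevity */
        regcomp(&allow, "^ACARS/UPS/", REG_EXTENDED | REG_NOSUB);
        regcomp(&request, "^ACARS/", REG_EXTENDED | REG_NOSUB);

        printf("%d\n", should_send(&allow, &request, "ACARS/UPS/KDEN")); /* 1 */
        printf("%d\n", should_send(&allow, &request, "ACARS/FDX/KMEM")); /* 0 */

        regfree(&allow);
        regfree(&request);
        return 0;
    }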

We were also speculating about a philosophical issue of trust, that is, if a
site can receive a particular feedtype, why shouldn't it receive all the
products of that feedtype?  However, it seems clear that being limited to a
very small number of feedtypes, so that a single feedtype actually
encompasses many different products, makes that philosophy difficult in
practice.  I assume that in your case you can't just rely on your downstream
sites to only request what they are allowed to receive.


For the next version of the LDM we are considering a pretty different
approach based on news server technology.  Users would subscribe to data,
which would be categorized according to an almost limitless hierarchy, so we
wouldn't be constrained by a handful of feedtypes like we are today.  With
lots of feedtypes we could define groups of products with a much finer
granularity.  Do you think that having a very large number of feedtypes
would address both of your problems?
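
To give a feel for it, here's a sketch (the hierarchy names are invented,
and shell-style glob matching just stands in for whatever subscription
syntax we might actually adopt):

    /* subscribe.c: hierarchical categories instead of flat feedtypes */
    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
        /* Invented hierarchical product identifiers, newsgroup style. */
        const char *products[] = {
            "obs/acars/ups/denver",
            "obs/acars/fdx/memphis",
            "model/eta/conus/500mb",
        };
        /* Subscriptions, from broad to narrow.  With FNM_PATHNAME each
         * '*' matches exactly one level of the hierarchy. */
        const char *subs[] = {
            "obs/*/*/*",         /* all observations        */
            "obs/acars/ups/*",   /* only the UPS ACARS data */
            "model/eta/*/*",     /* only Eta model products */
        };

        for (int s = 0; s < 3; s++)
            for (int p = 0; p < 3; p++)
                if (fnmatch(subs[s], products[p], FNM_PATHNAME) == 0)
                    printf("%-16s matches %s\n", subs[s], products[p]);
        return 0;
    }

Of course the actual naming scheme and subscription syntax are completely up
in the air at this point.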

Anne

--
***************************************************
Anne Wilson                     UCAR Unidata Program
address@hidden                  P.O. Box 3000
                                  Boulder, CO  80307
----------------------------------------------------
Unidata WWW server       http://www.unidata.ucar.edu/
****************************************************