Hi John,
So long as you don't break simple stateless requests, I'm happy with
whatever you choose in order to provide stateful behaviour as well -- I
can see the need for it now. The rest of this email contains my thoughts
about other ways to look at the problem.
I wonder why this is the case. What is the new file that gets added?
Could you explain to me exactly what is being added, and why new files
need to be added at all?
As for the deletion at 0Z, I would ask whether the request is for
"Latest" or for a specific date. I don't see why files for a specific
date would be removed, for example.
For the moment, though, I'll just assume you do need to do what you're
doing. I guess a session is the only way. Even so, not all clients are
going to be session-aware, so you might still need some way to handle
things on the server when people *do* ask for things without a session.
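To make that concrete, here's a rough sketch of the sort of fallback I
have in mind, assuming a Java servlet front-end. The attribute name and
the placeholder responses are things I've made up for illustration, not
anything in your code:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class AggServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            // Don't create a session for clients that never asked for one.
            HttpSession session = req.getSession(false);

            // The DDS handler would call req.getSession(true) and record the
            // time (or version) of the aggregation it handed out.
            Long snapshotTime = (session == null)
                    ? null
                    : (Long) session.getAttribute("aggSnapshotTime");

            if (snapshotTime != null) {
                // Session-aware client: serve the aggregation as it looked
                // when the session (and its DDS) was first handed out.
                res.getWriter().println("serving snapshot from " + snapshotTime);
            } else {
                // Sessionless client: serve the live aggregation and accept
                // that it may have changed since the DDS was fetched.
                res.getWriter().println("serving current aggregation");
            }
        }
    }

The point being that the sessionless path stays exactly what it is
today, and only session-aware clients get the extra guarantee.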
What about introducing an "unlimited" vector into the aggregation? If
you joined along a new dimension, then a request for (say) the first
10 records would always come back with the same thing, even though more
data might now be available "at the other end". If you know your data is
going to be highly dynamic, then this doesn't seem unreasonable. You
might even implement a new kind of request for new additions to an old
aggregation.
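Roughly, the property I'm after is something like this -- just a toy
sketch of the indexing idea in Java, nothing to do with your actual
aggregation code:

    import java.util.ArrayList;
    import java.util.List;

    class StableAggregation {
        // Append-only list of granule files, ordered along the unlimited
        // (time) dimension; new data only ever appears "at the other end".
        private final List<String> granules = new ArrayList<String>();

        void addGranule(String fileName) {
            granules.add(fileName);
        }

        // A request for the first n records is reproducible: indices 0..n-1
        // keep resolving to the same files even after more granules arrive.
        List<String> firstRecords(int n) {
            int end = Math.min(n, granules.size());
            return new ArrayList<String>(granules.subList(0, end));
        }
    }

The 0Z deletion still breaks it, of course, unless you keep placeholders
for the removed files or advertise a moving index origin.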
I've quoted a bit from your other email, which prompted me to go and
re-examine this one.
> The problem is: every time you do a data request, will you examine the
> entire DDX and possibly some of the data like coordinate systems to
> make sure nothing has changed?
I wouldn't usually bother, but if I were setting something up to track
the changes I *could* do it. I'm also thinking of the scenario where a
user is asking for the file as NetCDF and saving it to local disk. If the
information only exists in the DDX, might that not be a problem?
Here's another option -- what if each DDX contained a unique identifier
-- such as the date and time to high precision? Further requests would
include this as a "currency" indicator. The server would then only
aggregate files which themselves have a creation date before that indicator.
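A sketch of what I mean, with made-up names, and assuming the DDX
timestamp comes back from the client as milliseconds:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    class CurrencyFilter {
        // Keep only the granules that already existed when the client got
        // its DDX, i.e. whose timestamp is not after the echoed indicator.
        static List<File> filesAsOf(List<File> candidates, long currencyMillis) {
            List<File> kept = new ArrayList<File>();
            for (File f : candidates) {
                // lastModified() stands in for a true creation timestamp.
                if (f.lastModified() <= currencyMillis) {
                    kept.add(f);
                }
            }
            return kept;
        }
    }

Files that have since been deleted would still be missing, but at least
the server could tell that the request refers to an older view of the
aggregation.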
Cheers,
-T
John Caron wrote:
> Greetings dods-techies:
> I'm reworking the agg server, and trying to solve all the complex
> problems that I ignored in the first version.
> So I have an aggregation that contains, say, 700 files, each of which
> is one time point. Every 15 minutes another file is added. Once a day
> at 0Z, the previous day's files (about 100) are deleted.
> The problem is that a client might come in, get the DDS and see a
> variable with dimension 700. Then they make a request for the first 10
> values. Meanwhile, 100 files have been deleted, so they get the wrong
> 10 back. Other problems are also possible.
> Because DAP is stateless, I don't see any way for the server to realize
> that the request is based on an old DAP that is now wrong.
> I don't see any way around this except to add a notion of a "session".
> A server could optionally send a session cookie, which the client must
> pass back on each request. Only servers that need sessions would have
> to implement this.
> With a session, I could keep track of when the aggregation dataset has
> been modified, and do something intelligent.
> Sessions would also help make authentication more efficient.
> Anyone have any thoughts?