Good points.
It should currently be possible to effectively disable
client-side caching by prefixing the client parameter
[fetchlimit=1] to your URL. This is admittedly awkward, so I
will add a specific client parameter, say [cache=1|0|yes|no],
to enable or disable the cache.
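For example, against the demo URL discussed later in this thread,
the current workaround and the proposed form would look something
like this (the [cache=...] form does not exist yet; it is only what
I have in mind):
  [fetchlimit=1]http://monsoondata.org:9090/dods/model
  [cache=0]http://monsoondata.org:9090/dods/model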
However, disabling the cache can cause noticeable performance
hits.
Anyone else have an opinion on this: cache disabled by default?
Jennifer Adams wrote:
Hi, Dennis --
I would argue that a better strategy for the client library would be to
minimize the size of data transfers from the servers at all times. Most,
if not all, OPeNDAP users are motivated by limited internet bandwidth
that makes it impossible to download entire data sets via FTP. Why pad
their requests unnecessarily? The client library should not anticipate
what the next request might be; it should get the requested data as fast
as possible. How about disabling over-requests and client-side caching
by default, and giving the user the option to override the settings in
.dodsrc?
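For example, an entry along these lines in ~/.dodsrc (using the
USE_CACHE keyword that shows up in the ncdump warnings quoted below;
whether the new client honors it is exactly the question):
  USE_CACHE=0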
In the GrADS community, many unexplained problems with OPeNDAP access
were solved by deleting the client-side cache and re-issuing the
request. I don't know how the cache got corrupted, but this
problem/solution came up so many times that I ended up recommending
that users abandon client-side caching completely.
--Jennifer
On Oct 26, 2009, at 10:11 PM, Dennis Heimbigner wrote:
The reason to download extra information is to cache it
on the client side so that some other client-side request
can be satisfied without contacting the server.
Currently, my netcdf client code does over-request
when the enlarged request is not "too large", as defined by a
user-configurable parameter.
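Roughly, the idea is this (using the demo dataset below as an
illustration; the indices are only an example): if a whole variable
fits under the limit, the client fetches and caches something like
  GET /model.dods?ta
even when the user only asked for a slice such as
  GET /model.dods?ta[0:1:0][0:1:0][0:1:45][0:1:71]
and if it does not fit, only the requested slice is fetched.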
=Dennis Heimbigner
Jennifer Adams wrote:
I've looked more closely at the GDS code ... although a .dods request
with no constraints at all is rejected automatically, there is
no restriction on the size of a subset request; the limit is not even
configurable. I guess this is because the subsets are never cached.
If the client has the bandwidth and the patience to attempt to
download a terabyte of data in a single request, the GDS will try to
comply. The server does allow me to constrain the size of server-side
analysis results -- that's because these are cached.
In any event, even if GDS did allow a .dods request without a
constraint, I think it would be a mistake to use that syntax to get
the coordinate data values. Why download ALL data when you just need
a 1-D array for a single variable?
--Jennifer
On Oct 26, 2009, at 4:41 PM, Dennis Heimbigner wrote:
The issue is really the size of the returned dataset.
I would much prefer that the server complain that
the caller is asking for too much data (as defined by some
preset limit) rather than force the addition of constraints.
Note that I can still ask for the whole dataset just by
putting in a set of constraint projections that ask for everything,
so using the mere presence of constraints as the test is not, IMO,
a good idea.
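For example, against the demo dataset below, a fully constrained
request like
  GET /model.dods?time,lev,lat,lon,ua,ps,va,zg,ta,hus,ts,pr
names every variable explicitly (the list is taken from the ncdump
output below) and so returns just as much data as the unconstrained
GET /model.dods that the server rejects.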
=Dennis Heimbigner
Jennifer Adams wrote:
Hi, Dennis, James, et al. -- I didn't know before today that a GDS
request with only the ".dods" extension is automatically rejected
by the server, but it makes sense ... it would mean delivering the
entire data set -- coordinate and data variables in their entirety
-- to the client. For large data sets this would be doomed to fail.
But fulfilling an 'ncdump -c' request does not require the entire
data set; only the .das, the .dds, and the data values for the
coordinate variables need to be retrieved.
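Something like this sequence of requests would cover it (the
constraint syntax here is the same as in my server logs quoted
further below):
  GET /model.das
  GET /model.dds
  GET /model.dods?time[0:1:4]
  GET /model.dods?lev[0:1:6]
  GET /model.dods?lat[0:1:45]
  GET /model.dods?lon[0:1:71]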
One other detail ... when I try 'ncdump -h', the server delivers the
.das and .dds without errors, but ncdump shows only 1 of the 4
coordinate variables. --Jennifer
On Oct 26, 2009, at 2:29 PM, Dennis Heimbigner wrote:
Jennifer-
As you discovered, some servers (including GrADS)
will not serve up a whole dataset, which is what a request for,
say,
http://monsoondata.org:9090/dods/model
amounts to. Other servers do not require constraints as
part of the DAP request.
[James- is the GrADS behavior (requiring a constraint)
proper or improper behavior for an OPeNDAP server?]
=Dennis Heimbigner
Jennifer Adams wrote:
Dear Experts -- I am new to this group but not to NetCDF or
OPeNDAP. I have been testing netcdf-4.1 for use with GrADS. I had
noticed some problems ncdump was having in getting the attribute
metadata for the coordinate axes of an OPeNDAP data set. I tried
it with the latest snapshot (dated 2009102000) and ncdump is
working much better, but it is not yet getting everything right. The
data set in question is a 4-dimensional, low-resolution demo data set
used for the tutorial and for testing, served from a GrADS Data Server.
Before running ncdump, I followed the recommendation from an
earlier post and set the environment variable OCLOGFILE to
<blank>. Here is the output:
ncdump -c http://monsoondata.org:9090/dods/model
# ./ncdump -c http://cola51x:9090/dods/model
Warning::USE_CACHE: not currently supported
Warning::MAX_CACHE_SIZE: not currently supported
Warning::MAX_CACHED_OBJ: not currently supported
Warning::IGNORE_EXPIRES: not currently supported
Warning::CACHE_ROOT: not currently supported
Warning::DEFAULT_EXPIRES: not currently supported
Warning::ALWAYS_VALIDATE: not currently supported
netcdf model {
dimensions:
lat = 46 ;
lev = 7 ;
lon = 72 ;
time = 5 ;
variables:
double lat(lat) ;
lat:grads_dim = "y" ;
lat:grads_mapping = "linear" ;
lat:grads_size = "46" ;
lat:units = "degrees_north" ;
lat:long_name = "latitude" ;
lat:minimum = -90. ;
lat:maximum = 90. ;
lat:resolution = 4.f ;
float ua(time, lev, lat, lon) ;
ua:_FillValue = 1.e+20f ;
ua:missing_value = 1.e+20f ;
ua:long_name = "eastward wind [m/s] " ;
float ps(time, lat, lon) ;
ps:_FillValue = 1.e+20f ;
ps:missing_value = 1.e+20f ;
ps:long_name = "surface pressure [hpa] " ;
float va(time, lev, lat, lon) ;
va:_FillValue = 1.e+20f ;
va:missing_value = 1.e+20f ;
va:long_name = "northward wind [m/s] " ;
float zg(time, lev, lat, lon) ;
zg:_FillValue = 1.e+20f ;
zg:missing_value = 1.e+20f ;
zg:long_name = "geopotential height [m] " ;
float ta(time, lev, lat, lon) ;
ta:_FillValue = 1.e+20f ;
ta:missing_value = 1.e+20f ;
ta:long_name = "air temperature [k] " ;
float hus(time, lev, lat, lon) ;
hus:_FillValue = 1.e+20f ;
hus:missing_value = 1.e+20f ;
hus:long_name = "specific humidity [kg/kg] " ;
float ts(time, lat, lon) ;
ts:_FillValue = 1.e+20f ;
ts:missing_value = 1.e+20f ;
ts:long_name = "surface (2m) air temperature [k] " ;
float pr(time, lat, lon) ;
pr:_FillValue = 1.e+20f ;
pr:missing_value = 1.e+20f ;
pr:long_name = "total precipitation rate [kg/(m^2*s)] " ;
// global attributes:
:title = "Sample Model Data" ;
:Conventions = "COARDSGrADS" ;
:dataType = "Grid" ;
:history = "Mon Oct 26 12:59:29 EDT 2009 : imported by GrADS Data Server 2.0" ;
data:
Error: :oc_open: server error retrieving url: code=0
message="subset requests must include a constraint expression"
./ncdump: NetCDF: DAP server side error
lat =
Since I am the administrator of the server, I can look in the logs
and see what went wrong. The request that failed and led to the
error message above was:
...GET /model.dods
The request from the client (ncdump) should have the
?varname[constraints] syntax following ".dods", like this:
...GET /model.dods?time[0:1:4]
...GET /model.dods?lev[0:1:6]
...GET /model.dods?lat[0:1:45]
...GET /model.dods?lon[0:1:71]
I also note that only 1 of the 4 coordinate axes (lat) shows up as
a data variable. I hope this is helpful information. The DAP
interface in netcdf-4.1 is vital to GrADS; I am eager to adopt it
as soon as it is working properly. --Jennifer
--
Jennifer M. Adams
IGES/COLA
4041 Powder Mill Road, Suite 302
Calverton, MD 20705
jma@xxxxxxxxxxxxx
------------------------------------------------------------------------
_______________________________________________
netcdfgroup mailing list
netcdfgroup@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: http://www.unidata.ucar.edu/mailing_lists/