NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
Robert E. McGrath wrote:
On 2003.12.16 15:05 Ed Hartnett wrote:

Howdy all! Another question relating to chunking - if we don't need it (i.e. for a dataset with no unlimited dimensions), do we still chunk it? Or is it better to leave it contiguous? (With the mental reservation that only chunked datasets will be able to take advantage of compression, when we get to that feature.)

Thanks!
Ed

Chunking can greatly improve performance on any partial I/O: only the chunks that cover the request need to be read. For large datasets, you don't want to read the whole thing into memory to pick out a subset. Again, chunking controls the units that will be read/written to the disk: if the dataset is much larger than reasonable reads/writes, then chunking can control this. On the other hand, there is overhead for chunking, so you may not want to use it. E.g., for a small dataset that would fit in a single chunk, why bother?
Are you saying that if a dataset isn't chunked, you have to read the entire thing into memory before you subset?
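For context, here is a minimal sketch of the two layouts being discussed, written against the HDF5 1.6-era C API that was current when this thread took place (the five-argument H5Dcreate; later releases spell this H5Dcreate1, or add two extra property-list arguments to H5Dcreate2). The file name, dataset names, chunk sizes, and deflate level are illustrative assumptions, not values from the thread.

    /* Sketch: one contiguous and one chunked+compressed dataset in one file.
     * Assumes the HDF5 1.6-era C API; names and sizes are made up. */
    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[2]  = {4000, 4000};   /* whole dataset, fixed dimensions */
        hsize_t chunk[2] = {500, 500};     /* I/O unit for the chunked layout */

        hid_t file  = H5Fcreate("layout_demo.h5", H5F_ACC_TRUNC,
                                H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(2, dims, NULL);

        /* Contiguous layout: the default when no chunking is requested. */
        hid_t dset_contig = H5Dcreate(file, "contiguous", H5T_NATIVE_FLOAT,
                                      space, H5P_DEFAULT);

        /* Chunked layout: required for compression filters and for
         * datasets with unlimited (extendible) dimensions. */
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);
        H5Pset_deflate(dcpl, 6);           /* gzip, level 6 */
        hid_t dset_chunk = H5Dcreate(file, "chunked", H5T_NATIVE_FLOAT,
                                     space, dcpl);

        H5Dclose(dset_contig);
        H5Dclose(dset_chunk);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }

A partial read of the chunked dataset only touches the chunks that intersect the selected hyperslab, and chunking is the only layout that can carry the deflate filter, while the contiguous dataset avoids the per-chunk bookkeeping overhead mentioned above.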