NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
Hi John,

> the motivation for me would be to use bit-packing as a storage format,
> not a data type. we would add an option to pack wider types (usually
> float/double) using a scale/offset. this can get you a factor of 2-4 or
> so, whereas compression may not get you anything.
>
> however, this would only work if it remains a valid hdf5 file. It would
> be most useful if we can do arbitrary bit widths, but still useful if we
> are limited to multiples of 8.

If we used a pipeline filter to implement this as a compression mechanism,
the file would be fine and the higher levels would be unaware of any change
in the storage format. It still would require some serious thought about
how to handle bitfields in complicated datatypes.

Also, you seem to be branching in two different directions - just
extracting the bitfield from the type to make things smaller on disk, and
a different "compression" mechanism which performs scale/offset operations
on data as it is written to disk. The first would just be a bit operation,
whereas the latter would be arithmetic operations.

Quincey
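For illustration, a minimal C sketch of the scale/offset packing discussed above: a double is mapped onto a 16-bit integer, a factor-of-4 reduction in storage. The function names, the 16-bit target width, and the rounding choice are assumptions made for this sketch, not part of any HDF5 or netCDF interface.

    /* Sketch only: pack/unpack a double via a per-dataset scale and
     * offset, the kind of transform a pipeline filter could apply on
     * the way to disk.  The caller must choose offset/scale so that
     * every packed value fits in 16 bits. */
    #include <stdint.h>
    #include <math.h>

    static uint16_t pack_scaled(double value, double offset, double scale)
    {
        /* Map value onto [0, 65535]; precision is one scale step. */
        return (uint16_t)lround((value - offset) / scale);
    }

    static double unpack_scaled(uint16_t packed, double offset, double scale)
    {
        /* Reverse the mapping; the original value is recovered only
         * to within +/- scale/2. */
        return offset + (double)packed * scale;
    }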