NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
Hi Ed,

> > Bitfields are a black sheep in the datatype family and aren't terribly
> > well documented (which we're trying to work on). Say something if you think
> > we've got a terrible gap about them somewhere.
>
> Well, I know a terrible gap about them in my brain... :-)
>
> > > Is there an example somewhere about using bitfields in HDF5?
> >
> > Hmm, you can look in the test/dtypes.c for some examples of using them.
> > Search for "H5T_STD_B"...
>
> OK, here's what I'm seeing about creating a bitfield...
>
> hid_t st=-1, dt=-1;
> st = H5Tcopy(H5T_STD_B16LE);
> H5Tset_precision(st, 12);
> H5Tset_offset(st, 2);
>
> Does this pretty much sum it up? I H5Tcopy an integer type big enough
> to hold it, and then set precision and offset?

    Yes, that's pretty much all.

> > > Or can you just tell me what functions would be used to create a
> > > bitfield?
> >
> > The H5Tset_precision() routine determines the number of bits in a
> > datatype that are significant within it.
> >
> > > Limits on number of bits?
> >
> > Up to the size of the datatype that contains it (which is defined for up
> > to 64-bit datatypes currently).
> >
> > > How are these stored then? Any sort of padding or what?
> >
> > We currently don't pack them, so a 13-bit field in a 32-bit datatype
> > still takes up 4 bytes of space. Frankly, I think this is a bit of a
> > bug, but it's a fairly complicated problem to pack the bits on disk
> > (in light of using bitfields in compound, array and variable-length
> > datatypes mostly) and no one has whined strongly about it, so it's been
> > the status quo for a while now. :-/
>
> Ah ha! That sounds important.
>
> I think storage (and transmission) efficiency is what this whole
> feature is about for Russ...
>
> Russ, is that correct? The goal here is to store and move large
> amounts of bitfield data efficiently?
>
> Otherwise, what is the point of a bitfield in C/C++ or Fortran 77? I
> don't know about F90 - does it have a good way to deal with bitfields?
>
> Perhaps we should ask whether compression is a better thing to use
> to achieve storage efficiency?

    It would be fairly straightforward to implement a pipeline filter that
"compressed" data by packing out the unused bits for bitfield datatypes.
(At least for non-compound/array/variable-length combinations :-).

    Quincey
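A complete program built from the H5Tcopy()/H5Tset_precision()/H5Tset_offset() recipe discussed above might look roughly like the sketch below. It is not from the original thread: the file name ("bitfield.h5"), dataset name ("flags"), and sample values are invented for illustration, error checking is omitted, and H5Dcreate() is shown with its HDF5 1.6-era five-argument signature (later releases add property-list arguments).

#include "hdf5.h"

int main(void)
{
    hid_t          file, space, ftype, mtype, dset;
    hsize_t        dims[1] = {8};
    unsigned short data[8];      /* 16-bit words carrying a 12-bit field */
    int            i;

    /* Shift each sample value left by 2 so it occupies bits 2..13,
     * matching the offset used below. */
    for (i = 0; i < 8; i++)
        data[i] = (unsigned short)((i * 100) << 2);

    /* On-disk type: 16-bit little-endian bitfield, 12 significant bits
     * starting at bit offset 2 -- the snippet quoted in the message. */
    ftype = H5Tcopy(H5T_STD_B16LE);
    H5Tset_precision(ftype, 12);
    H5Tset_offset(ftype, 2);

    /* Memory type: the same 12-bit field at offset 2, native byte order. */
    mtype = H5Tcopy(H5T_NATIVE_B16);
    H5Tset_precision(mtype, 12);
    H5Tset_offset(mtype, 2);

    file  = H5Fcreate("bitfield.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    space = H5Screate_simple(1, dims, NULL);
    dset  = H5Dcreate(file, "flags", ftype, space, H5P_DEFAULT);  /* 1.6-era signature */

    H5Dwrite(dset, mtype, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Tclose(mtype);
    H5Tclose(ftype);
    H5Fclose(file);
    return 0;
}

As Quincey notes above, the four unused bits of each 16-bit word are still written to disk; the precision and offset only declare which bits are significant.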
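On the compression question raised at the end of the message: short of writing the custom bit-packing filter Quincey describes, the standard deflate filter can already be applied to a chunked bitfield dataset, and the constant unused bits then compress away on disk. This is again only a sketch, assuming the same 16-bit/12-bit type as above; the names, chunk size, and compression level are illustrative, and error checking is omitted.

#include "hdf5.h"

int main(void)
{
    hid_t   file, space, btype, dcpl, dset;
    hsize_t dims[1]  = {1024};
    hsize_t chunk[1] = {256};

    btype = H5Tcopy(H5T_STD_B16LE);
    H5Tset_precision(btype, 12);
    H5Tset_offset(btype, 2);

    /* Compression filters in HDF5 require a chunked dataset layout. */
    dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_deflate(dcpl, 6);                 /* gzip, level 6 */

    file  = H5Fcreate("bitfield_gz.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    space = H5Screate_simple(1, dims, NULL);

    /* 1.6-era H5Dcreate signature; the fifth argument is the dataset
     * creation property list carrying the chunking and filter settings. */
    dset  = H5Dcreate(file, "flags", btype, space, dcpl);

    H5Dclose(dset);
    H5Sclose(space);
    H5Pclose(dcpl);
    H5Tclose(btype);
    H5Fclose(file);
    return 0;
}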