Re: [netcdfgroup] Files with compound types ARE portable!

Hi Jeff,

I am just as confused as you are.

I don't understand why netCDF has to store the offsets in the file, since
they only apply on the machine that wrote the data. If the target machine
uses a different memory alignment, the stored offsets are useless there.
Instead, new offsets should be computed on the target machine (or at least
the user should be able to redefine them manually).
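
For reference, here is the read-side pattern I would expect to work (an
untested sketch, assuming the test.h5 file and the "i"/"j" field names from
Jeff's program quoted below). The memory compound type is built from the
reader's own struct offsets, so whatever offsets ended up stored in the file
should not have to match this machine's packing:

#include <stdio.h>
#include "hdf5.h"

int
main()
{
    /* The reader declares its own struct; its padding/alignment may
       differ from the writer's. */
    struct s1
    {
        short i;
        long long j;
    };

    struct s1 data[1];
    hid_t file, dataset, mem_tid;

    /* Memory compound type built from *this* machine's offsets. */
    mem_tid = H5Tcreate(H5T_COMPOUND, sizeof(struct s1));
    H5Tinsert(mem_tid, "i", HOFFSET(struct s1, i), H5T_NATIVE_SHORT);
    H5Tinsert(mem_tid, "j", HOFFSET(struct s1, j), H5T_NATIVE_LLONG);

    file = H5Fopen("test.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    dataset = H5Dopen(file, "phony_dataset", H5P_DEFAULT);

    /* HDF5 converts from the file layout to the layout described by
       mem_tid when reading. */
    H5Dread(dataset, mem_tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    printf("i = %d, j = %lld\n", data[0].i, data[0].j);

    H5Dclose(dataset);
    H5Tclose(mem_tid);
    H5Fclose(file);
    return 0;
}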

I have run into this same problem because I am trying to write a generic
app in C that can write arbitrary compound types, i.e. compound types for
which I don't have a corresponding "struct" in memory. If I could pass
arbitrary offsets, I could build the compound type and fill in the
variables. However, if those arbitrary offsets are stored in the file and
then applied on the target computer, everything gets messed up.
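
To make this concrete, here is roughly what I am trying to do (again an
untested sketch; the file name, dimension/variable names and the hand-picked
offsets below are made up, and nothing is declared as a C struct; the
offsets are simply computed at run time):

#include <stdlib.h>
#include <string.h>
#include <netcdf.h>

int
main()
{
    int ncid, dimid, varid;
    nc_type typeid;
    /* Hand-picked, fully packed layout: a short at offset 0 and a
       long long at offset 2. No struct is declared anywhere. */
    size_t size = 2 + 8;
    short i_val = 20000;
    long long j_val = 300000;
    unsigned char *buf;

    nc_create("generic.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);

    /* Define the compound type from offsets computed at run time. */
    nc_def_compound(ncid, size, "s1", &typeid);
    nc_insert_compound(ncid, typeid, "i", 0, NC_SHORT);
    nc_insert_compound(ncid, typeid, "j", 2, NC_INT64);

    nc_def_dim(ncid, "n", 1, &dimid);
    nc_def_var(ncid, "data", typeid, 1, &dimid, &varid);

    /* Fill a raw byte buffer using the same hand-computed offsets. */
    buf = malloc(size);
    memcpy(buf + 0, &i_val, sizeof(i_val));
    memcpy(buf + 2, &j_val, sizeof(j_val));

    nc_put_var(ncid, varid, buf);

    free(buf);
    nc_close(ncid);
    return 0;
}

The question is whether those offsets are treated as the layout of my
in-memory buffer, as the layout stored in the file, or both.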

So, did you ever resolve this issue?

Thanks,

Felipe


Jeff Whitaker wrote:
> 
> Ed Hartnett wrote:
>> Jeff Whitaker <jswhit@xxxxxxxxxxx> writes:
>>
>>   
>>> Concerning packing of structs, one of us is very confused about how
>>> HDF5 compound types work (and it's probably me). I thought that you
>>> could specify arbitrary offsets that do not necessarily correspond to
>>> the default alignment of your C compiler, and HDF5 would take care of
>>> everything when you read the data back in.  
>>>     
>>
>> No, sorry, this turns out not to be the case.
>>
>>   
>>> Otherwise, how would you
>>> read a file created with HDF5 on a platform with a different default
>>> alignment than the one it was written on?  Isn't the whole point of
>>> the HDF5 layer that you don't have worry about the default alignment
>>> of structs for the C compiler?
>>>     
>>
>> HDF5 can handle it, but not if you change the alignment of your struct
>> with a compiler directive!
>>
>> HDF5 figures out packing when it is built on your machine, in the HDF5
>> configure script. Using any other packing than the one HDF5 figured out
>> at its build time will result in confusion.
>>
>> So if you want a different packing of your struct, you must specify the
>> packing options you want with compiler flags, and make sure you use
>> those flags when building HDF5 (and netCDF-4, and your own program).
>>
>> I have forwarded your question to the HDF5 team to ensure that I am
>> telling you the correct answer, and to see if they can help explain this
>> any more clearly.
>>
>> Thanks!
>>
>> Ed
>>   
> Ed:  Attached is an HDF5 version of the netcdf program I sent before.   
> If you run this and look at the output with h5dump, you will see that 
> the output is the same whether the __packed__ attribute is set or not.  
> This is evidence that you should be able to align (i.e. pack) your 
> structs however you want, regardless of what packing HDF5 detected at 
> compile time. If this weren't the case, then why even require the user 
> to pass the offsets to nc_insert_compound?
> 
> It seems to me that somehow netcdf is not using the offsets the user 
> passed to nc_insert_compound correctly.
> 
> -Jeff
> 
> #include "hdf5.h"
> 
> int
> main()
> {
>     hid_t      s1_tid;     /* File datatype identifier */
>     hid_t      file, dataset, space; /* Handles */
>     herr_t     status;
>     hsize_t    dim[] = {1};   /* Dataspace dimensions */
> 
>     struct s1 
>     {
>           short i;
>           long long j;
>     /*};*/
>     } __attribute__ ((__packed__));
> 
>     struct s1 data[1];
> 
>     /* Create some phony data. */   
>     data[0].i = 20000;
>     data[0].j = 300000;
> 
>     /*
>      * Create the data space.
>      */
>     space = H5Screate_simple(1, dim, NULL);
> 
>     /*
>      * Create the file.
>      */
>     file = H5Fcreate("test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
> 
>     /*
>      * Create the memory datatype. 
>      */
>     s1_tid = H5Tcreate (H5T_COMPOUND, sizeof(struct s1));
>     H5Tinsert(s1_tid, "i", HOFFSET(struct s1, i), H5T_NATIVE_SHORT);
>     H5Tinsert(s1_tid, "j", HOFFSET(struct s1, j), H5T_NATIVE_LLONG);
> 
>     /* 
>      * Create the dataset.
>      */
>     dataset = H5Dcreate(file, "phony_dataset", s1_tid, space,
>                         H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
> 
>     /*
>      * Write data to the dataset; 
>      */
>     status = H5Dwrite(dataset, s1_tid, H5S_ALL, H5S_ALL, H5P_DEFAULT,
>                       data);
> 
>     /*
>      * Release resources
>      */
>     H5Tclose(s1_tid);
>     H5Sclose(space);
>     H5Dclose(dataset);
>     H5Fclose(file);
> 
>     return 0;
> }
> 



