On Tue, May 20, 2014 at 8:43 AM, Rob Latham <robl@xxxxxxxxxxx> wrote:
>> I notice that the converted NetCDF file is always double the size of the
>> ASCII file, whereas I was hoping for it to be much reduced. I was
>> therefore wondering if this is expected, or more due to my poor
>> representation of the ASCII records in NetCDF?
>
>
It's not going to be smaller unless you compress it (which might be a good
idea...).
But I can see that you wouldn't expect, or want, a much larger file.
NetCDF-4 uses HDF5 under the hood, and HDF5 can have a fair bit of overhead
in the file to support its nifty features, like compression and multiple
unlimited dimensions.
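
To get compression you need the NetCDF-4 format plus a deflate filter on
each variable, set in define mode. Here's a minimal sketch in C -- the file
name, dimension names, record width (80), and deflate level are all my
assumptions, not your actual schema:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netcdf.h>

#define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
    fprintf(stderr, "netcdf error: %s\n", nc_strerror(s_)); exit(1); } } while (0)

int main(void)
{
    int ncid, rec_dim, len_dim, varid, dimids[2];

    /* Compression is only available with the NetCDF-4 (HDF5) format. */
    CHECK(nc_create("records.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));

    CHECK(nc_def_dim(ncid, "record", NC_UNLIMITED, &rec_dim));
    CHECK(nc_def_dim(ncid, "record_len", 80, &len_dim)); /* assumed width */
    dimids[0] = rec_dim;
    dimids[1] = len_dim;
    CHECK(nc_def_var(ncid, "records", NC_CHAR, 2, dimids, &varid));

    /* shuffle on, deflate on, level 4 -- a middle-of-the-road setting */
    CHECK(nc_def_var_deflate(ncid, varid, 1, 1, 4));
    CHECK(nc_enddef(ncid));

    /* Write one record, the way you described, with nc_put_vara_text(). */
    const char *rec = "an example ASCII record";
    size_t start[2] = {0, 0};
    size_t count[2] = {1, strlen(rec)};
    CHECK(nc_put_vara_text(ncid, varid, start, count, rec));

    CHECK(nc_close(ncid));
    return 0;
}
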
When working with these files, chunking can have a big impact on both
performance and file size.
Until the latest release, the default chunk sizes were not very good
for your use case (resulting in large, slow-to-write files).
So I'd look into adjusting the chunk size and see what that does for you.
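
Continuing that sketch (same assumed names and sizes), you'd set an
explicit chunk shape in define mode, before the nc_enddef() call:

/* Chunk 1024 records at a time. These numbers are guesses, not a
 * recommendation -- tune them to your record count and write pattern. */
size_t chunks[2] = {1024, 80};  /* {records per chunk, record_len} */
CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));

Turning on deflate already forces chunked storage, so this call just
replaces the library's default chunk shape with your own.
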
-Chris
>> I am using
>> nc_put_vara_text() to write my records. Maybe I need to introduce
>> compression, which I'm not doing already?
>>
>
> Are you using the classic file format or the NetCDF-4 file format?
>
> Can you provide an ncdump -h of the new file?
>
> ==rob
>
>
>> Thanks in advance for any advice you can provide.
>>
>> Regards,
>>
>> Tim.
>>
>>
> --
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
>
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker@xxxxxxxx