That's great! Does the library allocate a temporary buffer to do the conversion? If my array of doubles is very large, say 20 GB, and I'm running on a system with 24 GB of memory, will it work?
--Jennifer
On Nov 29, 2010, at 9:32 AM, Jim Edwards wrote:
As long as your netCDF file defines the variable as float, you can skip steps 1 and 2 and call nc_put_vara_double(); netCDF will do the conversion internally.
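A minimal sketch of that approach (the file, dimension, and variable names here are made up for illustration):

/* The variable is defined as NC_FLOAT in the file, but the data are
 * handed to the library as doubles and converted internally. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(e) do { int s = (e); if (s != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(s)); exit(1); } } while (0)

int main(void)
{
    int ncid, dimid, varid;
    size_t n = 1000;
    size_t start[1] = {0}, count[1] = {n};
    double *data = malloc(n * sizeof(double));
    for (size_t i = 0; i < n; i++) data[i] = (double)i;

    CHECK(nc_create("example.nc", NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "x", n, &dimid));
    /* external type is float ... */
    CHECK(nc_def_var(ncid, "var", NC_FLOAT, 1, &dimid, &varid));
    CHECK(nc_enddef(ncid));

    /* ... but we pass doubles; the library converts on the way out */
    CHECK(nc_put_vara_double(ncid, varid, start, count, data));

    CHECK(nc_close(ncid));
    free(data);
    return 0;
}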
On Mon, Nov 29, 2010 at 7:29 AM, Jennifer Adams <jma@xxxxxxxxxxxxx>
wrote:
Dear Experts,
The documentation says, "...if you write a program that deals with all numeric data as double-precision floating point values, you can read netCDF data into double-precision arrays without knowing or caring what the external type of the netCDF variables are." This is very handy, and I wonder whether the same kind of easy type conversion is available when writing data out.
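For reference, the read side described there looks roughly like this (the file and variable names are invented for illustration):

#include <netcdf.h>

/* The variable may be stored as float (or short, int, ...) in the file,
 * but nc_get_vara_double() delivers doubles; the conversion from the
 * external type happens inside the library. */
int read_as_double(const char *path, const char *varname,
                   size_t n, double *out)
{
    int ncid, varid, status;
    size_t start[1] = {0}, count[1] = {n};

    if ((status = nc_open(path, NC_NOWRITE, &ncid)) != NC_NOERR)
        return status;
    if ((status = nc_inq_varid(ncid, varname, &varid)) != NC_NOERR) {
        nc_close(ncid);
        return status;
    }
    status = nc_get_vara_double(ncid, varid, start, count, out);
    nc_close(ncid);
    return status;
}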
When my program, which handles all numeric data as double precision, needs to write out single-precision floating point values, I use this algorithm:
1. allocate a new array of floats equal in size to the array of
doubles
2. loop over elements in the arrays and explicitly cast the data
values from double to float
3. call nc_put_vara_float() to write the floats to the output file
Is there a way to do this that would not require the memory
allocation in step 1?
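For concreteness, a sketch of those three steps (the helper name and error handling are illustrative; varid refers to the output variable):

#include <stdlib.h>
#include <netcdf.h>

int write_doubles_as_floats(int ncid, int varid,
                            const size_t *start, const size_t *count,
                            const double *data, size_t n)
{
    /* step 1: allocate a float array equal in size to the double array */
    float *tmp = malloc(n * sizeof(float));
    if (tmp == NULL) return NC_ENOMEM;

    /* step 2: explicitly cast each value from double to float */
    for (size_t i = 0; i < n; i++)
        tmp[i] = (float)data[i];

    /* step 3: write the floats to the output file */
    int status = nc_put_vara_float(ncid, varid, start, count, tmp);

    free(tmp);
    return status;
}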
--Jennifer
--
Jennifer M. Adams
IGES/COLA
4041 Powder Mill Road, Suite 302
Calverton, MD 20705
jma@xxxxxxxxxxxxx