
Re: UNLIMITED long long



Hi Jean,

For the netCDF-3 64-bit-offset format, the maximum size of the UNLIMITED dimension is 2^32 - 1 records (4,294,967,295), the largest unsigned integer that fits in the 32-bit slot used for the number of records in the classic and 64-bit-offset formats.
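As a minimal sketch (not part of the original exchange), here is one way to check a requested record count against that limit before committing to the classic or 64-bit-offset format; the constant name MAX_CLASSIC_RECS is invented here for illustration:

#include <stdio.h>

#define MAX_CLASSIC_RECS 4294967295ULL   /* 2^32 - 1, classic/64-bit-offset record limit */

int main(void)
{
    unsigned long long wanted = 3832812000ULL;   /* record count from the question below */
    printf("%llu records %s in a classic or 64-bit-offset file\n",
           wanted,
           wanted <= MAX_CLASSIC_RECS ? "fit" : "do not fit");
    return 0;
}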

I'm not sure why you're getting a segmentation fault.  I've attached a C program that creates a file with 3,900,000,000 records.  The program finishes and prints a line indicating success, which you can verify by running "ncdump -h" on the file it creates.  If this program also gets a segmentation fault on your system, your file system may not be configured for large files; see the answer to the FAQ "Why do I get an error message when I try to create a file larger than 2 GiB with the new library?"

If you change the format to netCDF-4 classic model, by replacing "NC_64BIT_OFFSET" with "NC_NETCDF4 | NC_CLASSIC_MODEL" in the nc_create() call, and also increase NUM_RECS, the same program can create files with many more records (2^63 - 1 ?), probably enough to fill up your file system.  This is because the HDF5 library uses a 64-bit type for unlimited-dimension sizes.
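For reference, that substitution would look roughly like this in the attached program (a sketch of the change described above, not a separately tested variant):

   /* netCDF-4 classic model instead of 64-bit offset */
   if ((retval = nc_create(FILE_NAME, NC_NETCDF4 | NC_CLASSIC_MODEL, &ncid)))
      ERR(retval);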

--Russ




On Tue, Jul 29, 2014 at 2:37 PM, <address@hidden> wrote:
I have gcc code that I've been using for a long time that creates a file like this:
netcdf ki_040_b_ha {
dimensions:
        lon = 646 ;
        lat = 720 ;
        grid_lon = 2581 ;
        grid_lat = 2879 ;
        time = 1441 ;
        index = UNLIMITED ; // (190827214 currently)
variables:
        int start(lat, lon) ;
                start:long_name = "Starting Index" ;
                start:_FillValue = -1 ;
                start:missing_value = -1 ;
        int end(lat, lon) ;
                end:long_name = "Ending Index" ;
                end:_FillValue = -1 ;
                end:missing_value = -1 ;
        int start_time(lat, lon) ;
                start_time:long_name = "Time of Starting Index" ;
                start_time:_FillValue = -1 ;
                start_time:missing_value = -1 ;
        double lon(lon) ;
                lon:long_name = "longitude" ;
                lon:units = "degrees_east" ;
                lon:point_spacing = "even" ;
        double lat(lat) ;
                lat:long_name = "latitude" ;
                lat:units = "degrees_north" ;
                lat:point_spacing = "uneven" ;
        float grid_lon(grid_lon) ;
                grid_lon:long_name = "Grid Longitude" ;
                grid_lon:units = "degrees_east" ;
                grid_lon:point_spacing = "even" ;
        float grid_lat(grid_lat) ;
                grid_lat:long_name = "Grid Latitude" ;
                grid_lat:units = "degrees_north" ;
                grid_lat:point_spacing = "uneven" ;
        float bathymetry(grid_lat, grid_lon) ;
                bathymetry:long_name = "Grid Bathymetry" ;
                bathymetry:standard_name = "depth" ;
                bathymetry:units = "meters" ;
                bathymetry:missing_value = -1.e+34f ;
                bathymetry:_FillValue = -1.e+34f ;
        float deformation(grid_lat, grid_lon) ;
                deformation:long_name = "Grid Deformation" ;
                deformation:units = "meters" ;
                deformation:missing_value = -1.e+34f ;
                deformation:_FillValue = -1.e+34f ;
        float max_height(lat, lon) ;
                max_height:long_name = "Maximum Wave Amplitude" ;
                max_height:units = "cm" ;
                max_height:missing_value = -1.e+34f ;
                max_height:_FillValue = -1.e+34f ;
        float travel_time(lat, lon) ;
                travel_time:long_name = "Travel Time" ;
                travel_time:units = "hours" ;
                travel_time:_FillValue = -1.e+34f ;
                travel_time:missing_value = -1.e+34f ;
        double time(time) ;
                time:units = "seconds" ;
        byte ha(index) ;
                ha:units = "cm" ;
                ha:long_name = "Wave Amplitude" ;
 ...

Now I have files with a longer unlimited dimension, UNLIMITED = 3832812000. I've updated the code from the classic format to netCDF-4 and converted INTs to LONG LONGs (NC_INT64). The code runs with the shorter files, but with the longer ones I get a segmentation fault. Do you have any suggestions on where I can look for the problem? Can I have a 1D array with UNLIMITED = 3832812000 values? Or arrays with long long values?
Thank you for your time.  Jean
 ___________________________________________________________
  Jean Newman                             Tel: 206-526-6531
  NOAA Center for Tsunami Research
  NOAA/PMEL/OERD2 - UW/JISAO              FAX: 206-526-6485
  7600 Sand Point Way NE, Bldg. 3    address@hidden
  Seattle, WA 98115-6349                         address@hidden
 _________________________. URL: http://nctr.pmel.noaa.gov/
                         (__________________________________



--
Russ Rew
UCAR Unidata Program

/* Demonstrate writing a netCDF file with lots of records */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>             /* for memset() */
#include <netcdf.h>

/* This is the name of the data file we will create. */
#define FILE_NAME "large-unlim.nc"

#define NDIMS 2
#define FILL_SIZE 10000000
#define NUM_RECS  3900000000UL  /* 3,900,000,000 records (too big for a 32-bit int) */

/* Handle errors by printing an error message and exiting with a
 * non-zero status. */
#define ERRCODE 2
#define ERR(e) {printf("Error: %s\n", nc_strerror(e)); exit(ERRCODE);}

int
main()
{
   size_t i;
   int ncid, u_dimid, varid;
   int dimids[NDIMS];
   size_t start, count;
   char *data;
   int retval;

   data = malloc(FILL_SIZE * sizeof(char));
   if (!data) {                 /* make sure the write buffer was allocated */
      printf("Error: out of memory\n");
      exit(ERRCODE);
   }
   memset(data, 42, FILL_SIZE * sizeof(char));  /* fill with an arbitrary byte value */

   /* Create the file. */
   if ((retval = nc_create(FILE_NAME, NC_64BIT_OFFSET, &ncid)))
      ERR(retval);

   /* Define the UNLIMITED dimension and an associated variable. */
   if ((retval = nc_def_dim(ncid, "unlim", NC_UNLIMITED, &u_dimid)))
      ERR(retval);
   dimids[0] = u_dimid;
   if ((retval = nc_def_var(ncid, "bigvar", NC_BYTE, 1, 
                            dimids, &varid)))
      ERR(retval);
   if ((retval = nc_enddef(ncid)))
      ERR(retval);

   /* Write data to the file in batches of FILL_SIZE records at a
      time. */
   for (i = 0; i < NUM_RECS; i += FILL_SIZE) {
       start = i;
       count = FILL_SIZE;
       printf("writing %d records starting at %ld\n", FILL_SIZE, i);
       if ((retval = nc_put_vara(ncid, varid, &start, &count, data)))
           ERR(retval);
   }

   if ((retval = nc_close(ncid)))
      ERR(retval);

   printf("*** SUCCESS writing %ld records in example file %s!\n", i, 
FILE_NAME);
   return 0;
}
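
One way to build and run the attached program, assuming the netCDF C library and headers are installed in standard locations and the source is saved as large-unlim.c (the file name here is just an example):

   cc -o large-unlim large-unlim.c -lnetcdf
   ./large-unlim
   ncdump -h large-unlim.nc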