Enhancing the netCDF C++ Library and the Siphon Package

Aodhan Sweeney

This summer at Unidata I worked on expanding functionality in both the netCDF C++ library and the Python data access tool Siphon. Previously, the netCDF C++ library lacked important functionality that was available in the other netCDF libraries. Fortunately, adding this functionality is a straightforward process: I created wrappers in the C++ library that call existing functions in the C library. This lets those working in a C++ framework continue to use netCDF without sacrificing functionality.
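To give a flavor of the wrapper pattern, here is a minimal, self-contained sketch. The Var class and its setCompression() method are invented names for illustration, not the library's actual classes; only the C-library call nc_def_var_deflate() and the error reporting via nc_strerror() come from the real netCDF C API.

    #include <netcdf.h>      // the existing netCDF C API
    #include <stdexcept>

    // Turn a non-success status from the C library into an exception.
    static void check(int status) {
        if (status != NC_NOERR)
            throw std::runtime_error(nc_strerror(status));
    }

    // Illustrative wrapper class; the real library's classes differ.
    class Var {
    public:
        Var(int ncid, int varid) : ncid_(ncid), varid_(varid) {}

        // Expose zlib compression by delegating to the C function
        // nc_def_var_deflate() rather than reimplementing it.
        void setCompression(bool shuffle, bool deflate, int level) {
            check(nc_def_var_deflate(ncid_, varid_,
                                     shuffle ? 1 : 0,
                                     deflate ? 1 : 0,
                                     level));
        }

    private:
        int ncid_;
        int varid_;
    };

The point of the pattern is that the C++ layer adds type safety and exception-based error handling while all of the real work stays in the C library.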

[Read More]

NetCDF Zarr API

This document defines the variant of the netcdf-c library API that can be used to read and write NCZarr datasets, along with any special new flags or other parameter values. This document is expected to be consistent with the NetCDF ZARR Data Model Specification [1].

[Read More]

NetCDF ZARR Data Model Specification

This document defines the initial netCDF Zarr (NCZarr) data model to be implemented. As the Zarr version 3 specification progresses, this model will be extended to include new data types.

[Read More]

NCZarr Overview

The Unidata NetCDF group is proposing to provide access to cloud storage (e.g. Amazon S3) by mapping a subset of the full netCDF Enhanced (aka netCDF-4) data model to one or more existing data models that already have mappings to key-value-pair cloud storage systems.

The initial target is to map that subset of netCDF-4 to the Zarr data model [1]. As part of that effort, we intend to produce a set of related documents that provide a semi-formal definition of the following.

[Read More]

Chunking Algorithms for NetCDF-C

Unidata is in the process of developing a Zarr-based variant of netCDF. As part of this effort, it was necessary to implement some support for chunking. Specifically, the problem to be solved was extracting a hyperslab of data from an n-dimensional variable (an array, in Zarr parlance) that has been divided into chunks (in the HDF5 sense). Each chunk is stored independently in the data store -- Amazon S3, for example.

The algorithm takes a series of R slices of the form (first, stop, stride), where R is the rank of the variable. Note that a slice of the form (first, count, stride), as used by netCDF, is equivalent, since stop = first + count*stride. Together these slices define a hyperslab.

The goal is to compute the set of chunks that intersect the hyperslab, and then to extract the relevant data from that set of chunks to produce the hyperslab.
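As a concrete sketch of the per-dimension piece of this computation (not the library's actual implementation; the function name is invented for the example), the following program lists the chunks of a given length that contain at least one point of a slice (first, stop, stride). For the full rank-R case, the same computation runs once per dimension, and the intersecting chunk set is the cross product of the per-dimension lists.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Indices of all chunks of length chunklen that contain at least
    // one point of the slice {first, first+stride, ...}, bounded by stop.
    std::vector<std::size_t> intersectingChunks(std::size_t first,
                                                std::size_t stop,
                                                std::size_t stride,
                                                std::size_t chunklen) {
        std::vector<std::size_t> chunks;
        for (std::size_t i = first; i < stop; i += stride) {
            std::size_t c = i / chunklen;      // chunk holding point i
            if (chunks.empty() || chunks.back() != c)
                chunks.push_back(c);           // points ascend, so no duplicates
        }
        return chunks;
    }

    int main() {
        // A netCDF-style slice (first=2, count=5, stride=3) converts to
        // stop = 2 + 5*3 = 17; its points are 2, 5, 8, 11, 14. Over
        // chunks of length 4 those points fall in chunks 0, 1, 2, 3.
        for (std::size_t c : intersectingChunks(2, 17, 3, 4))
            std::cout << c << " ";
        std::cout << "\n";
        return 0;
    }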

[Read More]