I Heart Valgrind

How do I love Valgrind? Let me count the ways.

Valgrind is a neat little tool that replaces the memory-handling routines of the operating system with specially instrumented ones that also keep track of everything you are doing with memory. Then, if you don't free it, Valgrind can tell you.

All of this will seem unspeakably primitive to our Java programming friends. Sorry to bring up such a barbaric topic as memory management.

Like any such tool, when Valgrind was first used on netCDF code it issued many warnings and error reports. Most were actually warnings and memory errors in the test programs themselves (which don't get the kind of attention that the library code does - who tests their tests?). But some of the Valgrind messages pointed to real memory bugs in either HDF5 or netCDF-4.

The HDF5 team has been very pro-active in hunting down all the memory problems this process has uncovered, and since release 1.8.4 has been tightening up memory handling in HDF5. Meanwhile I have been doing the same for the netCDF-4 code.

The result is that (in my branch of the repository - soon to be merged into the main branch) there are very few memory leaks of any kind, and almost all the libsrc4 test programs now pass Valgrind with no errors or warnings. These changes will be part of the performance and bugfix release 4.1.2.

I love Valgrind because all previous tools I've used for this have been rather clumsy. Valgrind is the easiest way to memory test a program!

Data Format Summit Meeting

Last week, on Wednesday, the Unidata netCDF team spent the day with Quincey and Larry of the HDF5 team. This was great because we usually don't get to spend this much time with Quincey, and we worked out a lot of issues relating to netCDF/HDF5 interoperability.

I came away with the following action items:

  • switch to WEAK file close
  • enable write access for HDF5 files without creation ordering
  • deferred metadata read
  • show multi-dimensional atts as 1D, like Java
  • ignore reference types
  • try to allow attributes on user defined types
  • forget about stored property lists
  • throw away extra links to groups and objects (like Java does)
  • work with Kent/Elena on docs for NASA/GIP
  • hdf4: the netCDF v2 API writes as well as reads HDF4. How should this be handled?
  • John suggests not using EOS libraries but just recoding that functionality.
  • HDF5 team will release tool for those in big-endian wasteland. It will rewrite the file.
  • should store software version in netcdf-4 file somewhere in hidden att.
  • use HDF5 function to find file type, this supports user block
  • read gip article
  • update netCDF wikipedia page with format compatibility info
  • data models document for GIP?

I have been assured that this blog is write-only, so I don't have to explain any of the above, because no one is reading this! ;-)

The tasks above, when complete, will together add up to a lot more interoperability between netCDF-4 and existing HDF5 data files, allowing netCDF tools to be used on HDF5 files.

Unidata Developer's Blog
A weblog about software development by Unidata developers*