I've been using the Java NetCDF API and am running into several issues with
the temporary files that it creates.
For example, when a gzipped NetCDF file is opened, an uncompressed
version is written to disk before it is read. I assume this is because
the NetCDF API needs to seek within the file, which wouldn't be possible
if it remained gzipped. But (and this is the problem) the uncompressed
file is written to the SAME directory as the original file. This causes
three major problems (in increasing order of severity):
(a) The data directory (which ought to be treated as read-only) is being
    modified. This causes problems because of the I/O optimizations
    we do on real-time systems.
(b) The temporary file is not automatically cleaned up, so these
    temporaries start to fill up the disk. But simply removing the
    temporary when the original file is closed is not enough, because
    of problem (c).
(c) If two programs read the same gzipped file simultaneously, there is
    a potential for race conditions: one process may delete or overwrite
    the uncompressed file while the other is still reading it.
The same problem seems to exist when reading a Grib1 file (a .gbx temporary
file is created).
Is there any work-around for this problem?
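The only work-around I can see is to bypass the API's gzip handling
entirely: decompress the file myself into a temporary location I control,
and hand the API the uncompressed copy. A minimal sketch of the idea
(openGzipped() is my own hypothetical helper; it assumes NetcdfFile.open()
accepts the plain uncompressed path):

    import java.io.*;
    import java.util.zip.GZIPInputStream;
    import ucar.nc2.NetcdfFile;

    public class GzipWorkaround {
        /** Decompress gzFile into a temp file we control, then open that copy. */
        public static NetcdfFile openGzipped(File gzFile) throws IOException {
            // createTempFile guarantees a unique name under java.io.tmpdir,
            // so the data directory stays untouched and two processes never collide.
            File tmp = File.createTempFile("nc-", ".nc");
            tmp.deleteOnExit(); // best-effort cleanup when the JVM exits
            try (InputStream in = new GZIPInputStream(new FileInputStream(gzFile));
                 OutputStream out = new FileOutputStream(tmp)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            return NetcdfFile.open(tmp.getPath());
        }
    }

This sidesteps the gzip case, but it obviously doesn't help with the .gbx
files that the Grib1 reader creates on its own.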
Lak
p.s. I suggest the consistent use of java.io.File's createTempFile() ...
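For instance, something along these lines inside the library (hypothetical
code, just to illustrate the suggestion):

    File uncompressed = File.createTempFile("netcdf-", ".nc"); // unique file under java.io.tmpdir
    uncompressed.deleteOnExit(); // best-effort cleanup at JVM exit

Since createTempFile() atomically creates a fresh, uniquely named file on
each call, this would address (a) and (c) in one stroke, and deleteOnExit()
would at least mitigate (b).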