Performance problem with large files

I have been using netCDF for quite a while now, but this week I worked
for the first time with really big files: I am reading from one 1.6 GB
file and writing to another one. The data in the files is essentially
one single-precision float array of dimensions 8000 x 3 x 16000, the
last dimension being declared as "unlimited". I read and write
subarrays of shape 1 x 3 x 16000. My computer is a dual-processor
Pentium II machine at 450 MHz with 512 MB of RAM, running Linux.
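
For reference, here is a minimal C sketch of the access pattern
described above, using the netCDF-3 C API. The file name "input.nc",
the variable name "data", and the loop over the first dimension are
assumptions chosen only to match the shapes mentioned in this post,
not the actual code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, varid, status;
        size_t i;
        size_t start[3] = {0, 0, 0};
        size_t count[3] = {1, 3, 16000};     /* one 1 x 3 x 16000 subarray */
        float *buf = (float *) malloc(3 * 16000 * sizeof(float));

        status = nc_open("input.nc", NC_NOWRITE, &ncid);  /* assumed file name */
        if (status != NC_NOERR) {
            fprintf(stderr, "%s\n", nc_strerror(status));
            return 1;
        }
        nc_inq_varid(ncid, "data", &varid);  /* assumed variable name */

        for (i = 0; i < 8000; i++) {         /* step through the first dimension */
            start[0] = i;
            nc_get_vara_float(ncid, varid, start, count, buf);
            /* ... process the subarray, write it to the output file ... */
        }

        nc_close(ncid);
        free(buf);
        return 0;
    }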

My problem is that this is not only extremely slow (slower by a
factor of 2000 than the same operation on a file one hundredth the
size), but it also periodically blocks my computer: all programs that
want to do any disk access have to wait for about five seconds until
some operation finishes. And my office neighbour is complaining about
the never-ending noise from the disk.

Is there anything I can do to improve the performance of such
operations? The blocked disk access makes me think that the critical
operation happens in the Linux kernel, but I am not sure. I'd
appreciate any advice from people who are more experienced with huge
data files.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@xxxxxxxxxxxxxxx
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
