Re: slow write on large files

Hi,

Some file systems are much slower at rewriting data than at writing it in 
the first place, often because rewriting requires a read/modify/write 
cycle whenever less than a whole block is written (which was probably the 
case, given the description of your workload).

Other file systems are slow when blocks haven't already been allocated, 
so there's no single "right" way to get the best performance...

Regards,

Rob

On Fri, 11 Jun 2004, James Garnett wrote:

> Thanks to a nudge from Russ Rew, I've solved the problem.
> 
> I can now write files right up to the 2GB limit without problems.
> 
> Using nc_set_fill() with the parameter NC_NOFILL makes the problem go away.
> I can't claim to understand why, but it works.  This also has the added
> benefit of making initial file creation very fast, since the variable
> section of the file is not prefilled.
> 
> I would have thought that a file prefilled with dummy data could be
> written to faster than one that has not been prefilled, but apparently
> I'm wrong.  I don't know whether this behavior is specific to Windows,
> or whether it will show up on other platforms as well.

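For later readers of the archive: a minimal sketch of the fix James 
describes, using the netCDF-3 C API (the file, dimension, and variable 
names are placeholders):

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    /* Abort with a message on any netCDF error. */
    #define CHECK(e) do { int s_ = (e); if (s_ != NC_NOERR) { \
        fprintf(stderr, "netCDF: %s\n", nc_strerror(s_)); exit(1); } } while (0)

    int main(void)
    {
        int ncid, dimid, varid, old_mode;

        CHECK(nc_create("big.nc", NC_CLOBBER, &ncid));

        /* Turn off prefilling before defining variables, so the data
           section is not written once with fill values and then again
           with the real data. */
        CHECK(nc_set_fill(ncid, NC_NOFILL, &old_mode));

        CHECK(nc_def_dim(ncid, "n", 1000000, &dimid));
        CHECK(nc_def_var(ncid, "v", NC_DOUBLE, 1, &dimid, &varid));
        CHECK(nc_enddef(ncid));

        /* ... write the data with nc_put_vara_double() etc. ... */

        CHECK(nc_close(ncid));
        return 0;
    }

One caveat: with NC_NOFILL, any values that are never written contain 
undefined data rather than the fill value, so readers of such a file 
shouldn't assume unwritten regions hold _FillValue.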