I've been using NFS successfully for years... YMMV. We saw degrading
performance with NFS until migrating to NFSv3 over TCP connections.
Then we saw some improvement with NFSv4, but recent problems there have
sent us back to NFSv3, and right now we are suspicious that some
crippled code has made its way into the current crop of distros. A
colleague with more spare time than I has been looking into this, but
has not reported a finding yet. He HAS stopped using SuSE and gone to
CentOS (a seismic shift for him) because of this and the latest KDE
decisions in SuSE.
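For reference, pinning a mount to NFSv3 over TCP is just a mount-option
change; a sketch (the server name and paths below are hypothetical
placeholders):

```
# /etc/fstab entry forcing NFSv3 over TCP (server and mount point
# are hypothetical):
fileserver:/export/ldm  /data/ldm  nfs  vers=3,proto=tcp,hard  0 0
```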
Of late, though, we've started using Gluster (a parallel file system)
with rather stellar results for our limited LDM and web infrastructure.
We also use Gluster on a grand scale in our HPC operations. I've been
most pleased by its performance for LDM purposes.
gerry
Tyler Allison wrote:
I refuse to use NFS. It's even worse than ext3 for IO :)
-Tyler
On Tue, Oct 26, 2010 at 12:12 PM, Arthur A. Person <person@xxxxxxxxxxxxx> wrote:
Robert,
On Tue, 26 Oct 2010, Robert Mullenax wrote:
I personally think that most folks could benefit from spending less on
tons of RAM and instead going to SAS instead of SATA systems.
SATA is certainly better than old IDE, but if you have multiple feeds
coming in, decoding of data, and clients getting data via NFS,
then SAS is far better.
agreed... except it's cost-prohibitive for large arrays.
Art
-----Original Message-----
From: ldm-users-bounces@xxxxxxxxxxxxxxxx on behalf of Arthur A. Person
Sent: Tue 10/26/2010 8:31 AM
To: Jeff Lake - Admin
Cc: LDM Users
Subject: Re: [ldm-users] high memory
Jeff,
Depending on how heavily your system is loaded, you may be piling too
much I/O onto one array. The LDM queue itself can be pretty demanding of
the array, and then you have data coming off the disk out of the queue
again to be decoded (although some/most of that should be cached), and
then you're writing data back out to the array when filing the
raw/decoded data. It all comes down to the number of I/Os per second
your device can handle... if it's just two SATA disks in a mirror,
that's limited. The first thing I would try is moving your LDM queue to
your third drive and then monitoring I/Os per second to the drives
during peak use.
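The monitoring step above can be sketched with a small script; a rough
sample, assuming a Linux host (the device name and interval below are
hypothetical defaults, and `iostat -dx` from the sysstat package reports
the same figures with less work):

```shell
#!/bin/sh
# Rough I/Os-per-second sampler built on /proc/diskstats (Linux).
dev=${1:-sda}
interval=${2:-5}

# Fields in /proc/diskstats: 3 = device name, 4 = reads completed,
# 8 = writes completed (both are counters since boot).
count() { awk -v d="$dev" '$3 == d { print $4 + $8 }' /proc/diskstats; }

c1=$(count)
sleep "$interval"
c2=$(count)
echo "$dev: $(( (c2 - c1) / interval )) I/Os per second"
```

Run it against each member of the mirror during peak ingest; a single
7200 rpm SATA spindle typically tops out at roughly 100-150 random I/Os
per second, so two mirrored drives saturate quickly under combined
queue, decoder, and filing traffic.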
Art
On Tue, 26 Oct 2010, Jeff Lake - Admin wrote:
Do you have a separate system disk from data, or does everything run off
one mirror? How many disks are in your mirror? Is your LDM queue
running off the same array as the OS and data?
Art
-sh-3.2$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 480G 88G 392G 18% /home2
/dev/sda5 448G 72G 353G 17% /
/dev/sda2 2.0G 36M 1.9G 2% /tmp
/dev/sda1 99M 13M 81M 14% /boot
tmpfs
ldm, sql, and assorted scripts on /
ldm queue and data directory on /home2
I do have a 3rd drive that I haven't mounted yet
Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email: person@xxxxxxxxxxxxx, phone: 814-863-1563
_______________________________________________
ldm-users mailing list
ldm-users@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit:
http://www.unidata.ucar.edu/mailing_lists/
--
Gerry Creager -- gerry.creager@xxxxxxxx
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843