Re: [ldm-users] Best linux file system for data on large raid5array?

Robert,

On Wed, 10 Oct 2007, Robert Mullenax wrote:

Art, Pete, et al.

With the advent of OpenSolaris, Solaris and therefore ZFS is no longer proprietary (that doesn't include CDE and a few other things). I have zero interest (and zero capability) in building or tweaking my own OS, so I just install Solaris 10, but you can get the complete source code to Solaris, including ZFS, and roll your own.

Perhaps proprietary wasn't the right word. Solaris is now open source, which means we can look at the code, but it still comes with a license that controls its use. Solaris, including ZFS, is licensed under Sun's CDDL, which is considered incompatible with the GPL under which Linux is distributed. Therefore, ZFS can't be included in the Linux kernel without a complete rewrite, which would take a long time. So... we might just have to start running Solaris (again)...

ZFS is intended for use with JBOD; it is really meant to replace hardware RAID, though you can also run it on top of hardware RAID. Tests have shown that in general ZFS outperforms HW RAID. It is RAM-hungry and works best on 64-bit machines. Currently you cannot put the boot disk on ZFS; it's only for data disks. That capability is under testing right now and might be in the latest OpenSolaris releases.
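For anyone who wants to see what that looks like in practice, here's a minimal sketch of building a raidz pool (single-parity, roughly RAID5-like) from JBOD disks on Solaris 10. The pool name, filesystem name, and device names (c1t0d0 etc.) are just placeholders for whatever your controller actually presents:

   # Create a single-parity raidz pool from five bare disks
   # (device names are placeholders).
   zpool create datapool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

   # Carve out a filesystem; ZFS creates and mounts it in one step.
   zfs create datapool/archive
   zfs set mountpoint=/data datapool/archive

   # Check pool health and kick off an online scrub, which is the
   # consistency check that stands in for fsck on ZFS.
   zpool status datapool
   zpool scrub datapool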

Okay, that's good to know. I've read a couple of articles that show ZFS to be comparable to (or better than) existing software-RAID file systems, but there also appears to be a higher demand on the CPU to handle the extra housekeeping. I suspect it's worth the penalty to get a bullet-proof file system. I also read that Sun may be considering a cluster version of ZFS, but that would be a few years off.

I use ZFS here at CSBF on all our machines. I have stopped trying to be any sort of Solaris advocate, but if you really need this thing to be reliable, then Solaris+ZFS is the only way to go, IMHO. I frankly have never understood the business model of paying engineers to develop something (ZFS) only to give it away, but I suppose ZFS should be running in some form on Linux now, although I would not imagine it's ready for prime time.

There's a version ported to Linux via FUSE, which implements the file system driver in user space, but it's questionable whether that platform will ever be stable enough for operational use.

If it were me I'd just download and install Solaris 10 Update 4 for free,

That's exactly what we're going to try as soon as I get a free minute. Do you know if there are any device-support issues when installing Solaris? It used to be that Sun didn't support very many third-party controllers on their x86 platforms, but I'm wondering if that's changed.

Thanks for your comments...

                             Art


and buy a $240 per year support contract in case you ever have an issue; otherwise just download the free driver and security patches and have a go. Just my two cents... no more Solaris talk from me.

Robert Mullenax
CSBF Meteorology





-----Original Message-----
From: ldm-users-bounces@xxxxxxxxxxxxxxxx on behalf of Arthur A. Person
Sent: Mon 10/8/2007 12:10 PM
To: Pete Pokrandt
Cc: ldm-users@xxxxxxxxxxxxxxxx; support@xxxxxxxxxxxxxxxx
Subject: Re: [ldm-users] Best linux file system for data on large raid5array?

Pete,

I've read good things about Sun's ZFS... it doesn't ever have to be fsck'd,
which, to me, is the scariest thing about n-TB systems.  I'm getting ready
to try one of these in real life, so I can't say anything about it from
experience.  Its downside might be that it's proprietary (you have to run
Solaris) and it seems to want to run its own software RAID... I don't know
whether it would make sense to run it on top of a hardware RAID system or
not.  http://www.opensolaris.org/os/community/zfs.


                          Art


On Mon, 8 Oct 2007, Pete Pokrandt wrote:

All,

What filesystem type are people using for data storage on linux?

I have a 5+ TB archive that's sitting on a hardware RAID5, using
reiserfs (reiserfsprogs-3.6.19 on CentOS), and just recently I started
getting hard machine crashes when trying to write to that file system. I
did a reiserfsck --rebuild-tree on it (since a --check reported that I
needed to) and now about 1/5 of the data that was on it is either gone
or in the lost+found directory, named by inode number.
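For anyone unfamiliar with the tool, the repair sequence looks roughly like this (/dev/sdb1 is just a placeholder for the actual RAID device, and the file system has to be unmounted first); --rebuild-tree really is the last resort:

   umount /data
   reiserfsck --check /dev/sdb1         # read-only pass; reports what repair it thinks is needed
   reiserfsck --fix-fixable /dev/sdb1   # fixes minor damage without rebuilding the tree
   reiserfsck --rebuild-tree /dev/sdb1  # last resort: rebuilds the internal tree and can
                                        # dump orphaned files into lost+found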

This is the second time now that I've had a reiserfs file system go
kablooey on me.

I'm considering toasting the whole thing and rebuilding with a different
file system type, but I'm not sure what gives the best reliability and
performance for this kind of usage. It's a combination of lots of large
files (e.g. GRIB/GRIB2 model data files and GEMPAK versions of the same)
and also lots of smaller files, e.g. NEXRAD Level 3 products, with lots of
small files spread across a bunch of directories.

I've read that ext3 (the Linux default) is extremely stable but can be slow.
Other choices would be JFS, XFS, others?
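For concreteness, the basic setup I'd be looking at for a couple of the alternatives is roughly the following; the device name and the XFS stripe parameters are placeholders that would have to match the controller's actual stripe size and number of data disks:

   # Option 1: XFS, aligned to the hardware RAID stripe
   # (placeholder geometry: 64 KB stripe unit, 4 data disks).
   mkfs.xfs -d su=64k,sw=4 /dev/sdb1

   # Option 2: ext3, the conservative default.
   mkfs.ext3 /dev/sdb1

   # Either way, noatime cuts metadata writes on a read-mostly archive.
   mount -o noatime /dev/sdb1 /data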

Any suggestions or experiences would be appreciated.

Thanks!

Pete

--
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+
^ Pete Pokrandt                    V 1447  AOSS Bldg  1225 W Dayton St^
^ Systems Programmer               V Madison,         WI     53706    ^
^                                  V       poker@xxxxxxxxxxxx         ^
^ Dept of Atmos & Oceanic Sciences V (608) 262-3086 (Phone/voicemail) ^
^ University of Wisconsin-Madison  V (608) 262-0166 (Fax)             ^
+<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+

_______________________________________________
ldm-users mailing list
ldm-users@xxxxxxxxxxxxxxxx
For list information or to unsubscribe,  visit: 
http://www.unidata.ucar.edu/mailing_lists/



Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email:  person@xxxxxxxxxxxxx, phone:  814-863-1563

