
Now for something a bit...

Nunyas

Geeky...

The company I work for is a web hosting company, and their 'shared-hosting' environment is one of the most unique, and in theory the most powerful around (though I admit it has many pitfalls/shortcomings). It's a clustered environment, w/ delineated storage segments. Each storage segment is run by Solaris w/ ZFS for the file system. ZFS is a fancy schmancy file system that allows you to grow the size of the FS "dynamically" just by adding new disks when you need more space. It also has built-in support for redundancy. Though, I'm not making use of redundancy in my personal installation because the data stored in my ZFS array isn't that important.

ZFS is similar to RAID 0 in that it is a striped array of disk drives. The striping improves data throughput of the array. The downside is if you do not have redundancy enabled, you stand to lose a lot of data if 1 drive fails. I've had 1 drive fail to start in my external RAID enclosure; it resulted in the loss of about 4GB of data, which sounds bad, but when there was about 2TB of data in the array, I lost less than 1%. So, not really that bad in a system where the data isn't critical.
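To put the "lose a lot if 1 drive fails" point in rough numbers, here's a toy sketch of round-robin striping across a pool. It counts blocks, not files (a real stripe can corrupt far more files than this, since one file can span every disk), and all the names and sizes here are made up for illustration:

```python
# Toy model: stripe data blocks round-robin across N disks (RAID-0 style),
# then drop one disk and count what survives.
def stripe(blocks, n_disks):
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

blocks = list(range(600))          # 600 data blocks
disks = stripe(blocks, 6)          # spread over 6 drives, like the pool below
survivors = [b for i, d in enumerate(disks) if i != 3 for b in d]  # disk 3 dies
print(len(survivors) / len(blocks))  # roughly 5/6 of the blocks remain
```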

Anyway, I built my ZFS array to get a better understanding of the file system, and its use. I built it w/ tax returns and at the time NewEgg had a "special" on 2TB drives (~$70 each) and drives that I already had available. So, building it didn't set me back as much as one would normally expect.

One thing I've noticed with ZFS in a Linux environment: due to the way Sun licensed ZFS (the CDDL, which is incompatible with the GPL), the Linux kernel source cannot include any modules capable of reading this type of file system. So, the most common way around this for end users is to run ZFS on top of FUSE. In my experience, this tends to load up the OS/CPU during large file transfers into the array; I hit loads of 2+ on a quad-core system w/ 8GB of RAM installed during large file transfers. The other option is to use a 3rd party kernel module for ZFS, which requires recompiling a kernel...
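For context on what a "load of 2+" means, here's the quick check I'm talking about; this is a minimal Linux-only sketch (it assumes /proc/loadavg and nproc exist on the box), nothing ZFS-specific:

```shell
# Put a load average in context by comparing it to the core count.
# On a quad-core box, a 1-minute load of 2+ means half the cores are
# kept busy just shoveling data through FUSE in userspace.
load=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
echo "1-min load: $load across $cores cores"
```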

The following is ~not~ the full output of my 'df' command. I've narrowed the output to the point of interest :wink:
Code:
max@home:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
storage/files   9.9T  2.5T  7.4T   25% /storage/files

Code:
max@home:~$ sudo zpool status
  pool: storage
 state: ONLINE
 scrub: none requested
config:

        NAME                                                    STATE     READ WRITE CKSUM
        storage                                                 ONLINE       0     0     0
          disk/by-id/scsi-SATA_ST32000542AS_5XW13E49            ONLINE       0     0     0
          disk/by-id/scsi-SATA_ST32000542AS_5XW168HH            ONLINE       0     0     0
          disk/by-id/scsi-SATA_ST32000542AS_5XW19132            ONLINE       0     0     0
          disk/by-id/scsi-SATA_ST32000542AS_5XW1960W            ONLINE       0     0     0
          disk/by-id/scsi-SATA_WDC_WD20EADS-00_WD-WCAVY0657013  ONLINE       0     0     0
          disk/by-id/scsi-SATA_WDC_WD10EACS-00_WD-WCAU41040824  ONLINE       0     0     0

errors: No known data errors

Yeah, 6 drives in the array. It started out w/ just 5, but after I wiped the 1TB drive clean (the SATA_WDC_WD10EACS-00_WD-WCAU41040824 drive) I added it into the pool (array). ZFS allowed me to add the drive into the pool w/o remounting the "drive" and w/o having to reboot the computer.
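For the curious, the live expansion is a one-liner. This is a command sketch, not something runnable without the actual hardware (and root); the pool name and device id are the ones from the status output above:

```shell
# Grow the live pool by one disk -- no reformat, no remount, no reboot.
sudo zpool add storage disk/by-id/scsi-SATA_WDC_WD10EACS-00_WD-WCAU41040824

# Confirm the new vdev shows up and the extra space is visible right away.
sudo zpool status storage
df -h /storage/files
```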

It does look like quite a bit of overhead though. If you add up the space of the drives, I should have around 11TB of disk space, but the space available for use is 9.9TB. Most of that gap is just the drive makers' decimal "TB" versus the binary TiB that df reports; the rest goes to ZFS metadata and checksums (no parity here, since I'm not running redundancy). So, I can forgive it for the apparent shrinkage.
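Quick back-of-envelope check on that gap (the drive counts are from the pool above; the unit math is the only assumption):

```python
# 5 x 2 TB + 1 x 1 TB in drive-maker "marketing" terabytes (10^12 bytes).
raw_bytes = (5 * 2 + 1 * 1) * 10**12

# df -h reports binary units, so convert to TiB (2^40 bytes).
tib = raw_bytes / 2**40
print(f"{tib:.1f} TiB")  # about 10.0 TiB before ZFS metadata is subtracted
```

So of the "missing" ~1.1TB, roughly 1TB is the unit conversion alone, leaving only a sliver for actual filesystem overhead versus the observed 9.9T.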
 
I get the idea that you're learning the Solaris ZFS intricacies, just not sure an enterprise Linux kernel and, say, RAID6 wouldn't do the same things as well. With the added bene that using ext3 (or ext4) and distributed parity is a bit more "common"?

A few of the comparisons I found with ZFS vs. ext3/4 seem to suggest each has pros n' cons... no clear: "Ah-HA! This'n is MUCH better."

You mention a third party Linux kernel module. Why not compile a kernel with that knitted into it? Is there a real need to upgrade the kernel down the road (The old: "If it ain't broke..." thingie)? The FUSE scheme sounds like it sucks, IMO.

The hardware you have looks like it should handle about anything you throw at it, BTW. :wink:

...and you're havin' too much fun, dude. :jester:
 
DNK said:
Uncle! :crazy: :crazy: :crazyeyes:

I'll second that. I do however know who the first programmer was. :wall:
 
:iagree:
 
Sittin' here now, drinkin' Rob's "first cuppa". :wink:


All I gotta say is:

<span style="font-weight: bold"><span style="color: #CC0000"><span style="font-size: 23pt">RUTABAGA!!!</span></span></span>
 
DrEntropy said:
I get the idea that you're learning the Solaris ZFS intricacies, just not sure an enterprise Linux kernel and, say, RAID6 wouldn't do the same things as well. With the added bene that using ext3 (or ext4) and distributed parity is a bit more "common"?

A few of the comparisons I found with ZFS vs. ext3/4 seem to suggest each has pros n' cons... no clear: "Ah-HA! This'n is MUCH better."

You mention a third party Linux kernel module. Why not compile a kernel with that knitted into it? Is there a real need to upgrade the kernel down the road (The old: "If it ain't broke..." thingie)? The FUSE scheme sounds like it sucks, IMO.

The hardware you have looks like it should handle about anything you throw at it, BTW. :wink:

...and you're havin' too much fun, dude. :jester:
Well, I think the "big" thing that ZFS brings to the table is dynamically growing the storage pool as disk drives are brought in, and being able to add to the pool 1 disk drive at a time without having to rebuild the pool each time you add a drive. This gives me the ability to add a 5TB drive (not that I will) tomorrow without losing data and without having to rebuild the storage array. There's no need to format a blank drive when you add it to a ZFS pool.

It also gives me the ability to add drives of mismatched sizes. So, if I did add a 5TB drive to the pool, I would have 5 2TB drives, 1 1TB drive, and 1 5TB drive.

Granted, if I want full redundancy, I'd have to add pairs of equally sized drives.

I could be mistaken, but I do not think that RAID will allow you to build striped arrays using drives of unequal size, nor can you randomly add disks to the array once it's been built.

As a proof of concept to this ability with the ZFS pool, I started w/ data that was spread across 2 EXT3FS drives: 1 2TB and 1 1TB. I built the pool w/ 4 2TB drives initially, copied the data from the two ext3fs drives to the zfs pool (took nearly 8 hours to copy 2TB data :eeek: ), performed a diff between the two directory trees to ensure no data loss occurred, and then wiped the two ext3fs drives and added them to the ZFS pool. When I say "wiped", I really mean deleted the partition tables.
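The copy-then-verify part of that migration is worth sketching, since the diff step is what made it safe to wipe the sources. Paths here are throwaway stand-ins (the real ones were the two ext3 mounts and /storage/files):

```shell
# Sketch of the migration: copy, verify with diff, only then repurpose
# the source drive. mktemp dirs stand in for the real mount points.
src=$(mktemp -d)   # stands in for one of the old ext3 drives
dst=$(mktemp -d)   # stands in for the ZFS pool mount

echo "british cars" > "$src/note.txt"
cp -a "$src/." "$dst/"              # the real copy ran pool-ward for ~8 hours

# An empty diff across the two trees means no data loss -- safe to wipe src.
diff -r "$src" "$dst" && echo "trees match"
```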

And, yeah, a kernel module would be best. The old if-it-ain't-broke approach to Linux seems to be fading w/ the newer distributions though. It's this parting from the if-it-ain't-broke mantra that has me second-guessing Ubuntu. Even w/ their "LTS" releases, you're forced into upgrading the full OS every 2 years.

I've been semi-pining for Slackware lately, with its prepackaged binaries and its lack of a dependency-crazed package system. Building from source on a dependency-tracking distribution never goes quite right, IME.
 
:lol: You kids just go on an' play outside fer a minnit. Unca Rob an' me have somefin' to talk about. :jester:


nunyas said:
I could be mistaken, but I do not think that RAID will allow you to build striped arrays using drives of unequal size, nor can you randomly add disks to the array once it's been built.

yup. my bad. Didn't think on/about the size differences you have. Ooops.

I'll blame it on coffee deficiency.
 
DrEntropy said:
yup. my bad. Didn't think on/about the size differences you have. Ooops.

I'll blame it on coffee deficiency.
I thought you could build a RAID 0 with different sizes. The array just defaults to the smallest size drive. So if you had 2 2TB drives and 1 1TB drive, the array would be 3TB. Big waste of space.
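That "wasted space" point is just arithmetic, so here's the toy version with the drive sizes from this example, contrasted against a ZFS-style stripe of single-disk vdevs (round numbers only, not a filesystem claim):

```python
# Usable capacity with mixed drive sizes, in TB.
drives = [2, 2, 1]                 # 2x 2TB + 1x 1TB

# Classic RAID 0: every member gets truncated to the smallest drive.
raid0 = len(drives) * min(drives)  # 3 drives x 1TB = 3TB, wasting 2TB

# ZFS stripe of single-disk vdevs: the pool just sums the drives.
zfs = sum(drives)                  # 5TB, nothing wasted

print(raid0, zfs)
```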
 
AngliaGT said:
JPSmit said:


It's a "secret code" - kind of like when Adults
spell out words while around small Children.

- Doug

trust me they don't need to spell it out for me to miss it :crazyeyes:
 
this is the only thing I understood

The stripping improves any kernel :devilgrin:
 
that's something else entirely different O_O
 