
BLU Discuss list archive



[Discuss] btrfs



> From: discuss-bounces+blu=nedharvey.com at blu.org On Behalf Of Derek Atkins
> 
> Thank you for the detailed description.  Could you give (or point me to)
> a brief description of how ZFS's RAID differs from these configurations?

I think it's probably all in man zpool.  You can use devices directly (plain concatenation), in which case ZFS will in most workloads spread writes across all of the devices, giving stripe-like performance with concat-like flexibility.  Or you can use vdevs in place of raw devices.  A vdev can be an N-way mirror, in which case the used blocks are mirrored on all the devices in the mirror, or it can be raidzN.  Raidz1 has enough redundancy to survive a single device failure, and behind the scenes is implemented similarly to raid-5, but with a variable stripe width.  Raidz2 and raidz3 use a more complex error correction code (somebody said reed solomon, but I don't know precisely) and have enough redundancy to survive 2 or 3 concurrent device failures, respectively.
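To make that concrete, here is a rough sketch of the corresponding zpool commands (the pool name "tank" and the device names are hypothetical, not taken from the message above):

    # Raw devices only: ZFS stripes writes dynamically across all three
    zpool create tank sda sdb sdc

    # Two 2-way mirror vdevs; each block is mirrored within its vdev
    zpool create tank mirror sda sdb mirror sdc sdd

    # A single raidz2 vdev: survives any two concurrent device failures
    zpool create tank raidz2 sda sdb sdc sdd sde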

You can concatenate devices and vdevs, but it is only recommended to concatenate like-type vdevs: just a bunch of non-redundant disks, or just a bunch of mirrors, or just a bunch of raidzN vdevs.
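As a sketch of what like-type concatenation looks like in practice (again with hypothetical names), you extend a pool of mirrors with another mirror:

    # Add a second mirror vdev to an existing pool of mirrors
    zpool add tank mirror sde sdf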



