On 01/07/2013 03:12 PM, Rich Pieri wrote:
> Another fundamental difference is how the two handle mirrored data and
> metadata.
>
> ZFS's mirroring is built on conventional plexes. In a simple 4 disk
> array, pairs of physical devices are bonded as single virtual devices
> and then these vdevs are joined to form a larger pool. Anything written
> to one disk in a pair is written to the other disk in the pair.
> Striping is performed across vdevs within the pool.
>
> Btrfs mirroring is built entirely on file extents. In a simple 4 disk
> volume, all four disks are attached to a single volume with mirrored
> data and metadata. Any extent written to one device will be written to
> another device based on a balancing algorithm within the file system
> driver.
>
> This abstract approach lets you do something that seems weird at first:
> mirror sets with odd numbers of devices. To illustrate the idea,
> imagine a Btrfs volume with three 1TB disks. In raid0 (striped) you
> have 3TB capacity, and writing 1TB of data will take 1TB of that
> capacity leaving 2TB. In raid1 (mirrored) you have 1.5TB capacity, and
> writing 1TB of data will take 2TB of that capacity leaving 0.5TB. Every
> extent is replicated on 2 physical devices so it is still resilient to
> a disk failure and can still self-repair corrupted data.
>
> Btrfs raid10 requires at least 4 devices.

In my mind the important issue is resistance to drive failure. What
happens in both ZFS and Btrfs in the case of a power failure?

--
Jerry Feldman <gaf at blu.org>
Boston Linux and Unix
PGP key id:3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66 C0AF 7CEA 30FC 3BC1 EB90
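As a quick sanity check on the numbers in Rich's 3 x 1TB example, here is a
rough Python sketch of the capacity arithmetic. It assumes equal-sized devices
and ideal extent balancing; the function names are made up for illustration
and are not part of any Btrfs or ZFS tooling.

    # Rough model of the 3 x 1TB example above: raid0 stores one copy of
    # each extent, raid1 stores two copies on different devices.

    def copies(profile):
        # Number of copies of each extent for a given data profile.
        return {"raid0": 1, "raid1": 2}[profile]

    def usable_tb(device_sizes_tb, profile):
        # Approximate usable capacity: raw space divided by the copy count.
        return sum(device_sizes_tb) / copies(profile)

    def remaining_tb(device_sizes_tb, profile, written_tb):
        # Usable space left after writing written_tb of file data.
        return usable_tb(device_sizes_tb, profile) - written_tb

    disks = [1.0, 1.0, 1.0]  # three 1TB devices

    print(usable_tb(disks, "raid0"), remaining_tb(disks, "raid0", 1.0))  # 3.0 2.0
    print(usable_tb(disks, "raid1"), remaining_tb(disks, "raid1", 1.0))  # 1.5 0.5

In the raid1 case, writing 1TB of data consumes 2TB of raw space (two copies
of every extent), so only 0.5TB of file data can still be written even though
1TB of raw space remains. Because "raid1" here means "two copies on two
different devices" rather than "two paired disks", the same arithmetic works
for any number of devices, odd or even.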