On Wed, 02 Jan 2013 03:01:23 -0500 Tom Metro <tmetro+blu at gmail.com> wrote:
> You started out using ZFS on that server, right?

Indeed, I did.

> What were your reasons for switching to Btrfs and how have your
> comparative experiences been?

The primary reason is performance. ZFS with FUSE has a lot of CPU
overhead, and the cores in the N40L aren't beefy monsters to begin
with. The secondary reason is that Btrfs is going to become the default
file system for Linux in the foreseeable future (it already is for some
distributions), and I figured it would be a good opportunity to try it
out.

I cannot honestly compare performance since I was running RAID-Z with
ZFS and mirrored data+metadata with Btrfs. I have a test computer at
work that I will use at some point to make valid comparisons.

Btrfs has all of the critical features of ZFS. It lacks some of the
others, like an independent ZIL, send/receive, and data+parity
(RAID-5). Compression is still all or nothing within a Btrfs pool. And
there are some rough spots with the non-Btrfs user space tools (df
shows raw storage rather than usable storage). These are coming, but
development has been slow, it seems.

Btrfs is better in some ways. Devices can be removed and volumes can be
shrunk, along with a few other features that ZFS users have wanted
forever but never got from Sun. This isn't useful to me for my home
server, but it is very useful for workgroup and enterprise projects.

Btrfs handles removable devices just fine: mount and unmount like any
other file system. ZFS... crashes if you don't properly export the pool
first. Kind of a pain, really.

I've not ruled out migrating back to ZFS. I previously mentioned using
Debian kFreeBSD. That should be the best of both worlds: Debian user
space with native ZFS in the BSD kernel.

-- 
Rich P.
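
For concreteness, the mirrored data+metadata layout and the
device-removal and shrink operations mentioned above map onto the stock
btrfs tools roughly as follows. This is a minimal sketch; the device
names and mount point are illustrative, not taken from Rich's setup:

    # mirror both data and metadata across two disks
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/pool

    # remove a device from the pool; Btrfs migrates its data off first
    btrfs device delete /dev/sdc /mnt/pool

    # shrink the volume by 10 GiB
    btrfs filesystem resize -10g /mnt/pool

    # Btrfs's own space accounting, since plain df reports raw rather
    # than usable capacity on a mirrored pool
    btrfs filesystem df /mnt/pool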
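
The removable-device difference comes down to something like the
following, again with made-up device and pool names:

    # Btrfs: treat the disk like any other file system
    mount /dev/sdd1 /mnt/backup
    umount /mnt/backup

    # ZFS: the pool should be exported before the disk is detached,
    # then imported again on the next attach
    zpool export backup
    zpool import backup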