
BLU Discuss list archive


[Discuss] ZFS vs. Btrfs

> From: at [mailto:discuss- at] On Behalf Of Tom Metro
> Rich Pieri wrote:
> > The center of this star configuration is a Debian server: the HP N40L
> > discussed earlier this year. Data is on a Btrfs volume with mirrored
> > data and metadata.
> You started out using ZFS on that server, right?
> What were your reasons for switching to Btrfs and how have your
> comparative experiences been?

Obviously, I can't answer for Rich.  But I have deployed both ZFS and BTRFS, and I can comment on my experiences:

My latest BTRFS experience was about 9 months ago, and I concluded by reformatting with ext4.  I believe btrfs will eclipse both ext4 and zfs someday, so I'm due to revisit now that a significant chunk of time has passed.  I reformatted to ext4 for two reasons: lack of stability, and lack of features.

The first thing I noticed was the lack of features.  Snapshots are writable, and there was no way to change that.  Someday (perhaps already by now), if you want a snapshot to be read-only, you would set a quota of zero on the snapshot volume, but at the time quotas were not yet implemented.  I'm talking about the latest ubuntu distribution at the time, which I suppose must have been 11.10.

While I was in there looking ... I forget where I looked to see this ... just like the feature flags on your CPU that indicate whether it supports x86_64 and VT and so forth, there are feature flags on the filesystem that indicate whether it supports quotas and the like.  I had to look at that list to determine that btrfs didn't support quotas, and while I was there, the list of things ext4 supported that btrfs didn't was kind of astounding.  But I forget the details now.  Maybe somebody else here knows how to look that up again.
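For anyone revisiting this today, here's roughly where I'd look.  A sketch only: the paths and device names are examples, and the read-only snapshot flag is something newer btrfs-progs grew after my experiment, so check your version.

```shell
# Read-only snapshots: newer btrfs-progs can create one directly with -r
# (no quota workaround needed); paths here are examples.
btrfs subvolume snapshot -r /srv/data /srv/data/.snapshots/data-ro

# On recent kernels an existing snapshot can also be flipped read-only:
btrfs property set /srv/data/.snapshots/data-ro ro true

# Feature flags: what the running kernel's btrfs supports ...
ls /sys/fs/btrfs/features

# ... versus what an ext4 filesystem was created with:
tune2fs -l /dev/sda1 | grep -i features
```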

Some of the most important zfs-vs-btrfs features are implemented in btrfs and solid:  checksumming, filesystem expansion, write aggregation, scrub.  Some are not:  there is no equivalent to zfs's SSD cache/log devices (L2ARC/ZIL), and no equivalent to "sync=disabled."  But a few features that zfs users have been begging for, for years, are implemented in btrfs:  the ability to remove devices, shrink pools/volumes, and fsck.  Not to mention forward compatibility.
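To make the comparison concrete, here are the two scrubs side by side, plus the shrink/remove operations zfs can't do.  Pool, mountpoint, and device names are examples, not anything from my setup.

```shell
# Scrub (online checksum verification) on both filesystems:
btrfs scrub start /srv/data
btrfs scrub status /srv/data
zpool scrub tank
zpool status tank

# What zfs lacks: removing a device and shrinking in place.
btrfs device delete /dev/sdc /srv/data   # migrates data off sdc first
btrfs filesystem resize -50g /srv/data   # shrink the filesystem by 50 GiB
```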

btrfs is under active development.  zfs development is ... well ... at a disadvantage.

I didn't really mind most of those missing features; the only one I really cared about was an equivalent to "zfs send" / "btrfs send."  At the time it was not yet implemented, but it was being worked on.  Today, I believe it is implemented, but I haven't tried it yet, and I'm not sure what its limitations are.
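For reference, the two "send" workflows look like this.  A sketch with made-up pool/path/host names; note that btrfs send requires a read-only snapshot as its source.

```shell
# btrfs replication to another machine (example names throughout):
btrfs subvolume snapshot -r /srv/data /srv/data/.snap-1
btrfs send /srv/data/.snap-1 | ssh backuphost btrfs receive /backup

# The zfs equivalent; incremental sends add -i with an older snapshot:
zfs snapshot tank/data@snap-1
zfs send tank/data@snap-1 | ssh backuphost zfs receive backup/data
```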

The fatal problem that made me reformat ext4 was stability.  It goes like this:

The server I built ran the latest ubuntu, for the purpose of running avahi and netatalk as an apple timemachine server.  It was a new experimental setup replacing our apple server, so I didn't have a solid baseline to compare against.  The new server was unstable.  I typically had to reboot it on a weekly basis, because apple users said time machine had started complaining at them.  I thought maybe it was ubuntu-vs-dell NIC driver compatibility, or avahi immaturity, or something like that.  I tried a whole bunch of things, and eventually installed a cron job to reboot the server periodically without my intervention.  (The cron reboot greatly reduced the incidence of user complaints, but it was very annoying to me as a sysadmin.)
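The cron crutch is one line in root's crontab; the schedule shown here (4 AM every Sunday) is an example, not the one I actually used.

```shell
# crontab -e as root; reboot weekly, Sunday at 04:00:
# min hour dom month dow  command
0 4 * * 0 /sbin/shutdown -r now "scheduled weekly reboot"
```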

One day, the server crashed, and I got on the console to see if I could learn anything.  I saw some strange behavior that suggested a problem with the filesystem or storage.  I had already done everything I could think of to stabilize the storage, so I decided to simply take the server out of production for a day or two: migrate all the data to external storage, reformat with ext4, and bring the data back onto local storage.  I didn't expect it to do any good, but I needed to eliminate the variable.  To my surprise, after doing this there was never another crash; after a few weeks I removed the cron reboot, and the server has been in stable production ever since.
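The migrate-out / reformat / migrate-back sequence is roughly the following, sketched with example devices and mountpoints (not my actual layout); -aHAX preserves hardlinks, ACLs, and xattrs, which matters for netatalk's metadata.

```shell
# Copy everything to external storage:
rsync -aHAX --numeric-ids /srv/data/ /mnt/external/

# Reformat the old btrfs device as ext4 and copy it all back:
umount /srv/data
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /srv/data
rsync -aHAX --numeric-ids /mnt/external/ /srv/data/
```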

Like I said, that whole thing started about 9 months ago, and the reformat was about 6-7 months ago.

Now there are two more recent versions of ubuntu, not to mention fedora and whatever you might compile yourself if you feel so inclined.  So, like I said, I'm due to revisit; but over the years btrfs development has been slow to catch up to zfs, and I've been through half a dozen cycles of this experiment already.

BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.
