Fun with Btrfs on Ubuntu

tl;dr I really like btrfs. Scroll to the bottom to see what I learned.

A while back I needed some extra space at home for backups and media storage. I wanted a tidy solution that would grow with my requirements, so I didn’t want a bunch of hard drives hanging off USB ports. I ended up buying a first-generation Drobo, threw a couple of disks in it and attached it via USB. I had it for a good couple of years, but I was never entirely happy with it.

For various reasons (poor native Linux support, slow device, maintenance fee required for firmware updates) I ended up getting rid of the Drobo and looked for another solution.

My requirements were pretty simple and reflected what I already had with the Drobo. I don’t think these were “moon on a stick” requirements.

  • Network attached storage device with ability to grow and shrink capacity easily / dynamically
  • Not physically huge or ugly as it was going to be on my desk
  • Not a power hungry device
  • Easy to administer and setup
  • Bonus: Ability to run additional software on the device (e.g. DNS server, DynDNS daemon, media server)
  • Bonus: Runs Ubuntu

Around the same time, the low-cost, small form-factor HP Microserver became available, which seemed to fit all of my requirements, so I bought one. I transferred the disks from the Drobo to the HP Microserver and formatted them. Initially I tried using Linux MD RAID, but soon figured this couldn’t easily grow and shrink.

My friend Hugo had been waxing lyrical about Btrfs for some time and I’d been meaning to try it. So I thought now would be a good time.

Hugo summed btrfs up for me, tweet-sized.

“14:43 <@darksatanic> btrfs has checksums, compression, online resize, native RAID, online disk mgmt, error recovery, subvolumes, snapshots and super CoW powers.”

Btrfs has some of the features I’d come to enjoy on the Drobo, the ability to easily grow and shrink the filesystem being the most important to me. I’d heard horror stories from others saying Btrfs was unreliable, and knew it had yet to have a stable release. However, as this was a home server containing data I could obtain elsewhere, I figured it would be fun to try it. If everyone avoids it because it’s unstable, nobody will ever find the bugs that need fixing to make it stable 😉

So on April 21st 2012 I went for it, and it was good.

root@homeserver:~# btrfs fi df /srv
Data, RAID1: total=1.08TB, used=1.07TB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=164.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=2.75GB, used=1.77GB
Metadata: total=8.00MB, used=0.00

root@homeserver:~# btrfs fi show
Label: none uuid: 981b32ab-95ad-4873-8f14-820748a18ac7
Total devices 4 FS bytes used 1.08TB
devid 4 size 1.82TB used 0.00 path /dev/sda
devid 1 size 1.82TB used 736.27GB path /dev/sdc
devid 2 size 1.82TB used 736.76GB path /dev/sdd
devid 3 size 1.82TB used 736.51GB path /dev/sde

Setting up the btrfs volume with 3 disks was a cinch; adding a fourth (when it arrived) was also stupidly easy. There’s a nice wiki page detailing how to use btrfs with multiple devices, which I used to get going.

It was as simple as using fdisk to list out the device names and mkfs.btrfs to create the RAID1-like volume:-

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

I added a line to my /etc/fstab and mounted it :-

/dev/sdc /srv btrfs noatime,nodiratime 0 0

Job done!

It ran pretty smoothly over the next few months.

I was running low on space in September 2012, so I bought a Sans Digital TowerRAID TR8M+B 8-bay eSATA array, attached it to my Microserver via the supplied PCIe eSATA card, and started adding disks to it.

Sometimes it would take ages to boot up, which was apparently because btrfsck was running on boot unnecessarily. I moved btrfsck out of the way as a brute-force way to stop it running, and all was fine.

Soon after that I managed to discover that eSATA cables are easy to knock out of the back of the MicroServer, by doing exactly that. Yes, I am a clumsy oaf. All was fine: I had no data loss, even though 8 of the 12 disks attached to the server had disappeared. btrfs did the Right Thing when I restarted the server and array – after I moved both from my desk to somewhere I’m less likely to flail my arms around uncontrollably.

In around November 2012 Hugo reminded me that I should run a ‘scrub’ on my system now and then. A scrub is an online filesystem check: it reads all the data and metadata on the filesystem, and uses the checksums and the duplicate copies from RAID storage to identify and repair any corrupt data. I checked, and oops, I’d not run one for over 6 months.

$ sudo btrfs scrub status /srv
scrub started at Mon Apr 30 18:19:15 2012 and finished after 19652 seconds
total bytes scrubbed: 6.42TB with 0 errors

Time to scrub!

$ sudo btrfs scrub start /srv
scrub started on /srv, fsid 981b32ab-95ad-4873-8f14-820748a18ac7 (pid=30510)

$ sudo btrfs scrub status /srv
scrub status for 981b32ab-95ad-4873-8f14-820748a18ac7
scrub started at Fri Nov 16 09:35:53 2012, running for 220 seconds
total bytes scrubbed: 68.70GB with 0 errors
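If, like me, you only remember to scrub when someone pokes you, it helps to pull the error count out of the status output mechanically. A small sketch, assuming the summary-line format shown above (the scrub_error_count function name is mine, not part of btrfs-progs):

```shell
# Extract the error count from `btrfs scrub status` output on stdin.
# Assumes a summary line like:
#   total bytes scrubbed: 68.70GB with 0 errors
scrub_error_count() {
    sed -n 's/.*total bytes scrubbed: .* with \([0-9][0-9]*\) errors.*/\1/p'
}
```

Something like `sudo btrfs scrub status /srv | scrub_error_count` then gives you a number a cron wrapper can check and email about if it’s non-zero.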

More recently one of the disks in my MicroServer went offline. I thought it might be broken but it turns out it was just poorly seated in the server. I unmounted the volume, pulled all the disks out (to label them) and re-seated them all, re-mounted and ran another scrub. All fine, no data loss.

About a month back my server became unresponsive. I rebooted and it was fine; later, when running a scrub, it became unresponsive again. I was all set to put it down to hardware failure. Then I mentioned it in passing to Hugo and he remembered a btrfs bug which caused the scrub operation to eat all the memory. That was it! Conveniently, the fix made it into Linux kernel 3.11-rc6. I was ready at this point to start building my own kernel with the patch applied, but that wasn’t necessary because the lovely Ubuntu Kernel team had already done the build for me, so I grabbed those debs, installed them and rebooted. Now my scrubs don’t run out of RAM. Huzzah!

Lessons learned:-

  • btrfs is stupidly easy to setup
  • Run a modern kernel (at the time of writing I’m using 3.11.0-031100rc6-generic on Ubuntu 12.04)
  • Move the fsck tool out of the way
  • Use the latest btrfs tools (I’m using 0.19+20120328-4ubuntu1 on Ubuntu Server 12.04)
  • Expanding the volume online is easy when you buy more disks (btrfs device add /dev/sdz /srv)
  • It’s really easy to drop a disk out of the volume online (btrfs device delete /dev/sdz /srv)
  • Scrub regularly (I use this in my crontab: 0 1 15 * * /sbin/btrfs scrub start /srv)
  • The people in #btrfs on freenode are very friendly. (especially dark{thing|satanic|ling} [Hugo])
  • If a disk appears to have died, or like an oafish klutz I knock the eSATA cable out, DON’T PANIC! but check the wiki and ask for help in #btrfs
  • At any point it could all go horribly wrong and lose everything. This hasn’t happened to me yet, so meh.
  • I love btrfs.
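One detail the list above glosses over: after adding a device to a RAID-1 volume, existing data stays where it was until you rebalance. A sketch of the grow sequence, assuming a new disk at /dev/sdz (needs root, and the balance can take hours on a big volume):

```shell
# Add the new disk to the mounted volume (quick; no data moves yet).
btrfs device add /dev/sdz /srv

# Rebalance so existing RAID-1 chunks spread across all devices.
btrfs filesystem balance /srv
```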

23 thoughts on “Fun with Btrfs on Ubuntu”

  1. I love btrfs too, but it is not without its share of problems. I have used it as the root fs on my dev machine for quite a while. It has corrupted itself two or three times in the last year. Each time it has been so bad that btrfsck has not been able to fix it. Remember to keep proper backups and especially remember that raid is not a backup.

    1. Keeping snapshots tends to help; if you corrupt your volume, chances are your most recent snapshot is still perfectly fine, and switching to it is trivial. Snapshots are also cheap and with some scripts, easy to manage.

      Even without snapshots there’s quite a bit you can do. I once recovered from some weird corruption (mounting a btrfs filesystem on an older kernel is not a good idea) by running a scrub and finding out which files were broken, and then manually rsyncing everything over to a fresh subvolume. Luckily no important files were lost.

      1. Nope, snapshots don’t protect against corruption in the crc tree or the extent tree, which I encountered a few kernels ago.

        Time to update the old saw: RAID isn’t a backup, and neither are snapshots.

  2. It is interesting that you say to use the latest btrfs-progs, but then mention that you are using ones that are over a year old! You should definitely be using the tools from Chris Mason’s git tree. That branch is kept as a stable/usable snapshot, while real development goes into other branches (most notably, Josef’s branch tends to have the latest stuff).

    @Jussi, typically the recommended path of recovery is not to dive straight into btrfsck. Recently on the linux-btrfs mailing list, Hugo Mills posted what he considered to be the current correct path to recovery. Btrfsck was towards the bottom of this list. I guess eventually there will be the integration of all the recovery tools into one, but for now they are kept somewhat separate for the purposes of their development.

    BTW, you can actually already use ‘btrfs check’ in place of btrfsck.

  3. Before I forget, noatime always implies nodiratime. And btrfs is awesome. Even big projects like Ceph encourage its usage to some extent.

  4. When do you think this can become the default, or recommended in some way, so more people can start reporting bugs?

    Looks like it’s taking forever ;/

    >At any point it could all go horribly wrong and lose everything. This hasn’t happened to me yet, so meh.

    Isn’t this why it has a snapshot/backup feature, so your data can be more secure?

    On the server it would be handy, but on the desktop too, since polls suggest about 20 to 30% of users encounter upgrade issues (from moderate to having to reinstall) and other problems it would be great to be able to revert. 🙂

    1. Snapshots aren’t backups — they’re still sharing most of the data and metadata with the thing they’re a snapshot of. So if something goes wrong, you’ve lost the original and the snapshot.

      Where snapshots are useful for backups is that they’re atomic, so you can have a consistent state of the FS to make your backup — make a snapshot, rsync that to somewhere else. You can also use send/receive to ship the differences between snapshots to another system efficiently (much more efficiently than rsync).
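A minimal sketch of the approach described here, assuming /srv is the btrfs volume and /backup is a second btrfs filesystem (send/receive needs a read-only snapshot, a reasonably recent kernel, and root):

```shell
# Take an atomic, read-only snapshot to get a consistent state.
snap=/srv/snap-$(date +%F)
btrfs subvolume snapshot -r /srv "$snap"

# Either rsync the stable snapshot to any other storage...
rsync -a "$snap"/ /backup/srv/

# ...or stream it to another btrfs filesystem with send/receive.
btrfs send "$snap" | btrfs receive /backup
```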

        1. You can keep backups elsewhere. Snapshots are an in-filesystem feature, so you can’t directly make a snapshot to a different device — you’d make a snapshot on the current FS to have a stable tree to work from, then copy that elsewhere (e.g. with rsync or btrfs send/receive).

    1. It’s worth pointing out that btrfs’s RAID-1 is always two copies of data, not more. So a traditional RAID-1 of 4 × 2TB devices would give you 2TB of usable space with four copies. btrfs would give you 4TB of usable space with two copies only. (But it can work with different-sized devices, and odd numbers of devices as well).
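The arithmetic generalises: with exactly two copies of every chunk, usable space is bounded both by half the total capacity and by how much the smaller devices can pair against the largest one. A back-of-the-envelope sketch (my own helper, not an exact simulation of the allocator), with sizes in GB:

```shell
# Approximate usable capacity of a btrfs RAID-1 array.
# Usable = min(total / 2, total - largest device).
raid1_usable() {
    total=0; largest=0
    for d in "$@"; do
        total=$((total + d))
        if [ "$d" -gt "$largest" ]; then largest=$d; fi
    done
    half=$((total / 2)); pairable=$((total - largest))
    if [ "$half" -lt "$pairable" ]; then echo "$half"; else echo "$pairable"; fi
}

raid1_usable 2000 2000 2000 2000   # four 2TB disks -> prints 4000 (4TB usable)
```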

      1. please help me.

        #umount /dev/sdb1
        # fsck -f /dev/sdb1
        fsck from util-linux 2.20.1
        e2fsck 1.42.8 (20-Jun-2013)
        data500: 144653/30531584 files (0.9% non-contiguous), 102659367/122096384 blocks

        # btrfs-convert /dev/sdb1
        No valid Btrfs found on /dev/sdb1
        unable to open ctree
        conversion aborted.

        Ubuntu 13.10
        btrfs-tools 0.19+20130705-1
