7

I have a Btrfs RAID 6 array, and I currently have enough free space to rebuild it in order to avoid the data loss bug that currently affects the filesystem with this RAID level.

My idea is to rebuild the array using mdadm and format the md device with Btrfs. But I have a few questions:

  • is btrfs over mdadm RAID 6 reliable?
  • will the bitrot protection and snapshots keep working?
  • are there some drawbacks to this setup?
  • are there better options?

3 Answers

6

In 2016, Btrfs RAID-6 should not be used.

You can see on the Btrfs status page that RAID56 is considered unstable. The write hole still exists, and the parity is not checksummed. Scrubbing will verify data but not repair any data degradation.

To answer your questions:

is btrfs over mdadm raid6 reliable?

You want one Btrfs volume to sit on top of md RAID-6, meaning that Btrfs will be unaware of the underlying RAID. This is as reliable as a single-device Btrfs filesystem, which by default keeps a duplicated copy of your metadata but only a single copy of your data.
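As a sketch of that layout (device names are hypothetical; adjust them to your disks):

```shell
# Build a 6-disk md RAID 6 array (hypothetical devices sdb..sdg)
mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Format the md device with Btrfs; on a single device the defaults
# are duplicated metadata (-m dup) and a single copy of data (-d single)
mkfs.btrfs -m dup -d single /dev/md0
```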

will the bitrot protection and snapshots keep working?

Using your proposed setup, Btrfs will detect rotted bits, but it can't fix them because there's only one copy of your data. If something were to happen to your md RAID-6 array, you'd be looking at data loss.

Snapshots would still work, but scrubbing would likewise detect corruption without being able to repair it.
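For completeness, the commands themselves behave as on any Btrfs filesystem (the mount point here is hypothetical):

```shell
# Scrub verifies checksums; with a single copy of the data it can
# only report corruption, not repair it
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data

# Snapshots keep working as usual
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/backup-2016-10-05
```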

are there some drawbacks to this setup?

You'd be trusting md to keep your data intact, but md doesn't know anything about what your data means; that's Btrfs's job. Btrfs can't repair an inconsistency that happens at md's level.

Here's a personal example of what can happen when something goes wrong with md RAID-6.

are there better options?

If you're looking for an alternative to Btrfs RAID-6, consider ZFS RAID-Z2, which offers a reliable RAID-6-equivalent implementation that checksums and repairs your data and also supports snapshots.

As for the drawbacks of ZFS RAID-Z2: a pool can't be shrunk or reshaped, and growing it should only be done by replacing one disk at a time with a larger one, which can take a very long time.
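A sketch of the ZFS alternative (pool and device names are made up):

```shell
# Create a RAID-Z2 pool from six disks (tolerates two disk failures)
zpool create tank raidz2 sdb sdc sdd sde sdf sdg

# Growing the pool later: replace each disk with a larger one,
# one at a time, waiting for each resilver to finish
zpool replace tank sdb sdh
zpool status tank          # wait until resilvering completes

# After all disks are replaced, let the pool use the new space
zpool set autoexpand=on tank
```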

See also: Overview of ZFS

  • I know btrfs raid5/6 should not be used; that's exactly why I want to replace it. For ZFS, from what I see in the documentation, it's only available in sid? What about btrfs over mdadm? Commented Oct 5, 2016 at 13:44
  • @BenjaminDubois: I've updated my answer to reflect your intentions. I guess the ZFS on Linux documentation is still being updated for Debian. ZFS on Linux is supported on Debian jessie.
    – Deltik
    Commented Oct 5, 2016 at 14:05
  • Thanks for this detailed answer. I will have a look at this ZFS tutorial. Now I need to rethink the structure of my array, because it will grow in the near future. Commented Oct 5, 2016 at 14:27
1

Well, a Btrfs raid1 on top of two mdadm RAID 6 arrays can be a solution. Split every HDD into two equal partitions, group one partition per HDD into two sets of six, build an mdadm RAID 6 volume from each set, and then make a Btrfs raid1 filesystem from those two RAID 6 volumes.
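A sketch of that layout with six disks, sdb through sdg (hypothetical names; partitioning shown for one disk only):

```shell
# Split each disk into two equal halves (repeat for sdc..sdg)
parted /dev/sdb --script mklabel gpt \
    mkpart primary 0% 50% mkpart primary 50% 100%

# Two md RAID 6 arrays, one built from each half of every disk
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[b-g]2

# Btrfs raid1 across the two arrays, so a scrub can repair
# a bad copy from the mirror
mkfs.btrfs -m raid1 -d raid1 /dev/md0 /dev/md1
```

Note that this halves your usable capacity relative to a single RAID 6, since Btrfs keeps two full copies of everything.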

0

Synology uses Btrfs, and looking at their knowledge base, it appears they use mdadm for RAID. As Deltik mentions in their answer, Btrfs wouldn't know about the RAID setup and wouldn't be able to repair corruption at that level, and I encourage people to follow the links in the comments about what mdadm is and what it isn't, but I believe you should be able to use mdadm RAID 5 or 6 to rebuild after a drive failure.
