
With current (3.10+) Linux kernels, is it supported to make a btrfs RAID 0 of SSDs the root filesystem and boot from it? This breaks down into several questions:

  • Does RAID 0 on SSDs support TRIM/discard? (According to the docs it is definitely supported for a single drive, which I can check per device as sketched below this list, but it is unclear for RAID.)
  • Does btrfs block alignment work properly in a RAID 0?
  • Will grub/grub2 be able to boot from a btrfs RAID 0? (It seems like grub2 only, but I would like confirmation.)
  • More generally, how stable and well-supported is this configuration right now? Does anyone use it?
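
For reference, this is roughly how I check discard support on the individual drives today (device names are only examples); what I cannot tell is whether the same holds once btrfs stripes across them:

    # Per-device discard support: non-zero DISC-GRAN / DISC-MAX means the drive advertises TRIM
    lsblk --discard /dev/sda /dev/sdb

    # On a mounted filesystem, a verbose manual trim shows whether discards actually go through
    fstrim -v /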

Related: https://serverfault.com/questions/307397/verify-trim-support-with-btrfs-on-ssd

  • When you say RAID, do you mean the RAID capabilities of BTRFS, or soft RAID (mdadm)?
    – dawud
    Commented Sep 19, 2013 at 6:41
  • @dawud The btrfs RAID, of course. Having a RAID-aware fs definitely allows more features than a RAID-agnostic one. Additionally, I'm pretty sure mdadm doesn't support TRIM over RAID 0.
    – Andrew Mao
    Commented Sep 19, 2013 at 6:43

2 Answers


Discard works fine, block alignment is nothing special, and btrfs multi-device support (profiles: raid0, raid1, raid10, dup) has been there since the beginning. Be prepared to use a recent kernel nonetheless, because the Btrfs developers don't do a lot of stable backports.
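
For example (a minimal sketch; /dev/sda and /dev/sdb stand in for the two SSDs), a striped multi-device filesystem with discard enabled at mount time looks like this:

    # Stripe data across both SSDs, mirror the metadata
    mkfs.btrfs -d raid0 -m raid1 /dev/sda /dev/sdb

    # Let the kernel register all member devices, then mount any one of them
    btrfs device scan
    mount -o ssd,discard /dev/sda /mnt

    # Alternative to the discard mount option: batch-trim from cron
    fstrim -v /mnt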

You should use a separate /boot: Grub supports most Btrfs features (including compression and the above RAID levels), but Btrfs recently added skinny extents, which you will want to enable and which Grub may not handle yet.
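
If you do that, something along these lines (a sketch; the partition names are assumptions) keeps /boot readable by Grub while the root filesystem gets the newer feature:

    # List the feature flags this mkfs.btrfs build knows about
    mkfs.btrfs -O list-all

    # /boot without skinny extents, so Grub can always read it (assuming /dev/sda1 is /boot)
    mkfs.btrfs -O ^skinny-metadata -L boot /dev/sda1

    # Enable skinny extents on the (unmounted) root filesystem, which Grub never has to read
    btrfstune -x /dev/sda2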

  • I'm specifically asking about all of the above at the same time, rather than each separately. Does discard work properly when using RAID 0?
    – Andrew Mao
    Commented Sep 20, 2013 at 14:32

Actually I am using Grub2 on 2x NVMe in GPT mode with this setup (without EFI at all):

  • NVMe0 & NVMe1: MBR boot code holding my main Grub2 (whose grub.cfg I typed by hand), which lets me select what to boot.
  • NVMe0 & NVMe1: Partition 1 (BIOS boot, "BIOS GRUB") for the Grub2 stage 2 / core image.
  • NVMe0 & NVMe1: Partition 2 (BTRFS RAID1 for data & metadata) for Grub2's files and the ISOs I want to boot from.
  • NVMe0 & NVMe1: Partition 3 (BTRFS RAID1 for data & metadata) for 64-bit Linux, with another Grub2 installed in the partition boot record (grub.cfg managed by the Linux distro). A partitioning sketch follows this list.
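
Roughly, the partitioning of one drive looks like the sketch below (device names, sizes and type codes are only examples; the second NVMe gets the same layout, and btrfs then mirrors the matching partitions):

    # GPT label with a BIOS boot partition (type EF02) so Grub2 can embed its core image
    sgdisk --zap-all /dev/nvme0n1
    sgdisk -n 1:0:+8M   -t 1:EF02 -c 1:grub /dev/nvme0n1
    sgdisk -n 2:0:+16G  -t 2:8300 -c 2:isos /dev/nvme0n1
    sgdisk -n 3:0:+120G -t 3:8300 -c 3:root /dev/nvme0n1

    # BTRFS RAID1 (data & metadata) across the matching partitions of both drives
    mkfs.btrfs -d raid1 -m raid1 -L isos /dev/nvme0n1p2 /dev/nvme1n1p2
    mkfs.btrfs -d raid1 -m raid1 -L root /dev/nvme0n1p3 /dev/nvme1n1p3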

So at boot time:

  1. MBR is read (main Grub2 code)
  2. The BIOS boot partition is used to load Grub2's stage 2 (core image)
  3. The BTRFS RAID1 on partitions #2 is read, so Grub2 can load its grub.cfg
  4. The main Grub2 menu appears, where I can loop-boot from a number of ISOs or choose to boot Linux (an example ISO entry is sketched after this list)
  5. I select to boot Linux
  6. The BTRFS RAID1 on partitions #3 is accessed
  7. The partition boot record code is loaded from partitions #3
  8. That second Grub2 reads the BTRFS RAID1 on partitions #3, so it can load its own grub.cfg
  9. Optionally a second Grub2 menu appears (depending on the keys pressed)
  10. Linux loads its files and boots
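
An isoloop entry in that hand-written grub.cfg typically looks like this (only a sketch: the ISO path and the kernel parameters are examples for a GParted Live image, and every ISO needs its own parameters):

    menuentry "GParted Live (loopback ISO)" {
        set isofile=/isos/gparted-live.iso
        loopback loop $isofile
        linux  (loop)/live/vmlinuz boot=live union=overlay components findiso=$isofile
        initrd (loop)/live/initrd.img
    }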

Distribution of data:

  • NVMe 0 & 1: MBR (Grub2 boot code)
  • NVMe 0 & 1: Partition 1, a small 8 MiB BIOS boot partition for the Grub2 stage 2 / core image
  • NVMe 0 & 1: Partition 2, 16 GiB BTRFS with data & metadata in RAID1, for Grub2 files (like grub.cfg) and for the ISO files
  • NVMe 0 & 1: Partition 3, 120 GiB BTRFS with data & metadata in RAID1, for root '/' with /boot as a plain directory, and with another Grub2 installed in the partition boot record (not the MBR). To install that second Grub2 I use the grub-install on the GParted Live media, pointed at the partition (/dev/sdXN, not /dev/sdX), because my distro's grub-install is built differently and will not install into a partition, it always asks for the MBR (grub-install /dev/sdX, not /dev/sdXN); see the sketch below this list.
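
The grub-install calls end up something like this (a sketch; the mount points are examples, and on grub-install builds that accept it, --force is what allows writing to a partition boot record instead of the MBR):

    # Main Grub2 into the MBR boot code of both drives, with its files on partition 2
    grub-install --target=i386-pc --boot-directory=/mnt/isos/boot /dev/nvme0n1
    grub-install --target=i386-pc --boot-directory=/mnt/isos/boot /dev/nvme1n1

    # Second (distro) Grub2 forced into the partition boot record of partition 3
    grub-install --target=i386-pc --force --boot-directory=/mnt/root/boot /dev/nvme0n1p3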

I had also tried the same structure on 6x SSD with BTRFS RAID0, RAID1 and RAID10, and all of them worked.

Since I can only plug in two NVMe drives, I opted for BTRFS RAID1, so a scrub can detect and fix errors (as long as the second copy is not also bad).
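
Checking and repairing is then just (sketch, run against the mounted root):

    # Start a scrub on the mounted RAID1 filesystem and watch its progress
    btrfs scrub start /
    btrfs scrub status /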

When I was booting with six SSDs (all single-bit-per-cell SLC, SATA III, each capable of sustaining 550 MiB/s writes) I opted for BTRFS RAID10 to improve speed (a little more than 3 GiB/s sustained writes).

With the two NVMe drives I get about the same speed as with six SSDs, simply because one of them sits in a PCIe 2.0 x4 slot; the other is in a PCIe 3.0 x4 slot and could go twice as fast, but I prefer redundancy at 3 GiB/s over bit rot and lost data at 6 GiB/s.

When I have time I plan to create a BTRFS RAID10 over everything I have (two NVMe and 12 SSDs, 6 on the motherboard, 6 on an extra PCIe controller, all without any bottleneck), but I need to plan it well to balance speed correctly. Most probably it would be:

  • LVM RAID 0 over the six SSDs on the motherboard
  • LVM RAID 0 over the six SSDs on the PCIe controller
  • BTRFS RAID10 across the 2x NVMe + the motherboard LVM RAID 0 + the PCIe-controller LVM RAID 0 (a sketch of this layering follows)
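
A sketch of that layering (device names are only examples: the six motherboard SSDs as /dev/sd[a-f], the six on the PCIe controller as /dev/sd[g-l]):

    # One striped LV per controller group
    pvcreate /dev/sd[a-f]
    vgcreate vg_mb /dev/sd[a-f]
    lvcreate --type striped -i 6 -I 64 -l 100%FREE -n stripe vg_mb

    pvcreate /dev/sd[g-l]
    vgcreate vg_pcie /dev/sd[g-l]
    lvcreate --type striped -i 6 -I 64 -l 100%FREE -n stripe vg_pcie

    # BTRFS RAID10 across the two NVMe partitions and the two striped LVs
    mkfs.btrfs -d raid10 -m raid10 \
        /dev/nvme0n1p3 /dev/nvme1n1p3 /dev/vg_mb/stripe /dev/vg_pcie/stripe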

In a first test I got Grub2 to loop-boot GParted correctly, with sustained write speeds of more than 11 GiB/s: the full size of two DVDs (2x 4.7 GB) per second, more than one dual-layer DVD per second, and with redundancy. The cost? Better not to talk about what 12 high-spec SSDs + 2 NVMe drives cost (> 1000 €)!
