Actually, I am using GRUB2 on 2x NVMe drives in GPT mode with this setup (no EFI at all, legacy BIOS boot):
- NVMe0 & NVMe1: MBR boot code area holds my main GRUB2 (whose grub.cfg I typed by hand), which lets me select what to boot.
- NVMe0 & NVMe1: Partition 1 (BIOS boot partition) for GRUB2's core image (stage 2).
- NVMe0 & NVMe1: Partition 2 (BTRFS RAID1 for data & metadata) for GRUB2's files and the ISOs I want to boot from.
- NVMe0 & NVMe1: Partition 3 (BTRFS RAID1 for data & metadata) for 64-bit Linux, with another GRUB2 installed in the partition boot record (its grub.cfg managed by the Linux distro).
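The layout above can be reproduced roughly like this (device names, sizes, and labels are examples, not my exact commands):

```shell
# GPT layout on each NVMe drive (repeat for /dev/nvme1n1)
sgdisk -n 1:0:+8MiB   -t 1:ef02 -c 1:"BIOS boot" /dev/nvme0n1   # GRUB2 core image
sgdisk -n 2:0:+16GiB  -t 2:8300 -c 2:"grub+isos" /dev/nvme0n1
sgdisk -n 3:0:+120GiB -t 3:8300 -c 3:"root"      /dev/nvme0n1

# BTRFS RAID1 (data and metadata) across both drives, per partition pair
mkfs.btrfs -d raid1 -m raid1 /dev/nvme0n1p2 /dev/nvme1n1p2
mkfs.btrfs -d raid1 -m raid1 /dev/nvme0n1p3 /dev/nvme1n1p3
```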
So at boot time:
- The MBR boot code is read (main GRUB2 boot code)
- The BIOS boot partition is used to load GRUB2's core image (stage 2)
- The BTRFS RAID1 partitions #2 are mounted and read, so GRUB2 can load its grub.cfg
- The main GRUB2 menu appears on screen, where I can select a loopback ("isoloop") boot from one of many ISOs, or choose to boot Linux
- I select to boot Linux
- The BTRFS RAID1 partitions #3 are mounted
- The partition boot code is loaded from the BTRFS RAID1 partitions #3
- The second GRUB2 mounts and reads the BTRFS RAID1 partitions #3, so it can load its own grub.cfg
- Optionally, a second GRUB2 menu appears (depending on the keys pressed)
- Linux loads its files and boots
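As a rough illustration, the hand-typed grub.cfg of the main GRUB2 might contain entries like these (file paths, labels, partition numbers, and kernel parameters are placeholders, not my actual config):

```
# Loopback-boot a GParted Live ISO stored on partition 2
menuentry "GParted Live (isoloop)" {
    set isofile="/isos/gparted-live.iso"
    loopback loop $isofile
    linux (loop)/live/vmlinuz boot=live findiso=$isofile
    initrd (loop)/live/initrd.img
}

# Hand off to the second GRUB2 installed in the PBR of partition 3
menuentry "Linux (chainload second GRUB2)" {
    set root=(hd0,gpt3)
    chainloader +1
}
```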
Distribution of data:
- NVMe 0 & 1: MBR (GRUB2 boot code)
- NVMe 0 & 1: Partition 1, a small 8 MiB BIOS boot partition for GRUB2's core image (stage 2)
- NVMe 0 & 1: Partition 2, 16 GiB BTRFS with data & metadata in RAID1 for GRUB2's files (like grub.cfg) and for the ISO files
- NVMe 0 & 1: Partition 3, 120 GiB BTRFS with data & metadata in RAID1 for the root '/' (with /boot as a plain directory) and with another GRUB2 installed in the PBR (not the MBR). To install that second GRUB2 I use grub-install from the GParted live environment, targeting the partition (i.e. /dev/sd$#, not /dev/sd$), since my distro's grub-install ships a GRUB2 that does not allow installing to a partition and always asks for the MBR (i.e. grub-install /dev/sd$, not /dev/sd$#)
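A sketch of that partition-level install (device names are examples; installing into a partition boot record uses blocklists, which grub-install normally refuses without --force):

```shell
# Install the second GRUB2 into the PBR of partition 3
# (blocklist install; grub-install refuses this without --force)
grub-install --target=i386-pc --force \
    --boot-directory=/mnt/root/boot /dev/nvme0n1p3
```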
I had also tried the same structure on 6x SSD with BTRFS in RAID0, RAID1, and RAID10 for everything, and it all worked.
Since I can only plug in two NVMe drives, I opted for BTRFS RAID1, so a scrub can detect and fix errors (as long as the second copy is not also bad).
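The scrub itself is just (mount point is an example):

```shell
# Scrub the mounted RAID1 filesystem in the foreground (-B waits);
# blocks that fail checksum are rewritten from the good mirror copy
btrfs scrub start -B /
btrfs scrub status /
```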
When I was booting with six SSDs (all single-bit-per-cell SLC, SATA III, each capable of sustaining 550 MiB/s writes) I opted for BTRFS RAID10 to improve speed (up to a bit more than 3 GiB/s sustained writes).
With the two NVMe drives I get the same speed as with six SSDs, simply because one of them sits in a PCIe 2.0 x4 slot; the other NVMe is in a PCIe 3.0 x4 slot and could go twice as fast, but I prefer redundancy at 3 GiB/s over bit rot and lost data at 6 GiB/s.
When I have time I plan to create a BTRFS RAID10 over everything I have (two NVMe drives and 12 SSDs: 6 on the motherboard, 6 on an extra PCIe controller, all without any bottleneck), but I need to plan it well to balance speed correctly. Most probably it would be:
- LVM RAID0 over the six SSDs on the motherboard
- LVM RAID0 over the six SSDs on the PCIe controller
- BTRFS RAID10 over 2x NVMe + the motherboard LVM RAID0 + the PCIe-controller LVM RAID0
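A rough sketch of that layering (device names and volume names are placeholders):

```shell
# One striped LVM logical volume per SSD group
vgcreate vg_mobo /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
lvcreate --type striped -i 6 -l 100%FREE -n lv_mobo vg_mobo

vgcreate vg_pcie /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
lvcreate --type striped -i 6 -l 100%FREE -n lv_pcie vg_pcie

# BTRFS RAID10 over the two NVMe drives plus the two striped LVs
mkfs.btrfs -d raid10 -m raid10 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/vg_mobo/lv_mobo /dev/vg_pcie/lv_pcie
```

One caveat to plan for: BTRFS balances across the four top-level devices, so the mix of raw NVMe and striped LVs of very different speeds is exactly why the layout needs careful balancing.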
First test: I got GRUB2's isoloop to boot GParted correctly, with sustained write speeds of more than 11 GiB/s; yes, the full contents of two DVDs (2x 4.7 GB) per second, more than one dual-layer DVD per second, and with redundancy... The cost? Better not to talk about the price of 12 high-spec SSDs + 2 NVMe drives (> €1000)!