
For the last 15 years I have been running mdadm (RAID 6) + ext4 under Ubuntu Server on smaller arrays (<25 TB). All my arrays typically grow over the years, so I start with 4-5 drives and end up around 8-10; mdadm makes this very smooth.
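For reference, the growth path described above can be sketched like this (a sketch, not a tested recipe; `/dev/md0`, `/dev/sdX`, and the device count are placeholder assumptions):

```shell
# Add the new disk as a spare, then grow the array onto it.
# /dev/md0 and /dev/sdX are hypothetical device names.
mdadm --add /dev/md0 /dev/sdX
mdadm --grow /dev/md0 --raid-devices=9 --backup-file=/root/md0-grow.bak

# Watch the (long) reshape, then enlarge the filesystem online.
cat /proc/mdstat
resize2fs /dev/md0
```

The reshape keeps the array online and fully redundant throughout, which is exactly the flexibility referred to here.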

Today I am building my next NAS, with 10x20 TB drives, which is supposed to last the next 10 years. mdadm+ext4 would certainly work, but after using TrueNAS/ZFS at work (where I like its background scrubs) I am wondering whether there are other home-friendly open-source options with better tolerance to random errors while keeping the flexibility of mdadm.

I cannot go with TrueNAS and/or pure ZFS because apparently they cannot expand/rebuild redundant arrays while maintaining full redundancy.

What options do I have for these background consistency checks and random bad-sector recovery, in addition to 2 disks of redundancy? That is, I would like to be able to survive a 2.5-disk failure (2 complete failures plus 1 disk with random bad sectors). Metadata/checksums will be stored on a separate server SSD. My new NAS does have ECC memory.

  1. Is ZFS on top of mdadm a thing? That way I hope it could be expanded while still getting ZFS consistency checks.
  2. Are there any other robust ways to add a data-integrity / random-error-redundancy layer on top of the array?
  3. Any other robust suggestions?
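For question 1, layering ZFS on an md device is technically possible: a single-"device" pool on `/dev/md0` provides checksumming and scrubs, though ZFS can only self-heal detected corruption if it has its own redundancy (e.g. `copies=2`, at the cost of usable space). A minimal sketch with hypothetical device and pool names:

```shell
# Hypothetical: create a pool directly on top of the mdadm array.
# ZFS will *detect* corruption via checksums, but with a single vdev
# it can only *repair* data blocks if copies >= 2.
zpool create -o ashift=12 tank /dev/md0

zfs set copies=2 tank    # optional: store each block twice for self-healing
zpool scrub tank         # background consistency check
zpool status -v tank     # report any checksum errors found
```

Whether the detect-but-not-repair trade-off (without `copies=2`) is acceptable depends on how much the mdadm layer's own redundancy is trusted.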
  • Are you sure the statement about ZFS expansion is true? I have heard that you cannot expand an existing vdev, but not that you couldn't add a new vdev alongside the existing one (and in any case it seems they added RAID-Z reshaping in 2021)... Commented Jun 26 at 11:47
  • @grawity_u1686 It seems reshaping still has this weird quirk: storage efficiency is not improved immediately after adding a disk, and improving it requires rewriting all data (which in a read-heavy NAS is likely never going to happen). With mdadm you are always at peak storage efficiency, but reshaping takes a long time. Commented Jun 27 at 16:50
  • I'm using Unraid and it works great. I tried TrueNAS but it lacked proper Docker/VM support, and TrueNAS SCALE felt too grandiose, while being built on FreeBSD (which is kind of problematic for a Linux newbie like me)
    – Netan
    Commented 2 days ago
  • Can you please explain "apparently they cannot expand/rebuild redundant arrays while maintaining full redundancy"? Do you mean high availability? An array will stay redundant as long as it has parity. HA means it will remain available even when there's a problem, thanks to a secondary server (which, of course, will cost as much as the first one, haha). In Unraid I can do whatever I want while keeping parity, but the array won't be available while a newly added/replaced HDD is being formatted.
    – Netan
    Commented 2 days ago
  • Not directly an answer to your question, but one additional thing I recommend you look into is the "parchive" parity format, which lets you create parity files that protect static data you won't be changing against bit rot. Tools like MultiPar create the "par2" files that live next to the file; if the file ever gets corrupted, you can use the parity files to repair the original. The cool thing is that the parity files still work even if the parity files themselves are damaged: any of the parity files can repair any part of the original file.
    – ThioJoe
    Commented 2 days ago
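The parchive workflow from the last comment looks roughly like this with the `par2` command-line tool on Linux (file names and the redundancy percentage are placeholders):

```shell
# Create parity files with ~10% redundancy next to the data file.
par2 create -r10 archive.tar.par2 archive.tar

# Later: verify the file, and repair it if bit rot is detected.
par2 verify archive.tar.par2
par2 repair archive.tar.par2
```

Higher `-r` values tolerate more damage at the cost of more parity data on disk.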


