
I'm using a 2-bay Synology NAS (which implies a Linux mdadm software-based RAID setup).

The motivation for doing so is to shorten the time it takes to rebuild a RAID-1 array after it goes degraded. I currently have two 4 TB 5400 RPM drives, and rebuilding the entire array takes a full day plus a few more hours. Imagine how much longer it would take if I upgraded to 8 TB or 12 TB drives!

Say I have two 8 TB physical drives. I'd like to set up two RAID-1 arrays of 4 TB each instead of one 8 TB RAID-1 array, so that if an array loses integrity I would only need to resync 4 TB instead of the entire 8 TB. The downside would be the case where both arrays fail, in which case I would have to resync two 4 TB arrays.
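To make the intended layout concrete, here is a minimal sketch of how I imagine it under plain Linux mdadm (device names and sizes are placeholders, and I understand Synology's DSM may not expose this directly):

```
# Partition each 8 TB disk into two ~4 TB Linux RAID partitions
# (placeholders: /dev/sda and /dev/sdb)
sgdisk -n 1:0:+4T -t 1:fd00 /dev/sda
sgdisk -n 2:0:0   -t 2:fd00 /dev/sda
sgdisk -n 1:0:+4T -t 1:fd00 /dev/sdb
sgdisk -n 2:0:0   -t 2:fd00 /dev/sdb

# Two independent RAID-1 arrays, each mirroring one partition per disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# One filesystem/volume per array
mkfs.ext4 /dev/md0
mkfs.ext4 /dev/md1
```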

  1. Is it even possible to set up two 4 TB RAID-1 arrays on two physical 8 TB drives?

  2. If it is possible and I have it set up, is it more likely that an integrity failure will hit both arrays rather than just one of the two?

  3. With two RAID-1 arrays, does a failure in one array result in failure of the other? If it does, that would defeat the purpose of having multiple arrays.

  4. Will there be any difference between two RAID-1 arrays and one RAID-1 array (both running on just two physical drives) in terms of the stress put on the physical drives?

Note that I intend to create one volume per array, so having two 4 TB arrays means I will have two 4 TB volumes, each independent of the other, just as the two RAID-1 arrays are independent.

Update:

It looks like mdadm itself supports this, but Synology, while it uses mdadm, doesn't.

1 Answer


My answers are for mdadm (I don't have much experience with Synology, and none with 2-drive variants, so YMMV). The short of it is: yes, this can be done, but there is very little benefit, and it's probably not worth the added complexity in the general case.

  1. Not only is it possible to set up two 4 TB arrays, that kind of setup is common, for example to have separate boot and root block devices (though not so much in your described scenario).

  2. Loss of integrity is likely to happen on both arrays; however, this is typically a moot point, because in the scenarios you can recover from there will be a record of which blocks changed, and only those will be resynced anyway. That's what the RAID superblock and the write-intent bitmap are for (a minimal example of enabling one is sketched after this list).

  3. Failure of one array does not immediately imply failure of the other; however, in most scenarios both are likely to fail at the same time, especially if you use RAID-rated disks (which you really should), where a failure means replacing the whole drive. There are some scenarios where only one array might fail, but I would not focus on those: if one part of a disk fails, you need to replace the disk anyway.

  4. There will not be any significant difference in stress, but depending on how it's done you could structure it so that one array (the first one) is faster. If you look into "short stroking", you will find that the outer tracks of a disk are roughly twice as fast as the inner ones, so you could use this kind of setup to create faster and slower partitions.
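To illustrate point 2: on plain mdadm (again, not necessarily how Synology's DSM exposes it), a write-intent bitmap can be enabled on an existing array so that only blocks changed while a member was absent need resyncing. A minimal sketch, with /dev/md0 as a placeholder array name:

```
# Add an internal write-intent bitmap to an existing array (placeholder: /dev/md0)
mdadm --grow /dev/md0 --bitmap=internal

# Verify the bitmap is present and watch any resync progress
mdadm --detail /dev/md0 | grep -i bitmap
cat /proc/mdstat
```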

Again, I'm not sure how Synology does it, but if you are not concerned about the speed considerations in (4) above, under Linux you should look at using LVM on top of your mdadm arrays, as it will give you more flexibility with partitioning.
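As a rough sketch of that LVM-on-mdadm layout under plain Linux (the array, volume group, and logical volume names below are placeholders, not Synology defaults):

```
# Use one mdadm RAID-1 array as an LVM physical volume (placeholder: /dev/md0)
pvcreate /dev/md0
vgcreate vg_data /dev/md0

# Carve out logical volumes that can be resized or added to later
lvcreate -L 2T -n lv_media vg_data
lvcreate -L 1T -n lv_backup vg_data
mkfs.ext4 /dev/vg_data/lv_media
```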

  • On #2, if that's true, why does it take an entire day or two just to resync a 4 TB drive? It makes me feel like resyncing happens over the entire disk, not just the changed blocks. Hence my motivation to create multiple arrays to shorten syncing. And if I upgrade to a higher capacity, that's just more incentive to do so: it might take 2 or 4 days to resync 8 TB, which might compromise the other disk due to sustained stress. Commented Jul 15, 2018 at 7:52
  • I wonder if it depends on your failure mode. I fairly recently had 1 TB SSDs in an mdadm RAID-1 fail; I unplugged the drive, plugged it back in, and it spent just seconds resyncing.
    – davidgo
    Commented Jul 15, 2018 at 8:25
  •
    #2 is an optional feature called the "intent log". It has a performance cost when the array is written to.
    – sourcejedi
    Commented Jul 15, 2018 at 8:27
  • @davidgo, I actually tried rebuilding by removing and re-inserting the drive in the 2nd bay, and it was very slow; it took a day and a few hours. Commented Jul 19, 2018 at 16:31
