
Long story short: my RAID5 array is running degraded. It may have been like this for a while, and I don't know the history that led to this point, so I just want to remedy the situation. It appears that a disk is missing from the array of 3 x 1TB disks. According to the Disks GUI application there is a 4th disk whose Partition Type shows as "Linux RAID Auto", Contents: Unknown. So this disk has probably been part of the RAID at some point, or I tried to add it as a hot-spare disk in the past and failed. I would like to add this 4th disk as a hot spare, and have 3 x 1TB disks to give me a total capacity of 2TB.

So please: what is the easiest way to get the array running successfully on 3 disks, plus a hot-spare disk?

The results of running sudo mdadm --detail /dev/md0 are as follows:

/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr 20 15:50:19 2017
        Raid Level : raid5
        Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
     Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Apr  2 14:08:37 2022
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : MERLIN:0  (local to host MERLIN)
              UUID : 1d461a20:92a3a092:2308db3c:49fed682
            Events : 31541

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       -       0        0        1      removed
       2       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1

If I can remove RaidDevice 1, then presumably the RAID will no longer run degraded. How can I remove this device, please?

To then add the hot-spare disk I tried sudo mdadm --add /dev/md0 /dev/sde1, but I get the error mdadm: add new device failed for /dev/sde1 as 4: Invalid argument. Any ideas, please?

Regards, Stuart

  • What does /var/log/syslog show when mdadm emits the Invalid argument error? It should give you a hint as to why adding the fourth disk failed. Commented Apr 3, 2022 at 14:14
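
(A quick way to check, assuming a standard syslog/dmesg setup; the exact log file name may differ by distribution:)

    dmesg | tail -n 20                                 # recent kernel md/raid messages
    grep -iE 'md0|raid' /var/log/syslog | tail -n 20   # mdadm and md driver log lines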

1 Answer


According to the output of mdadm --detail /dev/md0 (Raid Devices : 4, Array Size : 2794.16 GiB), this array was originally configured with four 1 TB disks, giving a total usable capacity of about 3 TB. One of the disks (RaidDevice 1) has been removed or has otherwise failed, so the array is now running without redundancy: if another disk fails or is removed, the array as a whole will fail. Note that you cannot simply "remove" the empty RaidDevice 1 slot to turn this into a three-disk array; the slot is part of the array's four-disk geometry.

To fix that, you'll have to re-add a fourth disk of at least 1 TB (the /dev/sde1 you already tried is the obvious candidate) and let the array rebuild, restoring redundancy.
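
A common cause of the Invalid argument error on --add is stale RAID metadata left on the partition from an earlier array, or a partition slightly smaller than the Used Dev Size shown above. A minimal sketch, assuming /dev/sde1 is the disk to re-add and that you are certain it holds no data you need (zeroing the superblock is destructive):

    sudo mdadm --examine /dev/sde1          # inspect any leftover RAID superblock
    lsblk -b /dev/sde1                      # check the partition size in bytes
    sudo mdadm --zero-superblock /dev/sde1  # wipe stale metadata (destroys it!)
    sudo mdadm --add /dev/md0 /dev/sde1     # re-add; the array should start rebuilding
    cat /proc/mdstat                        # watch rebuild progress

Once the rebuild completes, mdadm --detail /dev/md0 should report State : clean with four active devices.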

You may be able to reconfigure your array from 4 x 1 TB (3 TB net) to 3 x 1 TB (2 TB net) as described in this Server Fault question:

https://serverfault.com/questions/528281/rebuild-mdadm-raid5-array-with-fewer-disks

However, this is very risky, and perhaps not even supported, on an array that is already degraded. I'd recommend against it. More precisely, I'd never do it without a good backup of the data, and if you have that, it's easier and more reliable to just recreate the array from scratch and restore the data from the backup.
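
For reference only, the procedure in that link boils down to something like the following. This is a sketch under strong assumptions: the array has first been restored to a clean four-disk state, the filesystem on /dev/md0 has already been shrunk to fit within 2 TB, and you have a verified backup:

    # Shrink the array's usable size to two data disks' worth.
    # 1953259520 KiB = 2 x the Used Dev Size (976629760 KiB) shown above.
    sudo mdadm --grow /dev/md0 --array-size=1953259520

    # Reshape to three raid devices. The backup file protects the critical
    # section and must live on a filesystem outside the array itself.
    sudo mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-reshape.backup

If it succeeds, the freed disk should be left attached as a spare, which is the 3-disk-plus-spare layout you asked for. But again: do not attempt this while the array is degraded.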
