
I have just grown my RAID-1 mirror into a RAID-5. After adding a third disk, the array rebuilt successfully, but after restarting, the array showed as inactive and the new disk (sde) had lost all of its RAID information.
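
For context, the conversion was done with commands roughly like these (reconstructed from memory, so treat them as approximate):

sudo mdadm /dev/md3 --add /dev/sde                     # add the new disk as a spare
sudo mdadm --grow /dev/md3 --level=5 --raid-devices=3  # convert the mirror to RAID-5 and reshape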

MDADM Details before restart:

richard@#####:~$ sudo mdadm --detail /dev/md3
/dev/md3:
           Version : 1.2
     Creation Time : Sat Dec 26 14:18:44 2020
        Raid Level : raid5
        Array Size : 27344500736 (26077.75 GiB 28000.77 GB)
     Used Dev Size : 13672250368 (13038.87 GiB 14000.38 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Apr  4 08:27:31 2021
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : Richard-SRV1:3  (local to host Richard-SRV1)
              UUID : a06f3ee5:0eba2f11:64718dee:a0882bd6
            Events : 220986

    Number   Major   Minor   RaidDevice State
       0       8       80        0      active sync   /dev/sdf
       3       8       64        2      active sync   /dev/sde
       1       8       96        1      active sync   /dev/sdg

cat /proc/mdstat before restarting:

md3 : active raid5 sde[3] sdg[1] sdf[0]
      27344500736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/102 pages [0KB], 65536KB chunk

I had added the array to mdadm.conf:

ARRAY /dev/md/3  metadata=1.2 UUID=a06f3ee5:0eba2f11:64718dee:a0882bd6 name=Richard-SRV1:3
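
For what it's worth, that line was generated with mdadm --detail --scan, and my understanding (not verified) is that on Debian/Ubuntu the initramfs also has to be rebuilt so the config is actually read at boot:

sudo mdadm --detail --scan   # prints the ARRAY line to append to /etc/mdadm/mdadm.conf
sudo update-initramfs -u     # rebuild the initramfs so the new config is used at boot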

AFTER RESTARTING

MDADM Detail:

richard@#####:~$ sudo mdadm --detail /dev/md3
[sudo] password for richard: 
/dev/md3:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 2
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 2

              Name : Richard-SRV1:3  (local to host Richard-SRV1)
              UUID : a06f3ee5:0eba2f11:64718dee:a0882bd6
            Events : 220986

    Number   Major   Minor   RaidDevice

       -       8       80        -        /dev/sdf
       -       8       96        -        /dev/sdg

MDSTAT:

richard@Richard-SRV1:~$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10] 
md3 : inactive sdf[0](S) sdg[1](S)
      27344500992 blocks super 1.2

Examining /dev/sde shows:

richard@#####:~$ sudo mdadm --examine /dev/sde
/dev/sde:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)

Running mdadm --assemble --scan -v showed that sde was missing its RAID superblock:

mdadm: No super block found on /dev/sde (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sde
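
For comparison, the surviving members can be examined the same way; they should still report the 1.2 superblock and the array UUID (output omitted here):

sudo mdadm --examine /dev/sdf /dev/sdg   # should show valid 1.2 superblocks with UUID a06f3ee5:...
sudo mdadm --examine /dev/sde            # reports no superblock, as shown above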

I'm not sure what I'm doing wrong, or if the issue is elsewhere. I've re-added sde and rebuilt the array twice now. Since the first failure I have:

  • Stopped the array from being mounted via fstab
  • Updated mdadm.conf
  • Resized the filesystem after the array rebuilt (sketch below)
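
For reference, the resize step was along these lines (assuming an ext4 filesystem directly on /dev/md3; adjust for your setup):

sudo resize2fs /dev/md3   # grow the filesystem to fill the enlarged array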

Any advice would be appreciated; I'll be the first to admit I'm not the best at this.

  • Try to regenerate mdadm.conf using the command mdadm --detail --scan.
    – harrymc
    Commented Apr 4, 2021 at 15:33
  • Thanks. I did try that after the array recovered, but it's showing as inactive now, and sde is not showing as part of it: INACTIVE-ARRAY /dev/md3 metadata=1.2 name=Richard-SRV1:3 UUID=a06f3ee5:0eba2f11:64718dee:a0882bd6
    – rbazley
    Commented Apr 4, 2021 at 18:02

1 Answer


Do you have any other operating system or process installed? I see you are using the raw disks /dev/sd[e-g] in mdadm. This is problematic because there is no partition table on the drives: another operating system may treat a drive as uninitialized and create a partition table, which overwrites the mdadm superblock. I would suggest creating a partition on each of these drives and using the partitions /dev/sd[e-g]1 to prevent interference from other systems. A sketch of the steps is below.
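
A rough sketch of what I mean, assuming /dev/sde is the disk being rebuilt anyway (untested here; partitioning destroys what is on the disk, so do not run this on a healthy member):

sudo parted -s /dev/sde mklabel gpt         # write a GPT so other systems see an initialized disk
sudo parted -s /dev/sde mkpart md3 0% 100%  # one partition spanning the whole disk
sudo parted -s /dev/sde set 1 raid on       # flag it as a Linux RAID partition
sudo mdadm /dev/md3 --add /dev/sde1         # add the partition, not the raw disk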

