
I arrived home today to find my home server off (no idea why). I turned it back on, and the RAID array is failing to mount.

/proc/mdstat shows:

Personalities :
md0 : inactive sdh[7](S) sdc[2](S) sdd[1](S) sdb[3](S) sde[0](S) sdg[5](S) sdi[6](S) sdf[4](S)
      39069143232 blocks super 1.2

unused devices: <none>

and mdadm --detail /dev/md0 reports the RAID level as raid0, when it should actually be raid6.

Looking at mdadm -E on each individual drive, all drives apart from sdg have an Array State of AAAAA.AA, an event count of 191578, and a State of active.

sdg, however, has an Array State of AAAAAAAA, an event count of only 69286, and a State of clean.
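A quick way to compare the counts across all eight members is to strip everything but the Events line out of each superblock. This is just a sketch (parse_events is an illustrative helper name, and the /dev/sd[b-i] glob assumes this machine's device naming):

```shell
# parse_events: pull the Events count out of `mdadm -E` output on stdin.
parse_events() { awk -F': *' '/^ *Events/ {print $2}'; }

# On the live system (needs root), something like:
#   for d in /dev/sd[b-i]; do
#       printf '%s %s\n' "$d" "$(mdadm -E "$d" | parse_events)"
#   done
```

Any member whose count is far behind the rest (sdg here, by over 120000 events) was dropped from the array long before the others last updated their superblocks.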

Also of note, the Array UUID shown by -E differs from the Array UUID I have in /etc/mdadm.conf. This may be due to an issue I had in the past, where the machine crashed at the start of a grow and I had to run mdadm -CR /dev/md0 --metadata=1.2 --raid-devices=5 --level=6 -c512 /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --assume-clean. That resolved the issue, but I'm guessing it also changed the Array UUID.
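To confirm the mismatch, the UUID recorded in the config can be pulled out and compared against what the superblocks report; a sketch (uuid_of is an illustrative helper name):

```shell
# uuid_of: extract the UUID=... field from an ARRAY line
# (works on /etc/mdadm.conf entries and `mdadm --examine --scan` output alike).
uuid_of() { sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p'; }

# On the live system:
#   grep '^ARRAY' /etc/mdadm.conf | uuid_of
#   mdadm --examine --scan | uuid_of
```

If the two differ, `mdadm --examine --scan` emits a correct ARRAY line that can replace the stale entry in mdadm.conf.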

For reference, the system contains one boot SSD (/dev/sda) connected to the motherboard and eight 5 TB drives for the RAID 6 array (/dev/sdb through /dev/sdi) connected to a Dell PERC H310 in IT mode.

The output of my mdadm -E for all drives:

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250792 sectors, after=11696 sectors
          State : active
    Device UUID : 6228ab0d:6a65fce7:b7a2cda8:e246fbf9

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 36d4b421 - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250792 sectors, after=11696 sectors
          State : active
    Device UUID : e08e56d8:a5eb2baf:b9362af3:369a4f84

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 9d60882f - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250792 sectors, after=11696 sectors
          State : active
    Device UUID : 0b4e9aec:42dc5e1c:548f4882:34da967f

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : a93cd08e - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250792 sectors, after=11696 sectors
          State : active
    Device UUID : ea04e5bb:f6c9f9ff:1e68eca2:8e990546

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 43350d45 - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250792 sectors, after=11696 sectors
          State : active
    Device UUID : 03f09a11:5bc14011:8d3f7f4d:bc33d6b9

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : c8956162 - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250800 sectors, after=11696 sectors
          State : clean
    Device UUID : 3b0f7209:ed729a79:2ebc9ab1:07d6e063

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Jul  8 17:26:16 2018
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 36dc9ff0 - correct
         Events : 69286

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 5
   Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdh:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767290288 (4657.41 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250800 sectors, after=11696 sectors
          State : active
    Device UUID : e042a7da:1cb4058f:6a976fa7:f30203e1

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 9083e1f1 - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 7
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdi:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a625f41:32a13393:978936d5:902719da
           Name : azelphur-server:0  (local to host azelphur-server)
  Creation Time : Thu Mar 23 18:29:16 2017
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 9767285168 (4657.40 GiB 5000.85 GB)
     Array Size : 29301835776 (27944.41 GiB 30005.08 GB)
  Used Dev Size : 9767278592 (4657.40 GiB 5000.85 GB)
    Data Offset : 250880 sectors
   Super Offset : 8 sectors
   Unused Space : before=250800 sectors, after=11696 sectors
          State : active
    Device UUID : 5742d56b:3c9b9f70:35e9550f:349db858

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 23 18:20:53 2018
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : e2e7a091 - correct
         Events : 191578

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 6
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)

The output of lsdrv:

PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 04)
└scsi 1:0:0:0 ATA      Samsung SSD 840 
 └sda 465.76g [8:0] Empty/Unknown
  ├sda1 512.00m [8:1] Empty/Unknown
  │└Mounted as /dev/sda1 @ /boot
  ├sda2 457.78g [8:2] Empty/Unknown
  │└Mounted as /dev/sda2 @ /
  └sda3 7.48g [8:3] Empty/Unknown
PCI [mpt3sas] 02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
├phy-0:0 scsi 0:0:0:0 ATA      TOSHIBA MD04ACA5
│└sdb 4.55t [8:16] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
├phy-0:1 scsi 0:0:1:0 ATA      TOSHIBA MD04ACA5
│└sdc 4.55t [8:32] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
├phy-0:2 scsi 0:0:2:0 ATA      TOSHIBA MD04ACA5
│└sdd 4.55t [8:48] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
├phy-0:3 scsi 0:0:3:0 ATA      TOSHIBA MD04ACA5
│└sde 4.55t [8:64] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
├phy-0:4 scsi 0:0:4:0 ATA      ST5000DM000-1FK1
│└sdf 4.55t [8:80] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
├phy-0:5 scsi 0:0:5:0 ATA      ST5000DM000-1FK1
│└sdg 4.55t [8:96] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
├phy-0:6 scsi 0:0:7:0 ATA      ST5000DM000-1FK1
│└sdi 4.55t [8:128] Empty/Unknown
│ └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
│                  Empty/Unknown
└phy-0:7 scsi 0:0:6:0 ATA      TOSHIBA MD04ACA5
 └sdh 4.55t [8:112] Empty/Unknown
  └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {None}
                   Empty/Unknown

Smartctl for all drives: https://pastebin.com/raw/04HeUBzw

  • Run Spinrite on each
    Commented Jul 24, 2018 at 15:09
  • Have you tried actually starting the array? It's listed as inactive.
    – cdhowie
    Commented Sep 23, 2018 at 19:32
  • Hi @cdhowie - I ended up resolving this. sdg was broken (Shown in SMART results). I removed it and added a new drive. The array started, rebuilt, and now everything is working.
    – Azelphur
    Commented Sep 25, 2018 at 13:51
  • Glad to hear. It's odd to me that the array didn't start degraded when the disk was missing, though...
    – cdhowie
    Commented Sep 25, 2018 at 16:20
  • @cdhowie I have a feeling that was down to one of two things: A) one of the drives was technically part of the array, and up, but had an incorrect event count (it was behind all the other drives); B) I had a rather bad failure in the past and had to rebuild the array with --assume-clean to get it back. This changed the array's UUID; however, I did not update the UUID in /etc/mdadm.conf.
    – Azelphur
    Commented Sep 25, 2018 at 19:42
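For anyone landing here with the same symptoms: the fix described above (dropping the stale sdg and letting the array rebuild) matches the usual mdadm recovery sequence of stopping the inactive array and force-assembling from the members whose event counts agree. This is a sketch, not the commands actually run; plan_assembly is an illustrative helper that only prints the commands so they can be reviewed before being executed as root:

```shell
# Seven members share the majority Events count (191578); sdg is ~122000
# events behind, so it is deliberately left out of the forced assembly.
plan_assembly() {
    members="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdh /dev/sdi"
    echo "mdadm --stop /dev/md0"
    echo "mdadm --assemble --force /dev/md0 $members"
}

plan_assembly   # print the plan for review; run the lines as root if correct
```

Once the array is assembled degraded, a replacement drive can be added with mdadm --add and the rebuild watched in /proc/mdstat.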
