
I'll start with the question: based on the information below, is there an mdadm command I can run on the ReadyNAS (whether create or assemble) to bring all four drives of the RAID 5 back into a working state long enough for me to pull the data off before doing anything else?

The patient is a ReadyNAS 314 with four physical 3TB WD Red drives configured in RAID 5.

The sorry episode started when, after 10 years, I discovered I should have been doing some maintenance. One defrag later (I apparently should have done a balance first), my previously healthy array was reporting as degraded. I was somewhat confused, though, because all of the physical drives were still showing as healthy in the ReadyNAS interface. So I ran a disk test from the ReadyNAS interface, which reported that Disk 3 (sdc) had errors. Looking at each disk's smartctl report, sda, sdb, and sdd all completed an extended SMART test with no errors.
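
For reference, the SMART tests on each disk were run roughly like this from the ReadyNAS shell (illustrative, not a transcript):

    smartctl -t long /dev/sda    # start the extended (long) self-test; takes a few hours
    smartctl -a /dev/sda         # once it finishes, check the self-test log and attributes

and the same for sdb, sdc, and sdd.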

I realise that this is the point where I should have recognised I didn't know what I was doing and just got as much data off as possible. Anyone who wishes to point out to me that 'RAID is not backup' is also welcome to point out to me the piece of advice I've ignored all this time and learnt the hard way!

However... I removed drive sdc, connected it to my Linux machine, and ran smartctl, which reported some block errors. At this point I decided I should just put the drive back and back everything up.

Unfortunately, when I put it back, this greeted me in the web interface of the ReadyNAS.

Image of "physical disks page":

Image of physical disks page

Image of "no volumes":

Image of no volumes

mdstat info

Welcome to ReadyNASOS 6.10.10
root@PennyNAS:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
md0 : active raid1 sda1[0] sdd1[3] sdb1[5] sdc1[4]
      4192192 blocks super 1.2 [4/4] [UUUU]
      
unused devices: <none>

Looking for RAID volumes

root@PennyNAS:~# mdadm --examine --scan
ARRAY /dev/md/0  metadata=1.2 UUID=7aa7aa0b:ad68304e:2ca7ceaa:da734a1a name=7c6e95c6:0
ARRAY /dev/md/1  metadata=1.2 UUID=2e8087fd:07b6aba3:1cba7d4d:2b46e6b0 name=7c6e95c6:1
ARRAY /dev/md/data-0  metadata=1.2 UUID=3e2ffdb3:dbbff281:3c928ee2:2c12bc50 name=7c6e95c6:data-0

lsblk output

root@PennyNAS:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE   MOUNTPOINT
sda       8:0    0  2.7T  0 disk   
├─sda1    8:1    0    4G  0 part   
│ └─md0   9:0    0    4G  0 raid1  /
├─sda2    8:2    0  512M  0 part   
│ └─md1   9:1    0 1020M  0 raid10 [SWAP]
└─sda3    8:3    0  2.7T  0 part   
sdb       8:16   0  2.7T  0 disk   
├─sdb1    8:17   0    4G  0 part   
│ └─md0   9:0    0    4G  0 raid1  /
├─sdb2    8:18   0  512M  0 part   
│ └─md1   9:1    0 1020M  0 raid10 [SWAP]
└─sdb3    8:19   0  2.7T  0 part   
sdc       8:32   0  2.7T  0 disk   
├─sdc1    8:33   0    4G  0 part   
│ └─md0   9:0    0    4G  0 raid1  /
├─sdc2    8:34   0  512M  0 part   
│ └─md1   9:1    0 1020M  0 raid10 [SWAP]
└─sdc3    8:35   0  2.7T  0 part   
sdd       8:48   0  2.7T  0 disk   
├─sdd1    8:49   0    4G  0 part   
│ └─md0   9:0    0    4G  0 raid1  /
├─sdd2    8:50   0  512M  0 part   
│ └─md1   9:1    0 1020M  0 raid10 [SWAP]
└─sdd3    8:51   0  2.7T  0 part  

mdadm --examine of the superblock on all four data partitions (sdX3)

root@PennyNAS:~# mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 3e2ffdb3:dbbff281:3c928ee2:2c12bc50
           Name : 7c6e95c6:data-0  (local to host 7c6e95c6)
  Creation Time : Wed Feb  4 18:39:38 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5850829681 (2789.89 GiB 2995.62 GB)
     Array Size : 8776243968 (8369.68 GiB 8986.87 GB)
  Used Dev Size : 5850829312 (2789.89 GiB 2995.62 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=369 sectors
          State : clean
    Device UUID : 83fc3c9f:85b9851f:b854af95:83c9757b

    Update Time : Mon May 27 20:26:25 2024
       Checksum : 66e05ca4 - correct
         Events : 11227

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
root@PennyNAS:~# mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 3e2ffdb3:dbbff281:3c928ee2:2c12bc50
           Name : 7c6e95c6:data-0  (local to host 7c6e95c6)
  Creation Time : Wed Feb  4 18:39:38 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5850829681 (2789.89 GiB 2995.62 GB)
     Array Size : 8776243968 (8369.68 GiB 8986.87 GB)
  Used Dev Size : 5850829312 (2789.89 GiB 2995.62 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=369 sectors
          State : active
    Device UUID : 54dfc0af:7e89f0be:61e09348:1a6e4d9c

    Update Time : Sun May 26 01:14:08 2024
       Checksum : ea8bf5b2 - correct
         Events : 56

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@PennyNAS:~# mdadm --examine /dev/sdc3
mdadm: No md superblock detected on /dev/sdc3.
root@PennyNAS:~# mdadm --examine /dev/sdd3
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 3e2ffdb3:dbbff281:3c928ee2:2c12bc50
           Name : 7c6e95c6:data-0  (local to host 7c6e95c6)
  Creation Time : Wed Feb  4 18:39:38 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5850829681 (2789.89 GiB 2995.62 GB)
     Array Size : 8776243968 (8369.68 GiB 8986.87 GB)
  Used Dev Size : 5850829312 (2789.89 GiB 2995.62 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=369 sectors
          State : clean
    Device UUID : ee789aa7:031ca8a3:5fe4e9d8:cf838550

    Update Time : Mon May 27 20:26:25 2024
       Checksum : baa8584 - correct
         Events : 11227

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)

fdisk -l output

root@PennyNAS:~# fdisk -l

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D7088952-4111-4348-A213-05743C3D5EB8

Device       Start        End    Sectors  Size Type
/dev/sda1       64    8388671    8388608    4G Linux RAID
/dev/sda2  8388672    9437247    1048576  512M Linux RAID
/dev/sda3  9437248 5860529072 5851091825  2.7T Linux RAID

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 89716F11-83B9-4A4A-AA86-3664BAC5BABD

Device       Start        End    Sectors  Size Type
/dev/sdb1       64    8388671    8388608    4G Linux RAID
/dev/sdb2  8388672    9437247    1048576  512M Linux RAID
/dev/sdb3  9437248 5860529072 5851091825  2.7T Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DF4E9C1A-0613-4AD9-9144-5FE661EB3ED0

Device       Start        End    Sectors  Size Type
/dev/sdc1       64    8388671    8388608    4G Linux RAID
/dev/sdc2  8388672    9437247    1048576  512M Linux RAID
/dev/sdc3  9437248 5860529072 5851091825  2.7T Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CED9E984-0040-455E-82C9-E07D6C662CA6

Device       Start        End    Sectors  Size Type
/dev/sdd1       64    8388671    8388608    4G Linux RAID
/dev/sdd2  8388672    9437247    1048576  512M Linux RAID
/dev/sdd3  9437248 5860529072 5851091825  2.7T Linux RAID

Disk /dev/md0: 4 GiB, 4292804608 bytes, 8384384 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/md1: 1020 MiB, 1069547520 bytes, 2088960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
root@PennyNAS:~# 

What I think has happened is that removing sdc and plugging it into another computer somehow changed its superblock, so the ReadyNAS no longer recognises it. And because the RAID 5 had already become degraded without any single drive being completely wrecked, now that this drive's superblock is missing the array can't be assembled and mounted. Is that roughly right?
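
One read-only check I'm considering, to see whether the superblock area on sdc3 really has been wiped rather than just being unreadable, is to dump the region where the version-1.2 superblock should live (Super Offset : 8 sectors, i.e. 4 KiB into the partition) and look for the a92b4efc magic:

    dd if=/dev/sdc3 bs=4096 skip=1 count=1 2>/dev/null | hexdump -C | head -n 4

If the block is all zeroes (no fc 4e 2b a9 magic bytes at its start), the superblock really is gone; if dd itself errors out, it's a read problem on that sector instead. I haven't gone beyond reads like this.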

I've been looking at ways to use mdadm to tell the three good disks about the fourth disk with the missing superblock, but at this point I just don't want to do any more damage.

Is there some way of running assemble that tells the ReadyNAS what role sdc is supposed to have in the array?
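
For what it's worth, the sort of thing I've seen in recovery write-ups is a forced assembly that names the surviving members explicitly and leaves slot 2 out, along the lines of:

    mdadm --stop /dev/md/data-0        # only if a partial assembly is already holding the devices
    mdadm --assemble --force --verbose /dev/md/data-0 /dev/sda3 /dev/sdb3 /dev/sdd3

followed by mounting the volume read-only to copy the data off. I haven't run this, because I don't know whether forcing the out-of-date sdb3 back in is safe here.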

The other thing I noticed is that sdb shows a slightly different Array State (AAAA) to sda and sdd (A.AA), and its Events count (56) is far lower than theirs (11227). What's that about?
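
(The quick way I've been comparing the three readable superblocks is just grepping the examine output:

    mdadm --examine /dev/sd[abd]3 | grep -E '/dev/|Update Time|Events|Array State'

which makes the mismatch in Events and Update Time easy to see.)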

I just really want to get it recognised again so I can get the data off at this point. Many thanks in advance.

UPDATE: I decided to try a verbose mdadm assemble and got this output:

root@PennyNAS:~# mdadm --assemble --scan --verbose
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/1
mdadm: no recogniseable superblock on /dev/md/0
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: No super block found on /dev/sdd (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdd
mdadm: Cannot read superblock on /dev/sdc3
mdadm: no RAID superblock on /dev/sdc3
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdd3 is identified as a member of /dev/md/data-0, slot 3.
mdadm: /dev/sdb3 is identified as a member of /dev/md/data-0, slot 1.
mdadm: /dev/sda3 is identified as a member of /dev/md/data-0, slot 0.
mdadm: added /dev/sdb3 to /dev/md/data-0 as 1 (possibly out of date)
mdadm: no uptodate device for slot 2 of /dev/md/data-0
mdadm: added /dev/sdd3 to /dev/md/data-0 as 3
mdadm: added /dev/sda3 to /dev/md/data-0 as 0
mdadm: /dev/md/data-0 assembled from 2 drives - not enough to start the array.
mdadm: looking for devices for further assembly
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: No arrays found in config file or automatically

It's clear that sda is slot 0, sdb is slot 1, sdc is slot 2, and sdd is slot 3. So is there a create or assemble command I could use to explicitly tell it the role of each drive where the superblock info isn't there?
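
The last-resort variant I keep seeing described is re-creating the array in place with --assume-clean, copying every parameter from the --examine output above (level, chunk, layout, metadata version, data offset, and the slot order sda3 sdb3 sdc3 sdd3), something like:

    mdadm --create /dev/md/data-0 --assume-clean --verbose \
          --level=5 --raid-devices=4 --metadata=1.2 \
          --chunk=64 --layout=left-symmetric --data-offset=128M \
          /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

(128M being the 262144-sector data offset reported above.) I understand this rewrites the superblocks and is dangerous if any parameter or the device order is wrong, which is exactly why I'm asking before touching anything.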

  • You should be able to add a new drive in place of the "faulty" one and it will rebuild automatically. Just hope that no other drive fails during the rebuild (Netgear call it a resynchronize). Search term: "readynas replace hard drive". Commented May 28 at 16:33
  • Take extra care to be sure that the replacement drive does not use SMR. SMR is not good for RAID; see e.g. "What's the current take on which WD Red drives are ok for RAID?". Commented May 28 at 16:38
  • Ah, after seeing the pictures, does it work if you remove the "faulty" drive? Commented May 28 at 16:47
  • Hi Andrew, that's the concerning bit, because it's not working with the faulty drive removed, but I'm inclined to leave it powered down for now, at least until a replacement drive has arrived. Thanks for your interest! – Denarius, May 28 at 17:00
  • Hi Andrew, I've had to put this on the back burner, but I'll be following up your R-Studio suggestion as soon as I get a chance. Thanks for your input so far! – Denarius, Jun 8 at 14:42
