
I have a disk that used to be inside my QNAP NAS (it was in RAID 1 with another disk, which I still have). The NAS died, so I attached the disk to my Ubuntu machine, used mdadm to bring the RAID array up, and then followed this to successfully mount and read it.

Then I shut down the laptop, and when I turned it back on I couldn't mount anything (mdadm kept returning errors saying the devices were busy). I tried a bunch of things, including rebooting, but couldn't get it mounted again.

Out of frustration I had the bad idea of running mdadm --zero-superblock on every partition of the disk, which only added to my troubles: now I am unable to do anything (except insult myself).

lsblk returns this:

sdc        8:32   0   2,7T  0 disk  
├─sdc1     8:33   0 517,7M  0 part  
│ └─md11   9:11   0 516,7M  0 raid1 
├─sdc2     8:34   0 517,7M  0 part  
│ └─md12   9:12   0 516,7M  0 raid1 
├─sdc3     8:35   0   2,7T  0 part  
├─sdc4     8:36   0 517,7M  0 part  /media/myuser/2ed9e1b2-2659-4202-9066-bd3246353f1d
└─sdc5     8:37   0     8G  0 part  
  └─md15   9:15   0     8G  0 raid1 
md13       9:13   0     0B  0 md    

The disk indeed has 5 partitions, and the one I need to access is /dev/sdc3. To my surprise, Ubuntu mounts /dev/sdc4 automatically without even asking.

When I run lvm fullreport I get:

lvm fullreport
  WARNING: wrong checksum 0 in mda header on /dev/sdc3 at 4096
  WARNING: wrong magic number in mda header on /dev/sdc3 at 4096
  WARNING: wrong version 0 in mda header on /dev/sdc3 at 4096
  WARNING: wrong start sector 0 in mda header on /dev/sdc3 at 4096
  WARNING: bad metadata header on /dev/sdc3 at 4096.
  WARNING: scanning /dev/sdc3 mda1 failed to read metadata summary.
  WARNING: repair VG metadata on /dev/sdc3 with vgck --updatemetadata.
  WARNING: scan failed to get metadata summary from /dev/sdc3 PVID A0oBICVnS2UPgax0Y8H2F5lgP2L3Xa3D
  Fmt  PV UUID                                DevSize PV         Maj Min PMdaFree  PMdaSize  PExtVsn 1st PE  PSize  PFree  Used Attr Allocatable Exported   Missing    PE  Alloc PV Tags #PMda #PMdaUse BA Start BA Size PInUse Duplicate
  lvm2 A0oBIC-VnS2-UPga-x0Y8-H2F5-lgP2-L3Xa3D  <2,72t /dev/sdc3  8   35         0         0        1   1,00m <2,72t <2,72t   0  ---                                      0     0             0        0       0       0                  
  Start SSize PV UUID                                LV UUID                               
      0     0 A0oBIC-VnS2-UPga-x0Y8-H2F5-lgP2-L3Xa3D   

I have some files in /etc/lvm that look like backups, but I have messed up enough already and don't want to do more damage.
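
So far I have only looked at them read-only, roughly like this (the <vgname> placeholder is whatever file name shows up in those directories):

ls -l /etc/lvm/backup /etc/lvm/archive
vgcfgrestore --list <vgname>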

Searching this and other forums, I have tried many things, most notably:

mdadm --create --assume-clean /dev/md0 --level=1 --raid-devices=2 /dev/sdc3 missing

but then when I try to mount /dev/md0 I get

mount: /mnt/1: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.

and lvm fullreport doesn't produce any output.
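
In case it is useful, these are read-only commands I can run to gather more details about the current state (I have not pasted their output here):

mdadm --examine /dev/sdc3
mdadm --detail /dev/md0
cat /proc/mdstat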

Any help would be highly appreciated. I still have the other RAID disk, which should contain the same data, but before I risk messing that one up as well I'd like to see if I can recover this one.

EDIT 1

Output of blkid:

/dev/sdc: PTUUID="170dad0b-6e63-4bbc-835e-9c7e901e3d4d" PTTYPE="gpt"
/dev/sdc2: UUID="2d0918c5-2ae2-6736-cb3a-9dfb2f014dc0" UUID_SUB="7b071e98-21b9-2459-7b3d-232247796426" LABEL="home-x1-carbon:12" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="61b68a0d-0ebc-4379-9d8e-9694a0edc09d"
/dev/sdc5: UUID="2d279112-312e-1a89-d597-da4cc01c2a92" UUID_SUB="72243364-539b-6614-53c2-374e28a6a4e7" LABEL="home-x1-carbon:15" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="29af1fc5-57ab-4da4-ac38-d3d9955072b5"
/dev/sdc3: UUID="01f77d36-7ed1-f4be-6948-820e41b0956a" UUID_SUB="c78c6ff9-63bb-002f-93a4-c901bdb14307" LABEL="home-x1-carbon:0" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="4feebb7b-0a90-490c-a516-3fdca113592f"
/dev/sdc1: UUID="4a443aca-7fbe-197c-0bcb-72ec6c0c3ce0" UUID_SUB="55bf9203-a7ca-a8a3-4cdb-5cef7f12e54f" LABEL="home-x1-carbon:11" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="22efde15-2e64-4cf5-b220-21ee5c57aed2"
/dev/sdc4: UUID="2ed9e1b2-2659-4202-9066-bd3246353f1d" BLOCK_SIZE="4096" TYPE="ext3" PARTLABEL="primary" PARTUUID="d5ae2202-0f7b-4c30-b7c3-f9d84de4b14e"

blkid /dev/md0 returns no output.

EDIT 2

After trying this:

# mdadm --create --assume-clean /dev/md0 --level=1 --raid-devices=2 /dev/sdc3 missing
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

lsblk now shows /dev/md0:

sdc        8:32   0   2,7T  0 disk  
├─sdc1     8:33   0 517,7M  0 part  
│ └─md11   9:11   0 516,7M  0 raid1 
├─sdc2     8:34   0 517,7M  0 part  
│ └─md12   9:12   0 516,7M  0 raid1 
├─sdc3     8:35   0   2,7T  0 part  
│ └─md0    9:0    0   2,7T  0 raid1 
├─sdc4     8:36   0 517,7M  0 part  /media/myuser/2ed9e1b2-2659-4202-9066-bd3246353f1d
└─sdc5     8:37   0     8G  0 part  
  └─md15   9:15   0     8G  0 raid1 
md13       9:13   0     0B  0 md    

but I still cannot mount it:

# mount /dev/md0 /mnt/1
mount: /mnt/1: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
  • Could you please provide the output of blkid, at least for all the sdc partitions? What is the output of blkid on /dev/md0? Thanks.
    – hidigoudi
    Commented Jan 5 at 20:59
  • I edited the question with the requested info. Thank you @hidigoudi
    – Daniele
    Commented Jan 5 at 22:37
  • Thanks, have you successfully run this command (without errors): mdadm --create --assume-clean /dev/md0 --level=1 --raid-devices=2 /dev/sdc3 missing? You are able to mount /dev/sdc4 because it's an ext3 partition, not an md device.
    – hidigoudi
    Commented Jan 6 at 8:25
  • Do you see /dev/md0 in the lsblk output? If not, it's normal that you cannot mount it. The mdadm --zero-superblock command removes the metadata that describes the RAID array; the data itself should still be there.
    – hidigoudi
    Commented Jan 6 at 8:26
  • Yes, I ran the mdadm --create and got no errors; I added the details above. I do see md0 in lsblk now.
    – Daniele
    Commented Jan 6 at 9:32

1 Answer


You should be able to recover your RAID array with the second drive. By running mdadm --zero-superblock on all the partitions, you overwrote all the md superblocks with zeros. Under /dev/sdc3 there is an LVM layer with VGs/LVs, which cannot be recovered from that drive.

If you connect the second drive, you should be able to mount the LV that sits on top of the RAID device. Based on our discussion, your layout looks like this:

└─sdc3                      8:51   0  2.7T  0 part
  └─md1                     9:1    0  2.7T  0 raid1
    ├─vg_lv0                252:1  0 27,9G  0 lvm
    └─vg_lv1                252:2  0  2.7T  0 lvm

With the above layout (lsblk output), the relevant blkid output is:

/dev/mapper/vg_lv1: LABEL="nas-qnap" UUID="75debd37-77c6-4f98-a6b6-8acc2a9f141f" BLOCK_SIZE="4096" TYPE="ext4"

As you can see, the LV contains an ext4 filesystem. You should be able to mount it with either of:

mount /dev/vg_lv1 /mnt
mount /dev/mapper/vg_lv1 /mnt
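
If the /dev/mapper/... device node is not there, the VG is probably just not activated yet. A rough sketch, assuming the VG/LV names are the ones from the layout above (adapt them to whatever lvs actually reports on your system):

pvscan                                   # rescan for PVs on top of the md device
vgchange -ay                             # activate all volume groups found
lvs -o lv_name,vg_name,lv_path,lv_size   # list the LVs and their device paths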

However, that is not enough at this point: you have a degraded array (check with mdadm --detail /dev/md1). To get back to a clean state, you need to connect the broken drive and add its partitions back to the corresponding md devices. Once both drives are connected, the broken one should look like this:

sdc        8:32   0   2,7T  0 disk  
├─sdc1     8:33   0 517,7M  0 part  
│ └─md11   9:11   0 516,7M  0 raid1 
├─sdc2     8:34   0 517,7M  0 part  
│ └─md12   9:12   0 516,7M  0 raid1 
├─sdc3     8:35   0   2,7T  0 part  
│ └─md0    9:0    0   2,7T  0 raid1 
├─sdc4     8:36   0 517,7M  0 part  /media/myuser/2ed9e1b2-2659-4202-9066-bd3246353f1d
└─sdc5     8:37   0     8G  0 part  
  └─md15   9:15   0     8G  0 raid1 

As you can see above, you have 4 md devices. Here is the list according to the above layout:

  • /dev/md11
  • /dev/md12
  • /dev/md15
  • /dev/md0

/dev/sdc4 is just a plain ext3 filesystem partition, not an md member. Before you can add the broken drive's partitions to the clean drive's arrays, you first need to stop the arrays currently assembled from it:

mdadm --stop /dev/md11
mdadm --stop /dev/md12
mdadm --stop /dev/md15
mdadm --stop /dev/md0
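
You can confirm that they are really stopped before going further, for example:

cat /proc/mdstat      # the md11/md12/md15/md0 entries should be gone
lsblk /dev/sdc        # the partitions should no longer show an md child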

Then you should be able to add the partitions to the correct arrays like this:

mdadm /dev/mdX --add /dev/sdY1
mdadm /dev/mdX --add /dev/sdY2
mdadm /dev/mdX --add /dev/sdY3
mdadm /dev/mdX --add /dev/sdY5

X corresponds to the md device assembled from the clean drive, and Y to the broken drive. With RAID 1 it should be "easy" to match the partitions, since it's a mirror.

Assuming your clean drive looks like this:

sdb          8:16   0   2,7T  0 disk
├─sdb1       8:17   0 517,7M  0 part
│ └─md9      9:9    0 517,6M  0 raid1
├─sdb2       8:18   0 517,7M  0 part
│ └─md256    9:256  0     0B  0 md
├─sdb3       8:19   0   2,7T  0 part
│ └─md1      9:1    0   2,7T  0 raid1
│   ├─vg_lv0 253:0  0  27,9G  0 lvm
│   └─vg_lv1 253:1  0   2,7T  0 lvm
├─sdb4       8:20   0 517,7M  0 part
└─sdb5       8:21   0     8G  0 part
  └─md322    9:322  0     0B  0 md

The commands to add them should therefore be:

mdadm /dev/md9 --add /dev/sdc1
mdadm /dev/md256 --add /dev/sdc2
mdadm /dev/md1 --add /dev/sdc3
mdadm /dev/md322 --add /dev/sdc5
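
If you want to double-check the matching before running those commands, the md superblocks on the clean drive's partitions tell you which array each one belongs to (read-only, assuming the clean drive really is sdb as in the layout above):

mdadm --examine /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb5 | grep -E '^/dev|Array UUID'
cat /proc/mdstat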

You can monitor the resync speed and remaining time of your arrays with:

watch -n0.5 cat /proc/mdstat
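
You can also check the detailed state of a single array during the rebuild, for example:

mdadm --detail /dev/md1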

