
I have two 512GB SSDs set up as a RAID0 array on a laptop with an Intel RAID controller. I was running Windows 7 Pro on it until I messed up the MBR in the RAID by installing Windows 10 and Linux Mint on a different drive in the laptop. After installing Windows 10, both versions of Windows booted and ran fine. After installing Linux Mint the problems started, because I'm fairly certain it installed its boot manager into the RAID's MBR even though I was installing that OS onto a different drive. Now the RAID has become unbootable, and its MBR has become unrepairable, apparently because the Windows 10 boot manager did something to it. The Windows 7 repair tools say they are incompatible with the version of Windows installed on the RAID, even though the repair disc was made by that very installation.

I took Win 10 and the Linux install off the other drive in the laptop and installed a fresh copy of Win7 Pro on it, which failed to boot on the first reboot of the installation and immediately became unrepairable, just like the first one. So that leaves me with the task of recovering the data on the RAID. Since the RAID is still on the original machine with the Intel RAID controller that created it, and seems to be otherwise functioning except for the MBR mess, I would like to mount it with Linux and copy off the data.

Running Mint in live mode, fdisk -l gives me this:

mint@mint:~$ sudo fdisk -l
Disk /dev/loop0: 2.13 GiB, 2285047808 bytes, 4462984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: ADATA SU800     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf3828d07

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sda1           2048   46897151   46895104  22.4G  7 HPFS/NTFS/exFAT
/dev/sda2  *    46897152 2000420863 1953523712 931.5G  7 HPFS/NTFS/exFAT


Disk /dev/sdb: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: ADATA SU800     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10SPZX-00Z
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0b6bc22a

Device     Boot  Start        End    Sectors   Size Id Type
/dev/sdc1  *      2052     205199     203148  99.2M  7 HPFS/NTFS/exFAT
/dev/sdc2       205200 1953522467 1953317268 931.4G  7 HPFS/NTFS/exFAT

Partition 1 does not start on physical sector boundary.


Disk /dev/sdd: 29.25 GiB, 31406948352 bytes, 61341696 sectors
Disk model: Cruzer Glide    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0013118d

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdd1  *     2048 61341695 61339648 29.2G  c W95 FAT32 (LBA)


Disk /dev/mapper/isw_ddiheaeib_SSRAID: 953.88 GiB, 1024215744512 bytes, 2000421376 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disklabel type: dos
Disk identifier: 0xf3828d07

Device                                 Boot    Start        End    Sectors   Size Id Type
/dev/mapper/isw_ddiheaeib_SSRAID-part1          2048   46897151   46895104  22.4G  7 HPFS
/dev/mapper/isw_ddiheaeib_SSRAID-part2 *    46897152 2000420863 1953523712 931.5G  7 HPFS

At the end, the details about the RAID and the devices /dev...part1 and /dev...part2 seem sane and correct, which leads me to believe the RAID is still working apart from the MBR damage. I tried mounting part2, since part1 is just the Win7 recovery image for the installation in part2, and I got this:

mint@mint:~$ sudo mount -o ro -t hpfs /dev/mapper/isw_ddiheaeib_SSRAID-part2 /mnt/win7
mount: /mnt/win7: special device /dev/mapper/isw_ddiheaeib_SSRAID-part2 does not exist.

I don't understand that last result, which brings me to my question:

Does the mount failure indicate a problem with the RAID, or is that just the wrong way to go about mounting it?

...

I see now I was going about the mounting wrong, or at least was wrong to expect Linux Mint and the Intel RAID controller to work together. I've been looking up posts about hardware RAID and Linux, and they generally imply the combination doesn't work well. So, having another look at my machine, I get this:

mint@mint:/$ lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINTS
loop0                    7:0    0   2.1G  1 loop   /rofs
sda                      8:0    0 476.9G  0 disk   
└─isw_ddiheaeib_SSRAID 253:0    0 953.9G  0 dmraid 
sdb                      8:16   0 476.9G  0 disk   
└─isw_ddiheaeib_SSRAID 253:0    0 953.9G  0 dmraid 

I suppose I could use those to software-mount the RAID, which would be a first for me.
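
Before doing that, something like this should show me what device-mapper has actually set up (I haven't run these yet, so treat them as a sketch):

sudo dmsetup ls        # list the device-mapper devices that exist
ls -l /dev/mapper/     # the corresponding device nodes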

...

With a little more reading, I tried this:

mint@mint:/$ sudo mount -o ro -t ntfs /dev/mapper/isw_ddiheaeib_SSRAID-part2 /mnt/RAID
ntfs-3g: Failed to access volume '/dev/mapper/isw_ddiheaeib_SSRAID-part2': No such file or directory

When Linux Mint installed its boot loader, it put it in its default location, which is apparently sda, rather than on the Linux install drive, sdc, where I wanted it to go. So much for presumptions...

Anyway, that damaged the legacy MBR in the RAID. I am presuming, again (it's a hard habit to break), that it also tried to write the partition table for the Linux install drive, sdc, into the RAID's MBR, which would explain why both Windows and Linux are apparently having trouble finding anything useful there. It's encouraging, at least, that the device mapper reads the RAID partitions correctly.

1 Answer


Does the mount failure indicate a problem with the RAID, or is that just the wrong way to go about mounting it?

There are two problems with it:

  1. -t hpfs is wrong. What fdisk shows as the "type" is not really the filesystem type – the field was originally meant to correspond to the partition's contents, but doesn't always; it is more of a "usage/purpose" field (this particular value, "7 (HPFS/NTFS/exFAT)", merely indicates "this is a Windows NT data partition").

     You can be sure that the partition does not contain an HPFS filesystem – the last Windows version that supported that relic from IBM OS/2 was NT 3.51, released in 1995. So in your case it is really going to be NTFS (-t ntfs-3g or -t ntfs3).

  2. -part2 is made up. Since fdisk only reads the "whole disk" device to obtain its partition table, it does a best-effort translation from "partition 2 on /dev/something" to that partition's /dev name, but those translations do not always correspond to devices that actually exist.

     The devices you see in lsblk are the devices you actually have. Since it's a dm-raid device, the kernel will not automatically detect partitions in it – it expects dm (device-mapper) to handle that task as well. Install and use kpartx (from the "multipath-tools" package) to have it set up dm-based partition devices on top of this RAID device, as sketched after this list.
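
A rough sketch of the steps (untested here; the exact names kpartx gives the partition devices may differ, so check /dev/mapper after running it, and /mnt/RAID is just an example mount point):

sudo apt install kpartx                            # packaged from multipath-tools on Mint/Ubuntu
sudo kpartx -av /dev/mapper/isw_ddiheaeib_SSRAID   # add partition mappings for the array
ls /dev/mapper/                                    # note the new partition device names
sudo mkdir -p /mnt/RAID
sudo mount -o ro -t ntfs-3g /dev/mapper/isw_ddiheaeib_SSRAID2 /mnt/RAID   # assumed name from the ls above; -t ntfs3 also works on newer kernels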

or at least was wrong to expect Linux Mint and the Intel RAID controller to work together. I've been looking up posts about hardware RAID and Linux, and they generally imply the combination doesn't work well

You're not using the Intel RAID controller here at all. Your array is being assembled in software, using Linux's dm-raid subsystem (one of the two software-RAID subsystems it has).
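
If you want to double-check that, something along these lines should show the Intel (isw) metadata and the resulting mapping (again just a sketch; dmraid may need to be installed on the live system first):

sudo dmraid -s                            # summarize the RAID sets found in the Intel metadata
sudo dmsetup table isw_ddiheaeib_SSRAID   # show the striped dm mapping built from sda and sdb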

  • I added some comments addressing exactly what you mentioned, apparently at the same time you were posting your answer. I thought that was weird, although that's what fstab reported.
    – fred_b
    Commented Sep 8, 2023 at 8:22
  • Long story short, it was the UEFI boot options that got stored in BIOS memory by Windows 10 and Linux Mint that made the whole thing such a mystery to me. All that stuff is new to me. That, on top of my missing that Linux Mint was going to write its GPT data into my legacy RAID MBR, made quite the mess. Anyway, I got it sorted out, and it's all good now.
    – fred_b
    Commented Sep 9, 2023 at 4:30
