My Promise NAS NS4300N recently died (PSU or motherboard failure, probably the former as it has trouble spinning up the disks).

I've managed to dd(1) the drives (4 500GB drives in a RAID5 configuration) as images on a new server, even though one of the drives had a couple of read errors (conv=noerror ftw...).
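
For reference, the imaging was done along these lines, once per drive (the source device, output path and block size below are illustrative; pairing sync with noerror pads unreadable blocks so the offsets in the image stay aligned):

$ sudo dd if=/dev/sdb of=/local/media/promise.dd.1 bs=64K conv=noerror,sync status=progress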

However, as the Promise NAS doesn't use mdadm(8) for RAID but instead uses "hardware" RAID (aka FakeRAID), the resulting images look like this:

$ fdisk -l /local/media/promise.dd.1
Disk /local/media/promise.dd.1: 465.8 GiB, 500106174464 bytes, 976769872 sectors 
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes 
I/O size (minimum/optimal): 512 bytes / 512 bytes 
Disklabel type: dos 
Disk identifier: 0xb95a0900

Device                      Boot Start        End    Sectors  Size Id Type 
/local/media/promise.dd.1p1         63 2929918634 2929918572  1.4T 83 Linux

$ fdisk -l /local/media/promise.dd.2 
Disk /local/media/promise.dd.2: 465.8 GiB, 500107862016 bytes, 976773168 sectors 
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

$ fdisk -l /local/media/promise.dd.3 
Disk /local/media/promise.dd.3: 465.8 GiB, 500107862016 bytes, 976773168 sectors 
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

$ fdisk -l /local/media/promise.dd.4 
Disk /local/media/promise.dd.4: 465.8 GiB, 500107862016 bytes, 976773168 sectors 
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes 
Disklabel type: dos
Disk identifier: 0xb95a0900

Device                      Boot Start        End    Sectors  Size Id Type 
/local/media/promise.dd.4p1         63 2929918634 2929918572  1.4T 83 Linux

When attached as loop(4) devices, the images look like this:

$ sudo lsblk -io NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL 
NAME               FSTYPE                            SIZE    MOUNTPOINT     LABEL 
..     
loop0              promise_fasttrack_raid_member     465.8G
loop1              promise_fasttrack_raid_member     465.8G
loop2              promise_fasttrack_raid_member     465.8G
loop3              promise_fasttrack_raid_member     465.8G
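
(The images were attached with losetup, something like the following for each file; the read-only flag keeps the images untouched, and the device number is whatever --find hands out:)

$ sudo losetup --find --show --read-only /local/media/promise.dd.1
/dev/loop0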

Unsurprisingly, mdadm(8) is unable to read these, as it is unable to find a usable superblock:

$ sudo mdadm --verbose --examine /dev/loop0
/dev/loop0:
   MBR Magic : aa55
Partition[0] :   2929918572 sectors at           63 (type 83)
$ sudo mdadm --verbose --examine /dev/loop1
mdadm: No md superblock detected on /dev/loop1.

And of course:

$ sudo mdadm --verbose -A /dev/md127 --readonly --run /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
mdadm: looking for devices for /dev/md127
mdadm: no recogniseable superblock on /dev/loop1
mdadm: /dev/loop1 has no superblock - assembly aborted

I thought I could try to read/examine these using dmraid(8), as it is advertised as a tool to "discover, configure and activate software (ATA)RAID". But as far as I can tell, that only holds if the drives are exposed through the BIOS, which these clearly are not, since they are loop(4) devices:

$ sudo dmraid -ay
no raid disks

Do I have any chance at recovering the data via software? Or is my only option to find hardware that can read physical drives with the data on them (e.g. a Promise PCI card)?

Thanks for reading.

  • What RAID level were they using?
    – davidgo
    Commented May 17, 2017 at 20:23
  • The Promise NAS drives were in a 4 disk RAID5 configuration.
    – thoughtbox
    Commented May 18, 2017 at 13:44
  • In the meantime, I'd like to add that to circumvent this particular situation, I first tried to access the disks with a Promise FastTrack 4300 PCI card. This didn't work; I afterwards discovered that the card does not support RAID5. So what I did in the end was to look at the NAS PSU header. It looked very much like an ATX header. And it was. Powering it with an ATX supply from a normal desktop computer worked. I am recovering the data now. Not really the solution I was looking for, but I lucked out.
    – thoughtbox
    Commented Jun 4, 2017 at 13:44
  • If you're still interested, the commands Kamil Maciorowski posted in the comments of one of my questions worked for what seems to be a similar situation; try those.
    – awksp
    Commented Jul 16, 2018 at 20:36
  • @awksp I'm afraid RAID5 makes it more difficult than our case…
    Commented Jul 17, 2018 at 1:19

1 Answer

I found myself in the same situation. I was able to recover the data using the demo version of R-Studio to reconstruct a virtual RAID array, mostly as described here:

https://www.r-studio.com/automatic-raid-detection.html

Under the "Drive" menu, select "Open Image" and import the drive images previously created using dd.

In the "Create Virtual RAID" drop down menu select "Create Virtual RAID & Autodetect", then drag and drop the opened drive image files from the tree view (on the left) to the virtual RAID device list panel (on the top right).

Click "Auto Detect". The process should complete in a few seconds. In my case it detected a 32k block size, RAID5, Left Async (Continuous) with 50.2% confidence. It also displayed the text book diagram of a RAID 5 array. Click "Apply".

Back in the "Device view" tree, there should now be a partition and volume group (e.g.: "vg002-lv001") with an ext3 file system. Select this entry and click the "Create Image" button.

In the resulting dialog box, select "Byte to byte image" and choose an appropriate path and file name. Click "Ok", then wait overnight for the process to complete. Make sure you have enough room for the resulting image. (For 4 x 500 GB drives in RAID5 that will be approximately 1.5 TB.)
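
A quick sanity check on free space before kicking it off (the target path here is only an example):

# df -h /local/media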

The resulting image file can then be loop-mounted to access the data, e.g.:

# mkdir /mnt/promise
# mount -o loop vg002-lv001.img /mnt/promise/
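
If you only intend to copy data off, mounting read-only is a little safer:

# mount -o loop,ro vg002-lv001.img /mnt/promise/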

I also found it convenient to remap the UIDs and GIDs before running rsync to copy the data:

# mkdir /mnt/promise-bindfs
# bindfs --map=1001/1000:@499/@100 /mnt/promise /mnt/promise-bindfs
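
From there the copy itself can be run roughly as follows (the destination path is just a placeholder):

# rsync -aH --progress /mnt/promise-bindfs/ /srv/restore/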
