
RAID is not a backup, they say, and only now do I see that what I really need is an (external) backup.

So I would like to convert a (software) RAID-1 partition to a non-RAID partition (ext4) on my Linux system (Debian 7), but I have no idea how to do it.

My goal is to remove one of the two internal drives of the current RAID-1 setup and use it as an external backup drive, so I can keep the data in another physical location.

Is there any way to do this conversion to non-RAID without formatting the current RAID partition (/home) on the (future) single internal drive?

Thanks for any advice, Marcio

  • What software are you using? mdadm or similar?
    – nKn
    Commented Sep 11, 2015 at 11:50
  • I am using mdadm. Commented Sep 11, 2015 at 12:12

2 Answers


This is what I would do to safely remove a RAID-1 managed by mdadm:

  1. Run fdisk -l. This will tell you how many arrays you have and which they are. In the following steps, I'm assuming you only have /dev/md0.

  2. Run mdadm --detail /dev/md0. This will give you information about which physical disks are in use.

  3. Run umount -l /dev/md0, which will allow you to later stop your RAID. The -l flag will do the following, as per its man page:

    -l Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.)

  4. Run mdadm --stop /dev/md0. This will stop your RAID array.

  5. Erase the superblock on each device in the RAID (should be detailed in the command run in step 2).

    mdadm --zero-superblock /dev/sda
    mdadm --zero-superblock /dev/sdb
    ...
    

That should be it.
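The steps above can be sketched as a single script. The array name (/dev/md0) and the member devices (/dev/sda1, /dev/sdb1) are assumptions for illustration — substitute whatever mdadm --detail reported on your system. With DRY_RUN=1 (the default) the script only prints the commands it would run, so you can review them before committing:

```shell
#!/bin/sh
# Sketch of the teardown steps above. Assumed names: array /dev/md0,
# members /dev/sda1 and /dev/sdb1 -- adjust to your mdadm --detail output.
# With DRY_RUN=1 (default) nothing destructive runs; commands are only printed.
DRY_RUN=${DRY_RUN:-1}
PLAN=""

run() {
    # Record and either print or execute each command.
    PLAN="$PLAN$* ; "
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run umount -l /dev/md0                 # lazy unmount so the array can be stopped
run mdadm --stop /dev/md0              # stop the RAID array
for member in /dev/sda1 /dev/sdb1; do
    run mdadm --zero-superblock "$member"   # wipe the md superblock on each member
done
```

Set DRY_RUN=0 only once the printed plan matches your actual devices.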

  • nKn, thank you for your answer, but I could not make it work here. I believe it's because md0 is the root partition and md1 is the home partition, and I don't know any way to stop the root partition. I tried booting from another disk, but at the --zero-superblock step I got a message saying that the partition was not available to write. Commented Sep 12, 2015 at 21:01
  • You can boot your machine from a LiveCD like Finnix, for instance, and manage it from there.
    – nKn
    Commented Sep 12, 2015 at 21:18
  • nKn, I have tried with a LiveCD but could not access the disk. My RAID system started to fail some days ago, so I made a backup and tried your suggestions, but could not make them work. I have decided to format the disks and give up on RAID: one internal disk is now dedicated to /home, the other disk (from the previous RAID) will become an external backup unit, and another smaller disk now holds /boot, /, and /var. Thank you again for your time and kindness. Commented Sep 14, 2015 at 19:43
  • Unrecognised md component device - /dev/sda ???
    – dvdhns
    Commented Oct 9, 2019 at 3:23
  • Instead of issuing fdisk -l and mdadm --detail ..., you may use cat /proc/mdstat which prints all existing md* together with their members at a glance.
    – Christoph
    Commented Apr 24, 2023 at 16:04

mdadm --zero-superblock may not be enough; the partition information may need to be updated, too.

What worked for me to successfully convert an 8TB drive with one mdadm RAID1 GPT partition /dev/sda1 formatted with XFS to a "regular" drive/partition is:

  1. cat /proc/mdstat shows all your RAID devices and their components, e.g.
...
md0 : active raid1 sdb1[2] sda1[1]
      7813893952 blocks super 1.2 [2/2] [UU]
...
  2. To convert /dev/sda1 of RAID device /dev/md0 to a regular drive, first umount /dev/md0.
  3. Then mdadm --stop /dev/md0 to stop the RAID.
  4. mdadm --zero-superblock /dev/sda1 converts it to a "regular" drive/partition.
  5. Change the partition type with fdisk to 20 (Linux filesystem).
  6. Fix the partition table with testdisk /dev/sda.

This should work for an Ext4 file system also.
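Once the superblock is gone and the partition holds a plain filesystem, it can be mounted directly (no md device in between) as /home via /etc/fstab. A minimal sketch with a placeholder UUID — replace it with whatever blkid prints for your partition:

```
# /etc/fstab -- mount the former RAID member directly as /home
# (placeholder UUID: substitute the value reported by blkid for your partition)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2
```

On Debian it is also worth removing the old ARRAY line from /etc/mdadm/mdadm.conf and running update-initramfs -u, so the boot process no longer looks for the dissolved array.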
