
There is some information available on shape changes in RAID arrays but I'm a little nervous and would like confirmation:

Problem: I have two 500 GB drives in a software RAID 5 (mdadm). I would like to free one of the two drives since RAID-redundancy is for wimps... Can I just

mdadm --grow /dev/md1 --array-size=1

followed by a

mdadm --grow /dev/md1 --raid-disks 1?

This seems too simple. How would I specify which drive gets freed? Part of the reason for this maneuver is that I don't have additional space to run a backup.

Edit: As it is, this is a non-standard RAID 5 implementation (see the comments by Dave M and gman). However, please don't chastise me for recklessness. I am simply interested in the least risky method of removing the drive. Let's assume I have taken care of the backup issue, but I don't want to rebuild from backup.


$ sudo mdadm --detail --test /dev/md1 
/dev/md1:
        Version : 00.90
  Creation Time : Sat Sep  1 18:08:21 2007
     Raid Level : raid5
     Array Size : 488383936 (465.76 GiB 500.11 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Nov 28 11:32:13 2011
          State : clean

 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 64K

       UUID : XXX (local to host XXX)
     Events : 0.29336

Number   Major   Minor   RaidDevice State
   0       8       33        0      active sync   /dev/sdc1
   1       8       17        1      active sync   /dev/sdb1

  • RAID 5 requires three drives.
    – Dave M
    Commented Nov 28, 2011 at 18:31
  • RAID5 requires a minimum of 3 drives, so you do not have true RAID5. Level-changing support is fleeting at best, and if you're at all sensitive to data loss I suggest against it; especially considering your non-standard implementation. Further, AFAIK you cannot migrate from a parity-based RAID level to a non-parity one or to a non-RAID disk either.
    – Garrett
    Commented Nov 28, 2011 at 18:38
  • Actually, you can do RAID5 over 3 partitions. mdadm doesn't necessarily care if you have three real drives, or just three partitions. I've done RAID1 on a single drive. I'm not suggesting this is a good idea, mind. A single drive failure can obviously hose multiple partitions. Commented Nov 28, 2011 at 19:28
  • DaveM and gman thanks for the RAID5 correction. Editing accordingly.
    – DrSAR
    Commented Nov 28, 2011 at 19:42
  • As there is no parity disk, this is really just RAID0, right?
    – Paul
    Commented Nov 28, 2011 at 21:45

2 Answers


With mdadm, a two-drive RAID 5 is binary-identical to a RAID 1, not a RAID 0, and there's no magical invisible third device. You can tell because the array is the same size as each of its two components, not their sum:

Array Size : 488383936 (465.76 GiB 500.11 GB)
Used Dev Size : 488383936 (465.76 GiB 500.11 GB)

You can confirm that by doing:

 dd if=/dev/sdb1 bs=512 count=1024 of=/tmp/b1
 dd if=/dev/sdc1 bs=512 count=1024 of=/tmp/c1

 md5sum /tmp/b1
 md5sum /tmp/c1

The md5 sums are the same because the drives are redundant mirrors of each other. Since the on-disk layout is identical to RAID 1, after stopping the array we can either create a RAID 1 on the same devices and keep the same data:

mdadm -C /dev/md1  --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Or do a RAID1 with just one device, freeing up the other:

mdadm -C /dev/md1  --level=1 --raid-devices=1 --force /dev/sdb1

Then clear the superblock on the removed one:

mdadm /dev/sdc1 --zero-superblock
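
If you'd like to rehearse the dd comparison safely before touching real devices, the same technique works on ordinary files. This is just a sketch with illustrative file names standing in for the raid members:

```shell
# Rehearse the dd + md5sum comparison on plain files (names are
# illustrative, not the real /dev/sdb1 and /dev/sdc1 members).
tmpdir=$(mktemp -d)

# Simulate two mirrored members with identical contents
head -c 524288 /dev/urandom > "$tmpdir/memberA"
cp "$tmpdir/memberA" "$tmpdir/memberB"

# Copy the first 1024 sectors of each, exactly as in the answer above
dd if="$tmpdir/memberA" bs=512 count=1024 of="$tmpdir/a1" 2>/dev/null
dd if="$tmpdir/memberB" bs=512 count=1024 of="$tmpdir/b1" 2>/dev/null

# Matching sums mean the two "members" mirror each other
md5sum "$tmpdir/a1" "$tmpdir/b1"

rm -r "$tmpdir"
```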

Because the array uses mdadm superblock version 0.90, which stores its metadata at the end of the device, each drive should also be usable on its own. Metadata versions 1.1 and 1.2 put the metadata near the beginning of the device, so this won't work for those versions.
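
To check which metadata version your members carry, look at the Version line of `mdadm --examine`. The sketch below parses a captured sample (shown here because `--examine` needs root and a real device; the Magic/UUID values are illustrative):

```shell
# Parse the metadata version out of mdadm --examine output.
# A captured sample stands in for `mdadm --examine /dev/sdb1`.
examine_output='/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00'

version=$(printf '%s\n' "$examine_output" | awk '$1 == "Version" {print $3}')
echo "$version"
# 0.90 stores metadata at the end of the device, so members remain
# usable standalone; 1.1/1.2 store it near the start, so they are not.
```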


I realize this question was answered several years ago, but I recently solved a similar problem, and I think I can add some clarity to both the question and Ray M's answer.

A similar two-drive RAID5 running superblock version 1.2 is, like Ray M said, binary identical to RAID1. However, unlike those with v0.9, arrays with 1.x metadata can leave a gap between the start of the device and the start of array data. So, in order to verify the equivalence of the two devices, first determine the data-offset of each component:

mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1

You should see a line like this for each device (by default the same for both):

Data Offset : 262144 sectors

This is where device data begins. Starting here, copy some data:

dd if=/dev/sdb1 skip=262144 bs=512 count=1024 of=/tmp/b1
dd if=/dev/sdc1 skip=262144 bs=512 count=1024 of=/tmp/c1

The md5 sums should be equal:

$ md5sum /tmp/b1 /tmp/c1
6b327bb46f25587806d11d50f95ff29b  /tmp/b1
6b327bb46f25587806d11d50f95ff29b  /tmp/c1
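
The skip= arithmetic can also be rehearsed on ordinary files. In this sketch a much smaller hypothetical offset (2048 sectors) stands in for the real 262144-sector data offset, and two fake members get different metadata regions but identical data regions:

```shell
tmpdir=$(mktemp -d)
offset=2048   # hypothetical data offset in 512-byte sectors (real array: 262144)

# Different "metadata" per member, identical data after the offset
head -c $((offset * 512)) /dev/urandom > "$tmpdir/metaA"
head -c $((offset * 512)) /dev/urandom > "$tmpdir/metaB"
head -c 524288 /dev/urandom > "$tmpdir/data"
cat "$tmpdir/metaA" "$tmpdir/data" > "$tmpdir/memberA"
cat "$tmpdir/metaB" "$tmpdir/data" > "$tmpdir/memberB"

# Whole-device sums differ, but skipping past the offset they agree
dd if="$tmpdir/memberA" skip=$offset bs=512 count=1024 of="$tmpdir/a1" 2>/dev/null
dd if="$tmpdir/memberB" skip=$offset bs=512 count=1024 of="$tmpdir/b1" 2>/dev/null
md5sum "$tmpdir/a1" "$tmpdir/b1"

rm -r "$tmpdir"
```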

Since the drives are in fact mirrors of each other, you can either fail and remove one drive, leaving the array degraded:

mdadm /dev/md1 --fail /dev/sdc1 --remove /dev/sdc1
mdadm --zero-superblock /dev/sdc1

Or start fresh with a single-disk RAID1:

mdadm --stop /dev/md1
mdadm --create /dev/md1 --level 1 --raid-devices 1 --force /dev/sdb1
mdadm --zero-superblock /dev/sdc1

Note that despite a two-disk RAID5's binary equivalence to RAID1, you cannot simply:

mdadm --grow /dev/md1 --raid-devices 1

with a two-drive RAID5.
