
Please note the following setup is CentOS 6.6.

I have an existing RAID 1 setup using two 480 GB SSDs. I just had two new 800 GB SSDs installed in the server. The existing RAID 1 array I want to extend is mapped to /dev/md2, which currently holds my /home directory. /home is getting full, so I want to extend it from 460 GB (the existing /dev/md2) to 1260 GB (the existing /dev/md2 plus the two new drives).

The new disks are set up as /dev/sdc and /dev/sdd. The old disks are /dev/sda and /dev/sdb.

I found a few different guides about extending RAID setups, but I am unsure whether this will work, since I am adding different-sized disks to the array. I am also not sure it will play nicely since I want to add two disks at once instead of just one.

An example of a guide I found: http://www.tecmint.com/grow-raid-array-in-linux/

Would I just run:

mdadm --manage /dev/md2 --add /dev/XXX

twice, once for each of the new disks, before running:

mdadm --grow --raid-devices=4 /dev/md2

Would this properly set up the new 800 GB SSDs to work alongside the 480 GB drives already in the RAID 1 array? Will Linux know to duplicate data correctly across the new drives without interfering with the existing ones?

EDIT: I need to do this all live. Forgot to mention that.

  • This will not grow the available space. It will only add more copies to the RAID1. If you are using LVM just create a new md-device on the new disks and add the new md device as PV to the VG.
    – Martian
    Commented Apr 5, 2016 at 7:46

1 Answer


Yes, you can add the two new larger drives with mdadm as you describe, but the process involves a few more steps.

Note: after you have extended the array, you must also resize any partition or LVM volume you might have on top of the RAID array before you can grow your filesystem. Depending on your filesystem, this can be done online.

To demonstrate the steps, I first create a RAID 1 device from two 100 MB loop devices:

# mdadm --create --level=1 --raid-devices=2 --metadata=1.2 /dev/md2 /dev/loop0 /dev/loop1
mdadm: array /dev/md2 started.

# cat /proc/mdstat
md2 : active raid1 loop1[1] loop0[0]
      102272 blocks super 1.2 [2/2] [UU]
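For reference, the backing files for loop devices like the ones above can be prepared as follows. This is a sketch: the file paths are illustrative, and losetup itself needs root, so it is shown commented out.

```shell
# Create two 100 MB sparse backing files (paths are illustrative):
truncate -s 100M /tmp/r1_0.img /tmp/r1_1.img

# Each file is 100 * 1024 * 1024 = 104857600 bytes:
stat -c %s /tmp/r1_0.img

# Attaching them as loop devices requires root:
# losetup /dev/loop0 /tmp/r1_0.img
# losetup /dev/loop1 /tmp/r1_1.img
```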

Then I add two 200 MB devices to the array; they will show up as spares:

# mdadm --manage /dev/md2 --add /dev/loop2 /dev/loop3
mdadm: added /dev/loop2
mdadm: added /dev/loop3

# cat /proc/mdstat
md2 : active raid1 loop3[3](S) loop2[2](S) loop1[1] loop0[0]
      102272 blocks super 1.2 [2/2] [UU]

Grow the RAID to 4 devices. After syncing completes, the array has 4 mirrors:

# mdadm --grow --raid-devices=4 /dev/md2
raid_disks for /dev/md2 set to 4

# cat /proc/mdstat
md2 : active raid1 loop3[3] loop2[2] loop1[1] loop0[0]
      102272 blocks super 1.2 [4/4] [UUUU]
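Before failing the original disks, it is worth confirming that the resync has actually finished. A small sketch of the check (the mdstat snapshot is hard-coded here so it runs without a real array; on the live system you would read /proc/mdstat, or simply run `mdadm --wait /dev/md2` as root to block until the sync is done):

```shell
# A fully synced array shows [4/4] [UUUU] and no "resync =" progress line.
mdstat_sample='md2 : active raid1 loop3[3] loop2[2] loop1[1] loop0[0]
      102272 blocks super 1.2 [4/4] [UUUU]'

if printf '%s\n' "$mdstat_sample" | grep -q 'resync'; then
  echo "still syncing"
else
  echo "sync complete"
fi
```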

Fail and remove the two smaller devices, then change the number of active devices back to two:

# mdadm --manage --fail /dev/md2 /dev/loop0 /dev/loop1
mdadm: set /dev/loop0 faulty in /dev/md2
mdadm: set /dev/loop1 faulty in /dev/md2

# mdadm --manage --remove /dev/md2 /dev/loop0 /dev/loop1
mdadm: hot removed /dev/loop0 from /dev/md2
mdadm: hot removed /dev/loop1 from /dev/md2

# mdadm --grow --raid-devices=2 /dev/md2
raid_disks for /dev/md2 set to 2

The final step for the RAID device is to grow the array to span the entire size of the two larger disks:

# mdadm --grow --size=max /dev/md2
mdadm: component size of /dev/md2 has been set to 204720K

dmesg will say:

md2: detected capacity change from 104726528 to 209633280
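These byte figures agree with /proc/mdstat, which counts 1 KiB blocks while the kernel message is in bytes:

```shell
# /proc/mdstat reports 1 KiB blocks; dmesg reports bytes.
# Old array: 102272 blocks, new array: 204720 blocks.
echo $((102272 * 1024))   # old capacity in bytes
echo $((204720 * 1024))   # new capacity in bytes
```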

...and the device will sync again. You should now have a RAID device with the new size:

# cat /proc/mdstat
md2 : active raid1 loop3[3] loop2[2]
      204720 blocks super 1.2 [2/2] [UU]

You now need to resize any partition and/or LVM volume on top of the array; after that, you can grow your filesystem.
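As a sketch of that last step, assuming the filesystem sits directly on /dev/md2 (the volume group and logical volume names in the LVM variant are made up; adjust everything to your actual stack, and note these commands need root on the live system):

```shell
# ext2/3/4 directly on the array: can be grown online while mounted.
resize2fs /dev/md2

# XFS is grown through the mount point instead of the device:
# xfs_growfs /home

# With LVM on top of the array: grow the PV, then the LV and its
# filesystem in one go (vg0/home is a hypothetical VG/LV name):
# pvresize /dev/md2
# lvextend -r -l +100%FREE /dev/vg0/home
```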
