I wanted to grow a RAID0 array from 2 disks to 3 (each a 1 TB EBS volume on Amazon AWS), but the size did not change afterwards. The RAID started with two disks, sdc and sdd, and the new one is sdf.
Here is the grow-command:
sudo mdadm --grow /dev/md0 --raid-devices=3 --add /dev/xvdf
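To keep an eye on the reshape I assume something like this works (just the standard mdadm/proc interfaces):
watch -n 60 cat /proc/mdstat        # refresh the kernel's progress output every minute
sudo mdadm --detail /dev/md0        # should show a "Reshape Status" line while a reshape is still running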
After some hours, mdstat showed the following (using cat /proc/mdstat):
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid0 xvdf[3] xvdd[1] xvdc[0]
3221223936 blocks super 1.2 512k chunks
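If I read the units right, those blocks are 1 KiB each, so 3221223936 × 1024 bytes is about 3.0 TiB (3298.5 GB), which would mean the array itself already has the new size.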
So I hoped it had worked, but df -h gave me:
Filesystem Size Used Avail Use% Mounted on
/dev/md0 2.0T 1.6T 297G 85% /mnt/md0
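(That 2.0T still looks like the old two-disk size to me.)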
and sudo mdadm --detail /dev/md0 showed:
/dev/md0:
Version : 1.2
Creation Time : Tue Jul 22 16:05:40 2014
Raid Level : raid0
Array Size : 3221223936 (3072.00 GiB 3298.53 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Sep 7 01:37:39 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Number Major Minor RaidDevice State
0 202 32 0 active sync /dev/sdc
1 202 48 1 active sync /dev/sdd
3 202 80 2 active sync /dev/sdf
So the RAID seems to have 3 devices and the correct size (3072 GiB), but df does not show this. Strangely, the new disk (sdf) is listed as number 3 and number 2 is not listed.
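In case the numbering matters, I assume the slot a member occupies can also be checked on the device itself:
sudo mdadm --examine /dev/xvdf      # the "Device Role" line should say which slot this member holds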
Note: I used a "blank" EBS volume without any formatting. Should new RAID disks be formatted before being added to an (already formatted) RAID?
What am I missing? Do I have to (partially) format the new RAID disk? (There is still data on the RAID that I need, but I have a backup.) Is df misreading the RAID, or is the RAID grow not finished yet?
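For completeness, I assume the raw array size and the size the filesystem reports can be compared with something like this (ext4 assumed here; other filesystems would need different tools):
sudo blockdev --getsize64 /dev/md0                          # raw size of the md device in bytes
sudo dumpe2fs -h /dev/md0 | grep -iE 'block (count|size)'   # block count and block size as seen by the (assumed ext4) filesystem
If those two disagree, I guess the filesystem itself still has to be grown, but I did not want to run anything on the array before asking.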