I have an Asustor AS1004T NAS with three 3TB WD Reds in RAID 5 (WDC WD30EFRX-68EUZN0, in case anyone cares). This gives me a total volume size of 5.36TB.
Recently one of the WDs became faulty, so I RMA'd it and put in a spare WD Red I had lying around. I added that to the RAID array and now I have a healthy RAID again. Or so it seems.
The volume is now showing 239TB free of 5.36TB, both in Windows and in the Asustor ADM interface.
This is the output of df -Th; cat /proc/mdstat:
Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 250M 12K 250M 1% /tmp
/dev/md0 ext4 2.0G 352M 1.5G 19% /volume0
/dev/loop0 ext4 951K 13K 918K 2% /share
/dev/md1 ext4 5.4T -235T 240T - /volume1
cgroup tmpfs 250M 0 250M 0% /sys/fs/cgroup
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid5 sdd4[3] sdb4[4] sdc4[1]
5851357184 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md126 : active raid1 sdd3[6] sdb3[7] sdc3[4]
2095104 blocks super 1.2 [4/3] [UU_U]
md0 : active raid1 sdd2[6] sdb2[4]
2095104 blocks super 1.2 [4/2] [UU__]
unused devices: <none>
As you can see, the /dev/md1 volume does indeed look messed up, with a negative 'Used' figure in df.
How can I fix this?
Additional information which may be useful: it was a while ago, but I seem to remember putting the drives back in the 'wrong' order (not their original slots) during the process. The NAS didn't seem to mind and still recognised the drives with their original RAID member numbers. I put them back in the correct order later, but this does feel like it could be related to the problem.
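If it helps with diagnosing, I can also post the array and member details. I assume something along these lines is what's needed (device names taken from the /proc/mdstat output above):

mdadm --detail /dev/md1
mdadm --examine /dev/sdb4 /dev/sdc4 /dev/sdd4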
Is there some Linux shell magic that I could do to make it re-check the free space and register it correctly again?
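My own guess is that the ext4 free-space accounting is off rather than the array itself, so I was thinking of something along these lines, assuming the volume can be safely unmounted while the NAS services are stopped (I haven't run this yet; the device and mount point are taken from the df output above):

umount /volume1
e2fsck -fn /dev/md1    # dry run first: report problems without changing anything
e2fsck -f /dev/md1     # then let it actually repair the block/inode accounting
mount /dev/md1 /volume1

Would that be the right approach, or am I risking making things worse?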