My server is set up with RAID1. A few nights ago sda dropped out completely and data was corrupted. I replaced the drive, cloned the partition table and added the new partitions to their respective arrays. While adding sda3 (md2) the resync kept failing due to I/O errors on sdb. I copied all the files I could save from sdb3 to sda3, reconfigured the RAID and replaced sdb with a new drive. I'm now adding the sdb partitions back to the arrays. My concern is as follows:

cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md3 : active raid1 sda4[0]
      1822442815 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[1]
      1073740664 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[2] sda2[0]
      524276 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[2] sda1[3]
      33553336 blocks super 1.2 [2/2] [UU]

Both md0 and md1 displayed as [U_] prior to syncing, so why does md2 display as [_U]? I fear losing the data when adding sdb3. My thinking is that mdadm treats the first slot ([U_]) as primary and the second slot ([_U]) as secondary, hence my fear that the data on sda3 will be wiped to match sdb3.
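
For reference, this is roughly how I have been re-adding the partitions (the md2 one is the step I am hesitant to run):

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb3   # not run yet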

Please advise.

Thanks.

1 Answer

I would not be concerned. I suspect what happened here was that md3 was created using a command like

mdadm --create /dev/md3 -l 1 -n 2 /dev/sda4 /dev/sdb4

and the other was

mdadm --create /dev/md2 -l 1 -n 2 /dev/sdb3 /dev/sda3

Notice that your other two arrays (md0 and md1) have the sdb,sda order as well.
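
If you want to see the slot assignments explicitly rather than inferring them from /proc/mdstat, mdadm will show them; these are read-only checks (substitute your own array and device names):

mdadm --detail /dev/md2        # per-member RaidDevice number and state
mdadm --examine /dev/sda3      # superblock view: Device Role, Array State, Events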

If you want to be super-paranoid, go ahead and back up your files to an external drive, but I suspect that when you finally get around to doing

mdadm /dev/md2 -a /dev/sdb3

the recovery will proceed smoothly, with the new partition (/dev/sdb3) being synchronized from the existing partition (/dev/sda3). The position in the list is of no importance: Linux software RAID remembers which member held valid data and which is the newly added, not-yet-synchronized one.
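
Once sdb3 has been added, the rebuild is easy to keep an eye on, and the same mdadm --detail call as above will report its progress:

watch cat /proc/mdstat       # live view of the resync
mdadm --detail /dev/md2      # includes a Rebuild Status line while the resync runs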
