
I have a failed RAID-5 array that I can't seem to recover. The short version: the data was in RAID 5 managed by LVM, which has built-in RAID support these days. I noticed one of the disks going bad, so I got a new one and issued pvmove to migrate the extents from the failing disk to the new disk. Some time during the migration, the old disk failed and stopped responding entirely (not sure why the migration would trigger that), so I rebooted, and now the array doesn't come up at all. Everything looks well enough: 3 of the 4 disks are working, and I'm pretty sure even the failed one is back up temporarily (though I don't trust it). But when I issue lvchange -a y vg-array/array-data, activation fails with the following in dmesg:

not clean -- starting background reconstruction
device dm-12 operational as raid disk 1
device dm-14 operational as raid disk 2
device dm-16 operational as raid disk 3
cannot start dirty degraded array.

I'm pretty sure there are ways to force the start using mdadm, but I haven't seen anything equivalent for LVM. Since three disks are still present, all my data should be there and it must be recoverable. Does anyone know how to do it?
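For reference, the migration and the activation attempt were roughly the following (the pvmove device names here are illustrative, not the exact ones on my system):

# move extents off the failing PV onto the replacement (device names illustrative)
pvmove /dev/sdd1 /dev/sde1
# after the reboot, this is the activation that fails with the dmesg output above
lvchange -a y vg-array/array-data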

2 Answers


The solution to this is to add the kernel boot parameter

md-mod.start_dirty_degraded=1

to your kernel command line in /etc/default/grub, then run update-grub and reboot. I still had to activate the volume manually, but with that parameter set, a dirty degraded array produces a warning instead of an error.

This is documented at https://www.kernel.org/doc/html/latest/admin-guide/md.html#boot-time-assembly-of-degraded-dirty-arrays
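In practice the change looks roughly like this (assuming a Debian-style setup where update-grub exists; on other distros regenerate the GRUB config with grub2-mkconfig instead):

# /etc/default/grub -- keep whatever options you already have and append the new one
GRUB_CMDLINE_LINUX_DEFAULT="quiet md-mod.start_dirty_degraded=1"

# regenerate grub.cfg, reboot, then activate the volume by hand
update-grub
reboot
lvchange -a y vg-array/array-data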


After adding the new device, I created a PV on it, added the PV to the volume group with vgextend, and removed the missing PV with vgreduce --removemissing --force. I could then repair my raid6 array on LVM with the following command:

lvconvert --repair <vgname>/<lvname>

You can watch the progress of the repair with lvs, which shows it in the Cpy%Sync column.
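Putting the whole sequence together, it was roughly this (the device name and the VG/LV names are placeholders, substitute your own):

# prepare the replacement disk as a PV and add it to the volume group
pvcreate /dev/sdX1
vgextend myvg /dev/sdX1
# drop the PV that is missing for good
vgreduce --removemissing --force myvg
# rebuild the degraded raid LV onto free space in the VG
lvconvert --repair myvg/mylv
# watch Cpy%Sync climb to 100%
lvs -a -o +devices myvg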

