Original title: Unwanted, surprise LVM disk mapping causing problems with linux raid
I've done Linux RAID on Ubuntu for almost a decade, but I've never messed with LVM, although I'm vaguely aware it exists.
I recently added a second array to my server, got it fully functional, and copied some files to it (it's just a backup system, so no critical files are at risk here).
A week passed, and when I looked at the backup array again I saw that it was read-only. Furthermore, the devices shown in /proc/mdstat for the array, rather than being /dev/sdX as they were when I set it up and was using it, now look like /dev/dm-X.
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md1 : active (read-only) raid5 dm-0[0] dm-2[3] dm-1[1]
31249539072 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/117 pages [0KB], 65536KB chunk
md0 : active raid6 sdj1[6] sdm1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
29296547840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
bitmap: 0/44 pages [0KB], 65536KB chunk
unused devices: <none>
From googling I found that dm-X devices are LVM disk mappings. dmsetup shows they correspond to the disks I expect:
# dmsetup ls
osprober-linux-sdh1 (253:1)
osprober-linux-sdi1 (253:2)
osprober-linux-sdg1 (253:0)
# lvdisplay -a
# lvs
# pvs
#
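In case the output above isn't enough to go on, these are the device-mapper inspection commands I know of that should show what the mappings actually sit on top of; I haven't pasted their output here, so treat this as a sketch. dmsetup table should show whether each mapping is just a plain linear map onto the partition, dmsetup deps lists the underlying block device each mapping depends on, and lsblk shows the dm devices nested under the sdX1 partitions.
# dmsetup table osprober-linux-sdg1
# dmsetup deps osprober-linux-sdg1
# lsblk -o NAME,TYPE,SIZE,RO,MOUNTPOINT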
My understanding was that LVM was something you had to very explicitly plan for and set up. As far as I'm aware, the commands above are the first LVM-related commands I've ever run in my life, so I'm at a loss as to how this got enabled, and specifically why only for the 3 devices in my new md array (and not for any other device, whether part of my other array or not).
To set things up, I manually partitioned the devices with parted, manually created a RAID array with mdadm, formatted the array as ext4, copied files to it, and then just left it there. None of this involved LVM, and the devices assembled with mdadm were the standard disk partitions (/dev/sdX1) I had just created. The only operations I can think of that acted on any related devices in the week since were running mdadm --assemble --scan to find the md volumes and their devices dynamically, and mount -a to remount the fstab entries.
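For reference, this is roughly the sequence I used when setting it up, reconstructed from memory; the device names match the ones above, but the exact parted invocations and the mount point (/mnt/backup) are from memory rather than my shell history:
# parted --script /dev/sdg mklabel gpt mkpart primary 0% 100%
# parted --script /dev/sdh mklabel gpt mkpart primary 0% 100%
# parted --script /dev/sdi mklabel gpt mkpart primary 0% 100%
# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdg1 /dev/sdh1 /dev/sdi1
# mkfs.ext4 /dev/md1
# mount /dev/md1 /mnt/backup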
Perhaps I did something else that I've forgotten, although it couldn't have been any specific LVM configuration, because I don't have a clue how to even do that. I did do some experimenting with prometheus and node-exporter; could this be a side effect of that?
The problem this causes is that LVM is holding the actual /dev/sdX1 devices busy, so I can no longer use them to assemble the array manually:
# mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm: /dev/sdg1 is busy - skipping
mdadm: /dev/sdh1 is busy - skipping
mdadm: /dev/sdi1 is busy - skipping
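To see what is actually holding the partitions busy, the checks I know of are the holders entries in sysfs and the device-mapper dependency tree (commands only, output omitted):
# ls /sys/class/block/sdg1/holders/
# dmsetup ls --tree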
I can only either assemble it manually from the dm-X devices
# mdadm --assemble /dev/md1 /dev/dm-0 /dev/dm-1 /dev/dm-2
mdadm: /dev/md1 has been started with 3 drives.
or auto-assemble it
# mdadm --assemble --scan
mdadm: /dev/md/1 has been started with 3 drives.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md1 : active (read-only) raid5 dm-0[0] dm-2[3] dm-1[1]
31249539072 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/117 pages [0KB], 65536KB chunk
For whatever reason the dm-X devices are (effectively or formally) read-only, and thus the array assembled from them is also read-only.
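In case it's relevant, these are the read-only checks I know of (just a sketch, no output captured): blockdev --getro and the sysfs ro flag report whether the underlying dm device itself is marked read-only, and mdadm --readwrite is supposed to flip an array out of read-only mode, though I haven't dared run it while the array sits on the dm devices.
# blockdev --getro /dev/dm-0
# cat /sys/block/dm-0/ro
# mdadm --readwrite /dev/md1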
Not really understanding what I'm looking at or how it got this way, I'm not sure how to phrase the concluding question, but: I want to mount my array in read/write mode, the way I always have, without any involvement from LVM. How can I do that?
Update: the osprober-linux-* names in dmsetup ls apparently come from os-prober, which gets run as part of grub configuration generation (grub-mkconfig / update-grub). No idea why it set up device mappers over them. (I've stayed away from grub config generation for a long time myself.)
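If that's right, the cleanup I'd expect to work is to stop the array, remove the osprober mappings, and reassemble from the partitions; this is an untested sketch using the device names from above:
# mdadm --stop /dev/md1
# dmsetup remove osprober-linux-sdg1
# dmsetup remove osprober-linux-sdh1
# dmsetup remove osprober-linux-sdi1
# mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1 /dev/sdi1
# mount -a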