
Original title: Unwanted, surprise LVM disk mapping causing problems with linux raid

I've run Linux RAID on Ubuntu for almost a decade but never messed with LVM, although I'm vaguely aware it exists.

I added a second array on my server recently, got it fully functional, copied some files to it (it's just a backup system, no critical files at risk here).

A week passed, and when I looked back at the backup array I saw that it was read-only. Furthermore, the devices shown in /proc/mdstat for the array look like /dev/dm-X, rather than the /dev/sdX devices they were when it was working and I was using it before.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md1 : active (read-only) raid5 dm-0[0] dm-2[3] dm-1[1]
      31249539072 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/117 pages [0KB], 65536KB chunk

md0 : active raid6 sdj1[6] sdm1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
      29296547840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

From googling I found that dm-X devices are LVM disk mappings. dmsetup shows they correspond to the disks I expect:

# dmsetup ls
osprober-linux-sdh1 (253:1)
osprober-linux-sdi1 (253:2)
osprober-linux-sdg1 (253:0)
# lvdisplay -a
# lvs
# pvs
#
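For anyone else digging into this: beyond dmsetup ls, the kernel's sysfs tree shows what a dm-X node is actually built on. A small helper sketch (dm_slaves is my own name, not a standard tool; the SYSFS variable exists only so it can be exercised off a live system):

```shell
# dm_slaves: print the underlying block devices behind a dm node,
# e.g. "dm_slaves dm-0" lists the partitions the mapping sits on.
# Reads the kernel's sysfs "slaves" directory; SYSFS can be
# overridden to point at a fake tree for testing.
dm_slaves() {
    local dm="$1" sysfs="${SYSFS:-/sys}"
    ls "$sysfs/block/$dm/slaves"
}
```

On a live system, "dmsetup table" and "lsblk" give the same picture of which real partitions sit underneath each mapping.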

My understanding was that LVM was something you had to very explicitly plan for and set up. As far as I'm aware, the above are the first LVM-related commands I've ever run in my life, so I'm at a loss as to how this became enabled, and specifically why only for the 3 devices in my new md array (and for no other devices, on or off my other array).

To set things up, I manually partitioned the devices with parted, manually created a RAID array with mdadm, formatted the array as ext4, copied files to it, and then just left it there. None of this involved LVM, and the devices assembled with mdadm were the standard disk partitions (/dev/sdX1) I had just created. The only operations I can think of acting on any related devices in the week since were running mdadm --assemble --scan to find the md volumes and their devices dynamically, and mount -a to remount fstab entries.

Perhaps I did something else I've forgotten, although it couldn't have been any specific LVM configuration because I don't have a clue how to even do that. I did some experimenting with Prometheus and node-exporter; I wonder if this could be a side effect of that?

The problem this is causing is that the mappings are holding the actual /dev/sdX devices busy, so I cannot use them to manually assemble the array:

# mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm: /dev/sdg1 is busy - skipping
mdadm: /dev/sdh1 is busy - skipping
mdadm: /dev/sdi1 is busy - skipping

I can only either manually assemble it from the dm-X devices

# mdadm --assemble /dev/md1 /dev/dm-0 /dev/dm-1 /dev/dm-2
mdadm: /dev/md1 has been started with 3 drives.

or auto-assemble it

# mdadm --assemble --scan
mdadm: /dev/md/1 has been started with 3 drives.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md1 : active (read-only) raid5 dm-0[0] dm-2[3] dm-1[1]
      31249539072 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/117 pages [0KB], 65536KB chunk

For whatever reason the dm-X devices are (effectively or formally) read-only, and thus the array assembled from them is also read-only.

Not really understanding what I'm looking at or how it got this way, I'm not certain how to phrase the concluding question, but: I want to mount my array read/write, and to do it as I always have, without any involvement from LVM or the device mapper. How can I do that?

  • It's annoying when you see all the blah blah on how you never heard of lvm but provide no actual full output of any command you mentioned. No one blamed you for anything.
    – Tom Yan
    Commented Jul 30, 2022 at 1:43
  • I've added some command output and tried to reduce all the blah blah blah. Commented Jul 30, 2022 at 6:08
  • Apparently it has something to do with an os-prober run (probably triggered by grub-mkconfig / update-grub). No idea why it set up device mappers over them. (Have stayed away from grub conf generation for a long time myself.)
    – Tom Yan
    Commented Jul 30, 2022 at 8:11

2 Answers


There is no evidence you are using LVM here, and dm-X devices are not necessarily, or even usually, LVM maps. "dm" simply denotes a device mapping and says nothing about the device being mapped. My guess is these are references to your hard drives.

You can use "blkid /dev/dm-X" to get the UUID of the block device and a hint as to what type of device it is. If that's not enough, you can run "blkid" by itself to get the UUID and the actual block device - I am sure there are other ways to do this.

If you want to verify for yourself whether you are using LVM (and it seems unlikely you are - if you assume your dm devices are LVM devices, your setup does not make sense, as LVM would in most sane setups either sit on top of RAID or replace RAID; it would not provide the block devices for RAID), you can use the command "pvdisplay", which will show the block devices that have been co-opted into LVM.


Thanks to the comments and the other answer, it was made concrete to me that there was no real LVM activity occurring, despite what my initial searching had led me to believe.

It had seemed hard to swallow that LVM was actually in use here, since from what little I knew about it I didn't think LVM was something you could just accidentally "slip and fall into", and having never used it I lack the knowledge to set it up without doing explicit research. Yet in spite of that feeling, the "paradox" was that everything I searched about dm-X devices kept pointing to LVM.

After seeing the empty output of every LVM informational command I learned about (lvs, pvs, lvdisplay, pvdisplay), I focused my searching on the device mapper instead of LVM.

The solution was to use dmsetup to simply remove the device mappings:

# dmsetup remove /dev/dm-0
# dmsetup remove /dev/dm-1
# dmsetup remove /dev/dm-2
# ls /dev/dm*
ls: cannot access '/dev/dm*': No such file or directory
# mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm: /dev/md1 has been started with 3 drives.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md1 : active raid5 sdg1[0] sdi1[3] sdh1[1]
      31249539072 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/117 pages [0KB], 65536KB chunk 
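Since the mappings all share the osprober- prefix, the removal can also be scripted rather than done name by name. A sketch keyed to my dmsetup ls output (osprober_maps is my own helper name):

```shell
# osprober_maps: pull the mapping names out of `dmsetup ls` output,
# keeping only the ones os-prober created (osprober-* prefix)
osprober_maps() {
    awk '/^osprober-/ { print $1 }'
}

# destructive - removes each matching mapping:
# dmsetup ls | osprober_maps | xargs -r -n1 dmsetup remove
```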

The array now assembles with the actual devices and is no longer read-only.
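Following the os-prober lead from the comments: to keep the mappings from reappearing on the next update-grub run, os-prober can be disabled in /etc/default/grub. (My assumption is that os-prober was the trigger; I haven't confirmed it beyond the comment above.)

```shell
# /etc/default/grub - stop os-prober from probing other partitions
GRUB_DISABLE_OS_PROBER=true
```

Then run update-grub to regenerate the config without the probe.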
