I've done a fresh Debian install on a system whose second disk was previously used by Proxmox and still has some LVM volumes on it.

If I run lsblk I get:

NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                             8:0    0   1.8T  0 disk 
├─sda1                          8:1    0   1.8T  0 part 
├─crucial2tb-vm--102--disk--1 254:0    0    50G  0 lvm  
├─crucial2tb-vm--102--disk--2 254:1    0    42G  0 lvm  
├─crucial2tb-vm--102--disk--0 254:2    0    16G  0 lvm  
├─crucial2tb-vm--102--disk--3 254:3    0    16G  0 lvm  
├─crucial2tb-vm--102--disk--4 254:4    0    32G  0 lvm  
├─crucial2tb-vm--100--disk--0 254:5    0    32G  0 lvm  
├─crucial2tb-vm--103--disk--0 254:6    0    32G  0 lvm  
└─crucial2tb-vm--104--disk--0 254:7    0     8G  0 lvm  
nvme0n1                       259:0    0 238.5G  0 disk 
├─nvme0n1p1                   259:1    0   512M  0 part /boot/efi
├─nvme0n1p2                   259:2    0   237G  0 part /
└─nvme0n1p3                   259:3    0   977M  0 part [SWAP]

I would like to get rid of all volumes/partitions on /dev/sda and have an empty disk that I can format with ext4. How do I do that?

I did not configure LVM on this new installation, but the system still seems to have discovered the old LVM volumes and to think they are somehow in use.

sudo vgs has no output; it doesn't show any volume groups. blkid shows:

/dev/mapper/crucial2tb-vm--104--disk--0: UUID="a04ed926-c3b8-4021-b216-c0da516824b1" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/crucial2tb-vm--102--disk--2: PTUUID="a5073dd8" PTTYPE="dos"
/dev/mapper/crucial2tb-vm--103--disk--0: PTUUID="98d651b8" PTTYPE="dos"
.....
/dev/mapper/crucial2tb-vm--100--disk--0: PTTYPE="dos"
/dev/mapper/crucial2tb-vm--102--disk--3: PTUUID="65c2a2ad" PTTYPE="dos"

3 Answers


Nothing here says they are in use – they're just there. And just like, say, partitions on an MS-DOS-partitioned hard drive, they are detected. Neat, actually!

Check the output of sudo vgs – you should see your old volume group there.

You can then just vgremove {volume group name} the volume group (since there are still logical volumes in it, you'll probably want to add -ff to that call).

Finally, you're just left with a partition on sda – the underlying physical volume. You can then format that, or delete it and replace it with a different partition layout as you like.
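
A minimal sketch of that sequence, assuming the old volume group is named crucial2tb (as the device names in lsblk suggest); substitute whatever sudo vgs actually reports:

sudo vgs                        # confirm the old volume group's name
sudo vgremove -ff crucial2tb    # -ff because logical volumes still exist in it
sudo pvremove /dev/sda1         # drop the LVM physical-volume label
sudo mkfs.ext4 /dev/sda1        # format the remaining partition with ext4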

Personally, I think LVM is a good idea; I'd just make a new volume group containing that PV instead of going straight back to plain partitions. Low to no performance overhead, with the full flexibility of being able to later create and delete volumes (and thus filesystems) as I like, make them span multiple storage devices, and stripe or mirror them.
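
If you keep LVM instead, a rough sketch of reusing /dev/sda1 as the physical volume; the volume-group name "data", the logical-volume name "bulk" and the 500G size are placeholders, not anything from the question:

sudo pvcreate /dev/sda1               # mark the partition as an LVM physical volume
sudo vgcreate data /dev/sda1          # create a new volume group on it
sudo lvcreate -L 500G -n bulk data    # carve out a logical volume; pick a size to suit
sudo mkfs.ext4 /dev/data/bulk         # put ext4 on the new logical volume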

  • sudo vgs has no output; it doesn't show any volume groups. blkid shows /dev/mapper/* devices. – bibac
  • So none are active. Great: you can simply delete the sda1 partition and be rid of the physical volume hosting the VG.

These were not actual LVM volumes, just leftover device-mapper entries. Either way, they caused tools to think the disk was in use.

I listed the device-mapper entries with dmsetup info and then deleted them one by one, e.g. dmsetup remove crucial2tb-vm--102--disk--0.

After that the device was no longer shown as "in use" and I was able to repartition the whole disk.
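
Roughly what that looked like; dmsetup remove_all (to drop every unused mapping in one go) is my addition, not something the answer used:

sudo dmsetup info -c                              # list all device-mapper entries
sudo dmsetup remove crucial2tb-vm--102--disk--0   # remove one stale mapping (repeat for each)
sudo dmsetup remove_all                           # or try to remove all unused mappings at once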


Use the fdisk utility to create a new partition table. This will wipe the old partitions along with the LVM volumes. Run sudo fdisk /dev/sda to make changes to the partitions on this drive, type p to print the partitions (you will see only one partition of 1.8T), then type g to create a new GPT partition table. Finally, type w to write the changes. Keep in mind that this deletes all partitions and contents from the drive.
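
The interactive session described above looks roughly like this (again, it destroys everything on /dev/sda):

sudo fdisk /dev/sda
# at the fdisk prompt:
#   p    print the current partition table (shows the single 1.8T partition)
#   g    create a new, empty GPT partition table
#   w    write the changes and exit

After that you can create a fresh partition with n and format it with sudo mkfs.ext4 /dev/sda1.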

