
I resized the root filesystem on Fedora 29 in VirtualBox from 15 GB to 20 GB but can no longer get it to boot: it always lands in the dracut emergency shell. After first resizing the VDI and its snapshots in VirtualBox, I enlarged the disks as per steps 1.1 to 1.11 of the accepted answer on this thread. I was executing step 2 (reboot) when I hit the problem.

Running blkid, I see /dev/sda1 and /dev/sda2. I have mounted /dev/sda1 from Fedora Live and can see that it is the boot partition, so I could edit the GRUB configuration if I wanted to, but I can find no way of mounting /dev/sda2. LVM reports no devices.

There is a difference in the blkid output for the two devices: /dev/sda1 reports UUID, TYPE and PARTUUID, but I only see PARTUUID on /dev/sda2. When I deleted and re-created the partition with fdisk, did it wipe out some metadata that LVM needs? Any idea how I can get this system booting again?
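In hindsight, one precaution would have made recovery trivial: saving the first few sectors of the disk with dd before letting fdisk rewrite anything. This is not part of the resize instructions, just a sketch; disk.img is a scratch file standing in for the real device (e.g. /dev/sda):

```shell
# Sketch only: disk.img stands in for the real device (e.g. /dev/sda)
dd if=/dev/zero of=disk.img bs=512 count=8 status=none

# Save the first few sectors (partition table + LVM label area) before editing
dd if=disk.img of=first-sectors.bak bs=512 count=4 status=none

# If the edit goes wrong, the backup can be written straight back
dd if=first-sectors.bak of=disk.img conv=notrunc status=none
```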

  • In 1.8, did you take care to use the very same starting sector as before, or did you just hit Enter? The answer makes a silent (inelegant, IMO) assumption that the default values are right. Compare this answer, where it says "The partition in question was recreated larger but its starting sector remained." In your case LVM is an additional layer, but the general rule stands: enlarging a partition to the right by deleting and re-creating it requires the left end (the starting sector) to stay put. Is it possible you changed the starting sector? Commented Jan 20, 2020 at 0:34
  • Yes the starting sector was exactly right, I triple-checked it before I wrote the changes and also confirmed it again from Fedora Live once boot-up had been broken. Definitely a good question to ask, but not the problem in this case.
    – chapter34
    Commented Jan 20, 2020 at 19:17
  • I have got the system booting now. I ended up using dd to compare the first couple of KB of the disk device with the same device from an earlier snapshot, taken a month ago when I was still running FC23. I found that the lvm_type field in the LABEL_HEADER had been zeroed out! I'm not sure why fdisk would have done that, and I wasn't sure if there was a "proper" way to fix it, but a quick binary edit, a dd of the modified block back to disk, and hey presto, the missing volumes reappeared in /dev/mapper and could be mounted. I will try to reproduce this and post a proper analysis.
    – chapter34
    Commented Jan 20, 2020 at 19:27
  • You are not saying what exactly you did to extend your filesystem. Also the exact error message is missing.
    – U. Windl
    Commented Mar 30, 2022 at 12:27

1 Answer


I'm going to answer my own question in case others hit the same problem in the future. The primary reason I encountered this issue is that fdisk has changed slightly since the post I was following was written, so I was asked a question I was not expecting (i.e. one not mentioned in the instructions I was following), and I answered it poorly.

fdisk will now alert you to the presence of an LVM2_member signature and will ask you if you wish to remove it:

Partition #2 contains a LVM2_member signature.

Do you want to remove the signature? [Y]es/[N]o:

Unfortunately, for reasons I won't go into here (you can read the full blog post linked below), I answered Y. Unsurprisingly, this zeroed the 64-bit lvm_type field in the LABEL_HEADER on the disk, leaving me with a system that would no longer boot and instead dumped me rather unceremoniously into the dracut emergency shell.
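For anyone wanting to see the mechanics, the wipe and the repair can be simulated on a scratch file. On a PV, the LVM label header normally sits in the second 512-byte sector, with the 8-byte type field ("LVM2 001") at byte offset 536 (0x218) from the start of the device — the same offset wipefs reports for an LVM2_member signature. This is a sketch, not the exact edit I performed; on a real disk, verify the offset against a known-good copy first:

```shell
# Scratch file standing in for the PV; real repairs need extreme care
dd if=/dev/zero of=disk.img bs=512 count=4 status=none

# Plant the type field where LVM keeps it: label header in sector 1,
# type field at offset 24 within it, i.e. byte 536 (0x218) overall
printf 'LVM2 001' | dd of=disk.img bs=1 seek=536 conv=notrunc status=none

# Simulate fdisk's signature wipe: zero those same 8 bytes
dd if=/dev/zero of=disk.img bs=1 seek=536 count=8 conv=notrunc status=none

# The repair: write the original bytes back (restoring the exact original
# bytes also leaves the label sector's CRC valid)
printf 'LVM2 001' | dd of=disk.img bs=1 seek=536 conv=notrunc status=none

# Confirm the signature is back
dd if=disk.img bs=1 skip=536 count=8 status=none; echo
```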

The solution was to write that 64-bit field back to the disk. I couldn't find a tool that would do this for me, so from within Fedora Live I used dd to read the first couple of blocks from the disk into a file, edited the file with vi -b, and then wrote it back to the disk with dd. I knew exactly which bytes to edit because I compared the output of od -Ax -tx1z -v <filename> with a copy from an earlier snapshot that I had available. (Note that the Fedora Live image I was working in did not have xxd available.) After re-adding the missing signature, LVM immediately recognised the disk and I was able to mount my volumes again.
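The comparison step can be sketched the same way. cmp -l pinpoints every differing byte between the live copy and the known-good copy, and od -Ax -tx1z -v gives a hex-plus-ASCII view when xxd is unavailable. The two files here are tiny stand-ins for the real header dumps:

```shell
# Stand-ins for the current header and the known-good snapshot copy
printf 'LABELONE..........' > header.now
printf 'LABELONE..LVM2 001' > header.good

# Hex dump with printable characters alongside (works where xxd is missing)
od -Ax -tx1z -v header.now

# List every differing byte: decimal offset, then both values in octal.
# cmp exits non-zero when the files differ, hence the trailing "|| true".
cmp -l header.now header.good || true
```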

Because the full explanation and procedure is quite long, I blogged about it. If you've found yourself in a similar predicament, you might find my notes useful; you can view them here.

  • Oh well, if you use LVM and wipe the PV signature from the disk, it's not surprising that the kernel cannot find the LVs (also PVs, VGs) any more. BTW: after a change, LVM automatically creates backups in /etc/lvm/backup (using vgcfgbackup) that you can restore using vgcfgrestore.
    – U. Windl
    Commented Mar 30, 2022 at 12:32

