
Here's my problem:

I have a virtual machine that I had to reset because the UI became very buggy after I tried to mount a hot-pluggable virtual disk image to copy some files. That was premature and I should not have done it, but shit happens sometimes.

The virtual disk is encrypted with LUKS1 and the filesystem on it is btrfs. I also have a backup of the virtual disk in working state, but I made changes to certain files (a password DB, my Firefox profile and some other documents) that are not in that last backup. Therefore I would like to restore my btrfs partition rather than roll back to the backup.

When I start up the VM and enter the passphrase, the following error appears:


Enter passphrase for hd0,gpt2 (91188ec2-c31a-4812-a8c8-654cf5c793fb):
Attempting to decrypt master key...
Slot 0 opened
error: unknown filesystem
grub rescue>
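
For reference, a few basic commands are still available at the grub rescue> prompt; a minimal sketch of what can be probed there (partition name taken from the error above, output not included):

    ls                  (lists the drives/partitions GRUB can see; after "Slot 0 opened" a (crypto0) device may appear)
    ls (hd0,gpt2)       (probes the raw encrypted partition named in the error message)
    set                 (shows the prefix/root variables GRUB is currently using)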

=======================================================================

I tried to restore the partition using a live system, but the partition is not mountable.
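
For context, this is roughly how I make the partition available from the live system before any recovery attempt (device name and mapping name are from my setup):

    cryptsetup open /dev/sda2 sda2_crypt    # unlock the LUKS1 container, asks for the passphrase
    lsblk -f /dev/mapper/sda2_crypt         # check whether any filesystem is detected on the mapping
    blkid /dev/mapper/sda2_crypt            # likewise, would normally report TYPE="btrfs"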

Since I have backed up the corrupted virtual disk to a safe location, I can start over if something goes sideways: I just need to copy the corrupted virtual disk from the backup location back into the VM's working directory. So creating an image of my VM partition won't be necessary.

GParted shows the following errors when I open the information window for my btrfs partition, so it does not recognize the filesystem.


Encryption: luks
Path: /dev/mapper/sda2_crypt
UUID: 91188ec2-c31a-4812-a8c8-654cf5c793fb
Status: Open

Partition Path: /dev/sda2
1st Sector: 618496
Last Sector: 188731619
Total Sectors: 188113124

Warning: Unable to detect file system! Possible reasons are:

  • The file system is damaged
  • The file system is unknown to GParted
  • There is no file system available (unformatted)
  • The device entry /dev/mapper/sda2_crypt is missing

[Screenshot: GParted, VM disk image in corrupted state]


I used btrfs-tools and testdisk:

Command: btrfs rescue super-recover /dev/mapper/sda2_crypt

Output: No valid btrfs found on /dev/mapper/sda2_crypt
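
In case it helps, these are additional read-only btrfs checks I can run from the live system against the opened mapping; only a sketch, I have not pasted their output here:

    btrfs inspect-internal dump-super -fa /dev/mapper/sda2_crypt   # dump all superblock mirror copies
    btrfs check --readonly /dev/mapper/sda2_crypt                  # consistency check, writes nothing
    btrfs restore -D -v /dev/mapper/sda2_crypt /tmp                # dry run, lists files that could be salvaged (/tmp is just a placeholder target)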

=======================================================================

Additional info:

The SSDs of my host system and their partitions are in a healthy state, so no issues there. The VM is an EFI system and uses GPT as its partition table.

The GParted screenshot below shows the disk before the data corruption occurred.

[Screenshot: GParted, VM disk image in healthy state]

dmesg shows this error message when I attempt to mount my btrfs partition with this command:

mount /dev/mapper/sda2_crypt or /dev/sda1

/dev/mapper/sda2_crypt: Can't open blockdev /dev/sda1: Can't open blockdev
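
Written out in full, the mount attempts look like this (the mount point is only an example directory on the live system):

    mkdir -p /mnt/restore
    mount -o ro /dev/mapper/sda2_crypt /mnt/restore   # read-only mount of the opened LUKS mapping
    mount -o ro /dev/sda1 /mnt/restore                # same attempt against /dev/sda1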


Command: dmesg | tail

Output: systemd-gpt-auto-generator[3165]: (the boot loader did not set EFI variable LoaderDevicePartUUID.)

systemd-gpt-auto-generator[4398]: EFI loader partition unknown, exiting.

This output appears a few times, with different numbers between the brackets.


Command: hexdump -C /dev/sda2 | grep LUKS

Output: 00000000 4c 55 4b 53 ba be 00 01 61 65 73 00 00 00 00 00 |LUKS....aes.....|

So I guess the LUKS header is still there!
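
To verify that beyond the hexdump, cryptsetup can inspect the header directly; a minimal sketch (output not included here):

    cryptsetup isLuks /dev/sda2 && echo "LUKS header found"
    cryptsetup luksDump /dev/sda2    # prints version, cipher, key slots and the UUID, which should still match 91188ec2-...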


The testdisk results: see the attached testdisk log.

I would like to restore the partition to a working state.

Any help would be appreciated.

Attachments:

testdisk log:

https://pastebin.com/YcPNEcm9

=======================================================================

Additional info [26th May 2024]:

I recreated the problem on a copy of my working VM disk image.

As I stated above, I tried to mount a hot-pluggable virtual disk image to back up important files while the VM was running.

The UI / Caja (the file manager) became very buggy. I decided to shut the VM down, which led to the following errors.

See the screenshot below.

[Screenshot: VM shutdown errors]

The system tried to write changes to the system disk but the system disk was somehow in a read-only state.

If my understanding of btrfs is correct, it is possible that entries were written to the journal/log tree but the corresponding changes were never committed to the data on disk.
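
If that theory is right, btrfs offers read-only mount options that skip replaying the log or fall back to an older tree root; a sketch of what I would try on a copy of the image (the mount point is only an example):

    mount -o ro,nologreplay /dev/mapper/sda2_crypt /mnt/restore            # skip log replay; requires ro
    mount -o ro,rescue=usebackuproot /dev/mapper/sda2_crypt /mnt/restore   # try an older tree root (on older kernels the option is plain 'usebackuproot')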

I backed up the LUKS header as suggested, along with some files from the root filesystem: the folders 'boot' and 'etc' and the files cryptfile.bin, initrd.img, initrd.img.old, vmlinuz and vmlinuz.old.
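
For reference, this is roughly how the header backup and the restore suggested in the comments were done (luks-header.img is a placeholder file name; /dev/sda2 stands for the partition of whichever image is attached to the live system):

    # taken from the healthy backup image
    cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file luks-header.img
    # written onto the damaged image (after setting a copy of it aside first)
    cryptsetup luksHeaderRestore /dev/sda2 --header-backup-file luks-header.img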

Comments:

  • Maybe you should try to extract the LUKS header from the backup and replace / overwrite the one on the damaged image with it. (Better make a backup / copy of it first.) – Tom Yan, commented May 23 at 1:53
  • @Tom Yan I did what you suggested, but it had no effect; the issue is still there. – Commented May 26 at 19:29
  • I noticed that my working VM image shows the text 'Welcome to GRUB!' when I start the VM and asks me to enter my passphrase. The corrupt VM image does not show that welcome message, which is kind of odd. I think there is also something wrong with GRUB, in addition to the filesystem corruption of the root partition. – Commented May 26 at 20:41
