
My openmediavault server's SSD died, and I replaced it with a new one (different brand, same capacity). Now I want to restore my last backup, made with fsarchiver via the OMV backup plugin, and I'm following this guide. After completing the first 13 steps, I'm stuck on the last two, where the critical work is done.

These were the partitions on my new NVMe SSD before trying to restore (I had installed OMV on it):

Device         Boot     Start       End   Sectors   Size Id Type
/dev/nvme0n1p1           2048 486395903 486393856 231.9G 83 Linux
/dev/nvme0n1p2      486397950 488396799   1998850   976M  5 Extended
/dev/nvme0n1p5      486397952 488396799   1998848   976M 82 Linux swap / Solaris

After I ran the "restore grub and the partition table" step:

dd if=/mnt/array/Backup/omvbackup/backup-omv-30-ago-2021_03-00-01.grubparts of=/dev/nvme0n1

Now it looks like this:

Device         Boot Start       End   Sectors   Size Id Type
/dev/nvme0n1p1          1 488397167 488397167 232.9G ee GPT

And when I try to restore the main partition:

fsarchiver restfs backup-omv-30-ago-2021_03-00-01.fsa id=0,dest=/dev/nvme0n1p1

I get the following error:

oper_restore.c#152,convert_argv_to_strdicos(): "/dev/nvme0n1p1" is not a valid block device
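
For reference, a quick way to see which partition device nodes the kernel currently exposes (which is what fsarchiver is complaining about) is lsblk, assuming it is installed:

lsblk /dev/nvme0n1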

So I think I messed up the partition table. Maybe the grubparts file is not supposed to be written to /dev/nvme0n1 but somewhere else? Before trying to restore the partition table, I could see GRUB installed with:

dd bs=512 count=1 if=/dev/nvme0n1 2>/dev/null | strings

But I can't see that anymore.

Edit: sizes of the different backup files:

-rw-r--r-- 1 root users        818 *.blkid
-rw-r--r-- 1 root users        590 *.fdisk
-rw-r--r-- 1 root users 5226895118 *.fsa
-rw-r--r-- 1 root users        446 *.grub
-rw-r--r-- 1 root users       1408 *.grubparts
-rw-r--r-- 1 root users       1035 *.packages
  • How large is the grubparts file? (In bytes.) Are there any other small files nearby? Commented Sep 17, 2021 at 9:51
  • Just updated the question to include those.
    – dvilela
    Commented Sep 17, 2021 at 14:40

3 Answers


It looks like the .grubparts file is from the wrong disk. Your "old" partition list shows a normal MBR-format partition table, but what you restored looks like the "protective MBR" that is normally found on GPT-partitioned disks – it has the special partition of type 0xEE that usually indicates "you shouldn't be looking here, you should be looking at the GPT in sector 1 instead".

(The MBR is in sector 0, while the 'main' GPT occupies sectors 1-33 and the 'backup' GPT is at the end of the disk.)

Also, GPT disks are typically used with UEFI firmware, and the EFI boot process doesn't use the "boot sector" – it is normal for the protective MBR to be accompanied by a completely blank boot code area. (The bootloader for EFI systems is stored as a regular file in a regular partition.)
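
One way to confirm which situation you are in is to point fdisk at the saved dump and at the disk itself (a sketch, assuming a reasonably recent util-linux fdisk, which also accepts an image file as an argument; the "Disklabel type:" line, dos versus gpt, is the tell):

# inspect what is stored in the dump, without writing anything to the disk
fdisk -l /mnt/array/Backup/omvbackup/backup-omv-30-ago-2021_03-00-01.grubparts

# inspect what the disk currently carries
fdisk -l /dev/nvme0n1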

There are two options:

  • Look for another file that might have the correct partition table.

    (Also, after restoring a partition table using dd, you might need to explicitly tell the kernel to rescan it – otherwise the /dev nodes won't appear on their own. This can be done using partx -u, or partprobe, or by running fdisk and asking it to 'w'rite the partitions it found.)

  • Manually rebuild the partition table from scratch, by creating partitions using the "start" and "end" sector numbers that you conveniently have in the old 'fdisk' output (there is a rough sfdisk sketch after this list).

    (You don't need to manually create the "extended" partition, just p1 as "primary" and p5 as "logical".)
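
A rough sketch of the mechanics of both options, assuming the target really is /dev/nvme0n1 and taking the start/size values from the old fdisk listing in the question (this uses a scripted sfdisk rather than interactive fdisk; sfdisk --no-act lets you dry-run it first):

# after a dd restore, make the kernel re-read the partition table
partprobe /dev/nvme0n1        # or: partx -u /dev/nvme0n1

# recreate the old MBR layout from the saved sector numbers
sfdisk /dev/nvme0n1 <<'EOF'
label: dos
/dev/nvme0n1p1 : start=2048, size=486393856, type=83
/dev/nvme0n1p2 : start=486397950, size=1998850, type=5
/dev/nvme0n1p5 : start=486397952, size=1998848, type=82
EOF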

  • I'll have a look at that then. So the target of the grubparts dd command (/dev/nvme0n1) was right? Despite the partition type being wrong, of course.
    – dvilela
    Commented Sep 17, 2021 at 14:45

I reinstalled OMV in UEFI mode to recover the partition table, and since the new SSD has the same capacity as the old one, I skipped the GRUB restoration part and directly restored the filesystem backup with fsarchiver.


I've had the same issue after an fsarchiver restore of OMV6 on Raspberry Pi OS. The restored system always booted with the root filesystem read-only, although remounting it read-write worked. The issue (in my case) was actually pretty simple: /etc/fstab references the file systems by PARTUUID instead of by /dev/mmcblk0p****.

What I did:

  • Remount root as r/w
  • Replace PARTUUID=xyz with /dev/mmcblk0p*** in /etc/fstab
  • Reboot

I suppose this means the PARTUUIDs after the restore are different from the ones in the backed-up /etc/fstab.
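
A minimal sketch of those steps, assuming the read-only mount is the root filesystem and that lsblk is available to map PARTUUIDs back to device nodes (the actual partition numbers have to come from your own system):

# remount the read-only root filesystem read-write
mount -o remount,rw /

# show which /dev/mmcblk0p* node carries which PARTUUID
lsblk -o NAME,PARTUUID

# then edit /etc/fstab, replacing the PARTUUID=... entries with the matching
# /dev/mmcblk0p* device nodes, and reboot
nano /etc/fstab
reboot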

