
I've got a working Legacy/UEFI boot stick with Xubuntu focal. All is well with it. Now I'm creating clones, and for that purpose, I've changed stuff on each:

  • hostname
  • UUIDs of partitions
  • UUIDs listed in /etc/fstab for those partitions.
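
Concretely, that looked roughly like this per clone, assuming ext4 partitions; the device name /dev/sdb2 and the hostname clone01 are just placeholders for my setup, so adjust them for yours:

sudo tune2fs -U random /dev/sdb2             # give the clone's root partition a fresh UUID
sudo blkid /dev/sdb2                         # read the new UUID back
sudo mount /dev/sdb2 /mnt
sudo nano /mnt/etc/fstab                     # swap the old UUID(s) for the new one(s)
echo clone01 | sudo tee /mnt/etc/hostname    # give the clone its own hostname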

These sticks will not boot because the UUIDs recorded in GRUB have not changed. I'm fairly baffled by the way GRUB records them, and I regard updating them by hand as too error-prone to try.

So, since I can mount those sticks on a running system, I'd like to chroot into a stick and run update-grub from inside the chroot. The examples I've seen don't give the chrooted process access to devices and such, and I don't know how to set that up properly. I'm guessing I want access to at least /dev, and maybe /sys and/or /proc, while in the chroot. All three are sort of "fake" in the original process, not part of the filesystem on any drive.

Any pointers? Is there a different approach to getting these clones to boot?

Anyone know how to make it boot when GRUB goes into its emergency mode?

2 Answers


Hmmm. I think I found an answer at this link: https://docs.rackspace.com/support/how-to/mount-a-partition-and-chroot-into-your-primary-file-system-from-rescue-mode/

It seems to be focused on particulars of a certain kind of server cluster, but the basics look usable. I'll report back if it works.

LATER: Yes! It worked, with a little tweaking, and ignoring stuff that didn't apply.

In short: if /etc/hostname and /etc/fstab are already set up as they should be (for me that meant /etc/fstab identifies the partitions by UUID, and those UUIDs are accurate), then this should work:

  1. Make sure any required mounts have been made. In my case that meant mounting the root partition on /mnt, and the boot partition on /mnt/boot.

  2. In a root shell, enter the code

mount --types proc none /mnt/proc   # mount the kernel's proc filesystem in the chroot
mount --rbind /sys      /mnt/sys    # make /sys available inside the chroot
mount --rbind /dev      /mnt/dev    # make device nodes available inside the chroot
chroot /mnt /bin/bash               # switch into the stick's root filesystem

And you should be in the chroot environment. You can easily check by comparing the output of hostname with the contents of /etc/hostname.
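
A quick check from inside the chroot:

hostname             # should print the clone's hostname
cat /etc/hostname    # should match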

The key thing I did in the chroot was run "update-grub", which got all the right UUIDs into the GRUB configuration. Then I immediately quit the chroot and, rather than worry about how to undo the mount commands, I just rebooted and selected the new drive. It was fine.
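
If you would rather unwind the mounts than reboot, something like this should do it, assuming the mount points used above:

exit                                     # leave the chroot
umount -R /mnt/dev /mnt/sys /mnt/proc    # undo the bind mounts recursively
umount /mnt/boot /mnt                    # then release the stick's partitions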

The other thing was to run that drive without any other drives attached (I just disconnected power from the others and ran "partprobe") and run "update-grub" again that way, to get a clean boot menu with no entries left over from the system I used for all of this.
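
If unplugging drives isn't convenient, another way to keep foreign entries out of the menu may be to disable os-prober before regenerating it; a sketch, run inside the chroot (or on the booted clone):

echo 'GRUB_DISABLE_OS_PROBER=true' >> /etc/default/grub    # stop GRUB scanning other drives for OSes
update-grub                                                # regenerate grub.cfg without those entries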

  • Yes! It took a little tweaking for my situation, but it worked!
    – 4dummies
    Commented Jan 28, 2021 at 1:38

I also found what may be a better solution. It solved the problem on another drive which, for some reason, would not accept update-grub, or even grub-install, when run from another drive.

After changing the UUIDs of the partitions and correcting /etc/fstab for those changes, I booted a thumb drive that had a GRUB boot-rescue tool. Lo and behold, it recognized the drive and fairly quickly rebuilt a usable GRUB configuration. The drive booted just fine after that.
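
If a rescue tool only gets the drive booted, rather than repairing GRUB itself, reinstalling GRUB from inside the booted system should make the fix stick; a rough sketch for a legacy (BIOS) install, with /dev/sdb as a placeholder for the stick:

sudo grub-install /dev/sdb    # reinstall GRUB to the stick's MBR
sudo update-grub              # rebuild the menu with the correct UUIDs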
