
I have many years of UNIX experience, but I think I made a fatal mistake:

Seeing messages during boot about non-empty mount points, I took the chance to clean them up while migrating a system to a new environment. I booted the SLES Rescue System, mounted the root filesystem and the basic system structure (proc, sys, dev) on /mnt, then ran chroot /mnt and mount -va.

So everything looked nice; I adjusted some configuration settings, and while unmounting I checked the mount points. For example: /var/cache was mounted and umount succeeded, but ll /var/cache still showed a non-empty mount point even though the filesystem was displayed as not mounted. So I removed the contents.

Basically I repeated these steps for every mounted filesystem, then left the chroot environment, unmounted the rest and rebooted.

Unfortunately the system won't boot as GRUB complained it cannot find normal.mod.

So is this a feature of btrfs subvolumes? Who can explain what was going on?

/etc/fstab

As I was asked for /etc/fstab, here is what a typical system has:

/dev/sys/root                              /            btrfs  defaults              0  0
/dev/sys/root                              /var         btrfs  subvol=/@/var         0  0
/dev/sys/root                              /usr/local   btrfs  subvol=/@/usr/local   0  0
/dev/sys/root                              /tmp         btrfs  subvol=/@/tmp         0  0
/dev/sys/root                              /srv         btrfs  subvol=/@/srv         0  0
/dev/sys/root                              /root        btrfs  subvol=/@/root        0  0
/dev/sys/root                              /opt         btrfs  subvol=/@/opt         0  0
UUID=9fd27493-d194-48ba-a4bc-3551123e0d3b  /home        xfs    defaults              0  0
/dev/sys/boot                              /boot        btrfs  defaults              0  0
UUID=0092-D1D5                             /boot/efi    vfat   utf8                  0  2
/dev/sys/root                              /.snapshots  btrfs  subvol=/@/.snapshots  0  0
  • Without seeing your fstab I can only guess (so not an answer). Suppose there is a subvolume named boot. Suppose you mount subvol=/whatever as / and then subvol=/whatever/boot as /boot/. If you umount /boot then you will still see its content – not as a mounted subvolume, but as a subvolume that exists under /whatever in the Btrfs subvolume tree (see this answer if this is confusing to you). This is a possible scenario where the files in /boot are the same regardless of whether /boot is mounted. Commented Jun 25, 2022 at 11:04
  • As I had a similar experience before, I suspect systemd silently remounted the filesystem, so I did not clean up the mount point (as expected), but cleaned the actually mounted filesystem. Unfortunately I cannot prove it any more.
    – U. Windl
    Commented Jun 4, 2023 at 19:32

1 Answer


How it worked

/dev/sys/root contains a Btrfs filesystem. Several entries in your fstab mount different subvolumes of it to different mountpoints.

Hypothesis: the default subvolume of the Btrfs filesystem on /dev/sys/root is /@, and once it is mounted at /, mounting many of the other subvolumes at their particular mount points makes little to no sense, because the right (and usually non-empty) directories are already where you expect them.

(To see the default subvolume, run sudo btrfs subvolume get-default /.)

Please read this other answer of mine and understand the conceptual difference between the Btrfs directory (and subvolume) tree on a device and the directory structure in your OS. Without this knowledge the rest of the current answer may be very confusing to you.

If the default subvolume is /@, then /@ from the Btrfs tree appears as / in the OS tree, along with everything beneath it. This mount alone is enough to see e.g. the directory /@/var from the Btrfs tree as /var in the OS tree. I mean, if you try to access /var before /var is mounted, then

  • the OS will check if /var (of the OS) is a mountpoint; it's not;
  • so the OS will check if / (of the OS) is a mountpoint; it is and it's associated with /@ of the Btrfs;
  • so the OS will show you /@/var of the Btrfs as /var of the OS.
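The lookup described above can be mimicked with a small shell sketch. This is a toy model, not real kernel code: each pair of arguments is an OS mountpoint followed by the Btrfs path mounted there, and the hypothetical helper `resolve` picks the longest matching mountpoint and rewrites the OS path into the Btrfs tree.

```shell
#!/bin/sh
# Toy model of the lookup above (not real kernel code): `resolve` is a
# hypothetical helper. Arguments after the path are alternating pairs of
# "OS mountpoint" and "Btrfs path mounted there"; the deepest (longest)
# matching mountpoint wins, as in the real VFS.
resolve() {
    p=$1; shift
    best_len=-1 best_mp= best_sv=
    while [ "$#" -ge 2 ]; do
        mp=$1 sv=$2
        shift 2
        case $p in
            "$mp"|"${mp%/}"/*)
                if [ "${#mp}" -gt "$best_len" ]; then
                    best_len=${#mp} best_mp=$mp best_sv=$sv
                fi ;;
        esac
    done
    # Append the remainder below the chosen mountpoint to its Btrfs path.
    printf '%s\n' "${best_sv%/}${p#"${best_mp%/}"}"
}

# With /var mounted: /var wins as the deepest mountpoint.
resolve /var/cache  /  /@  /var  /@/var    # prints /@/var/cache
# After "umount /var": only / matches, yet the result is the same.
resolve /var/cache  /  /@                  # prints /@/var/cache
```

The two demo calls make the point of this answer concrete: with the default subvolume /@ mounted at /, the translation of /var/cache comes out identical whether /var is mounted or not.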

It so happens that /@/var of the Btrfs is a subvolume. Additionally mounting subvol=/@/var at /var (of the OS) will make the same directory appear as /var (of the OS). Specifically, if you now access /var then

  • the OS will check if /var (of the OS) is a mountpoint; it is and it's associated with /@/var of the Btrfs;
  • so the OS will show you /@/var of the Btrfs.

And if you unmount /var (of the OS) then you will go back to the first case.

So it virtually doesn't matter whether /var (of the OS) is mounted or not (there may be subtleties). One way or another you will see /@/var of the Btrfs as /var in the OS.

Several other entries in your fstab mount different subvolumes where you would see them anyway. I don't really see the point of these entries (there may be some subtlety I cannot identify at the moment, or these entries are indeed totally pointless).

For comparison: if /var (of the OS) were mounted with subvol=/var, then mounting it would make sense, because /var of the Btrfs is not under /@ of the Btrfs, and thus you cannot reach it just by mounting /@ of the Btrfs as / of the OS.
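The redundant entries can even be picked out of the posted fstab mechanically: any entry whose subvol= option lies under /@ (the assumed default subvolume) mounts a directory that is already reachable through /. A hedged sketch, run against the fstab text copied from the question:

```shell
# Flag fstab entries whose subvol= option lies under the assumed default
# subvolume /@ -- these mounts only re-expose directories already visible
# through the / mount. The heredoc is the fstab posted in the question.
awk '$4 ~ /(^|,)subvol=\/@\// { print $2 }' <<'EOF'
/dev/sys/root                              /            btrfs  defaults              0  0
/dev/sys/root                              /var         btrfs  subvol=/@/var         0  0
/dev/sys/root                              /usr/local   btrfs  subvol=/@/usr/local   0  0
/dev/sys/root                              /tmp         btrfs  subvol=/@/tmp         0  0
/dev/sys/root                              /srv         btrfs  subvol=/@/srv         0  0
/dev/sys/root                              /root        btrfs  subvol=/@/root        0  0
/dev/sys/root                              /opt         btrfs  subvol=/@/opt         0  0
UUID=9fd27493-d194-48ba-a4bc-3551123e0d3b  /home        xfs    defaults              0  0
/dev/sys/boot                              /boot        btrfs  defaults              0  0
UUID=0092-D1D5                             /boot/efi    vfat   utf8                  0  2
/dev/sys/root                              /.snapshots  btrfs  subvol=/@/.snapshots  0  0
EOF
# prints /var /usr/local /tmp /srv /root /opt /.snapshots, one per line
```

Note that /home, /boot and /boot/efi are not flagged: they come from other devices, so mounting them genuinely adds content that would not be visible otherwise.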


How you broke it

… was mounted, umount succeeded, but still ll … showed a non-empty mount point while the filesystem was displayed not mounted. So I removed the contents.

As I said, several entries in your fstab mount different subvolumes where you would see them anyway. In these cases by removing the contents after umount you removed the contents you saw before umount. You thought you removed shadowed, irrelevant contents, but actually you removed the actual, important contents.
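A defensive check would have caught this before anything was deleted. This is a sketch, assuming GNU coreutils `stat`; the helper name `id_of` is mine: compare a directory's device:inode before and after umount. If the identity does not change, both views are the same data and nothing there is a stale leftover.

```shell
# Sketch of a safety check (assumes GNU coreutils `stat`; `id_of` is a
# hypothetical helper): a path whose device:inode stays the same across a
# umount was never shadowing anything -- deleting "leftovers" there deletes
# the real data.
id_of() { stat -c '%d:%i' "$1"; }

# Intended use on the affected system (requires root; for illustration only):
#   before=$(id_of /var/cache)
#   umount /var/cache
#   after=$(id_of /var/cache)
#   [ "$before" = "$after" ] && echo "same data - do NOT clean this up!"

# Harmless demonstration of the helper on a temporary directory:
d=$(mktemp -d)
id_of "$d"        # prints something like 64769:131073 (values vary)
rmdir "$d"
```

Had this check been applied to /var/cache, it would have reported the same identity before and after umount, revealing that the "leftover" contents were the live data.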

There are two things that don't fit though:

  1. Your example mountpoint is /var/cache, which you allegedly unmounted; but there is no /var/cache in the fstab you posted.
  2. According to the fstab you posted, while / (of the OS) comes from /dev/sys/root, /boot (of the OS) comes from /dev/sys/boot. The described mechanism of breaking things cannot apply there. Yet you claim that "GRUB complained it cannot find normal.mod", and from this I deduce you did break things under /boot somehow.

Therefore I suspect the published fstab (you described it as "what a typical system has") is not the exact fstab of the affected OS. I suspect /boot there used the same filesystem as /, was unnecessarily mounted, and was thus prone to the described scenario of breaking things.

  • For the fstab I wrote "here is what a typical system has", so it was not the original fstab. I can only speculate: possibly I had reinstalled the system before you asked for the fstab (as the system was ruined significantly).
    – U. Windl
    Commented Jun 4, 2023 at 23:11
