On my Ubuntu 20.04.3 LTS server running as a KVM VM, I have defined 4 custom mount points in order to access the host's filesystem. Two of them are 9p mounts, and two are bindfs mounts layered on top to fix up the 9p mounts (i.e. mapping the host user/group to the local ones and fixing permissions). Here's the /etc/fstab snippet for the four mounts:
gdrive /mnt/gdrive 9p rw,sync,_netdev,trans=virtio,version=9p2000.L 0 0
/mnt/gdrive /home/user/gdrive fuse.bindfs perms=0775,force-user=user,force-group=user,x-systemd.requires=/mnt/gdrive 0 0
dump /mnt/storage 9p rw,sync,_netdev,trans=virtio,version=9p2000.L 0 0
/mnt/storage /home/user/storage fuse.bindfs perms=0775,force-user=user,force-group=user,x-systemd.requires=/mnt/storage 0 0
Both 9p mounts work correctly across reboots, but one of the bindfs mounts fails consistently. It's always exactly one of them; they never both work or both fail together. I suspect systemd's autogeneration of mount units from fstab, but I have no idea how to debug or verify this.
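As far as I understand, systemd-fstab-generator translates each fstab line into a .mount unit under /run/systemd/generator/. Below is a sketch of what I'd expect it to emit for the failing bindfs line (the unit body is my reconstruction, not the actual generator output; on the server itself `systemctl cat home-user-storage.mount` shows the real one):

```shell
# Hypothetical reconstruction of the generated unit for the bindfs mount;
# the real file would live at /run/systemd/generator/home-user-storage.mount.
cat > /tmp/home-user-storage.mount <<'EOF'
[Unit]
Requires=mnt-storage.mount
After=mnt-storage.mount

[Mount]
What=/mnt/storage
Where=/home/user/storage
Type=fuse.bindfs
Options=perms=0775,force-user=user,force-group=user
EOF

# The x-systemd.requires= fstab option should show up as these two directives:
grep -E '^(Requires|After)=' /tmp/home-user-storage.mount
```

On the real server, `systemctl cat home-user-storage.mount` and `systemctl list-dependencies --after home-user-storage.mount` should show the actual ordering edges, which is presumably where the cycle below comes from.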
Digging through dmesg, I found the following (on a boot where ~/storage failed to mount):
user@server:~$ dmesg | grep 'gdrive'
[ 3.842312] systemd[1]: remote-fs-pre.target: Found dependency on home-user-gdrive.mount/start
[ 3.843378] systemd[1]: remote-fs-pre.target: Found dependency on mnt-gdrive.mount/start
Whereas,
user@server:~$ dmesg | grep 'storage'
[ 3.849405] systemd[1]: local-fs.target: Found ordering cycle on home-user-storage.mount/start
[ 3.850602] systemd[1]: local-fs.target: Found dependency on mnt-storage.mount/start
[ 3.857836] systemd[1]: local-fs.target: Job home-user-storage.mount/start deleted to break ordering cycle starting with local-fs.target/start
The same messages appear (with the mount points swapped) when the other bindfs mount fails. Manually mounting the failed filesystem afterwards works with no errors.