Running Debian 5.10 within QEMU, I'm using the 9pfs file system to access files stored on my host system, as Debian comes with built-in support and QEMU supports that file system natively.
When booted, I can simply mount the system using
sudo mount -t 9p -o trans=virtio,msize=131072 hostfs /mnt/host
Works as expected. For those not familiar with how this works: you need to tell QEMU what hostfs actually means, e.g. by adding an option like
-virtfs local,path="$HOME/Documents",mount_tag=hostfs,security_model=mapped-file
That way my Documents folder is exposed as hostfs and, once mounted, fully available in the Debian guest system.
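For completeness, the whole invocation then looks something like this. The disk image name and the -m/-smp values are placeholders from my own setup; only the -virtfs part is relevant to 9pfs:

```shell
# Hypothetical full invocation; debian.qcow2 and the memory/CPU
# settings are placeholders -- only -virtfs matters for 9pfs.
qemu-system-x86_64 \
    -m 2G -smp 2 \
    -drive file=debian.qcow2,if=virtio \
    -virtfs local,path="$HOME/Documents",mount_tag=hostfs,security_model=mapped-file
```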
As I always want to have access to the host system, I modified /etc/fstab and added the following line:
hostfs /mnt/host 9p defaults,trans=virtio,msize=131072 0 0
And when I then run sudo mount -a, this also works as expected.
However, when I reboot the Debian guest system, the boot fails when systemd tries to mount that entry. It dynamically generates a mnt-host.mount unit, and when it tries to mount it, I get
mount: /mnt/host: bad option
mnt-host.mount: Mount process exited, code=exited, status=32/n/a
and this brings the boot process to a halt. I have to hit CTRL+D to continue booting.
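As an aside, the unit name comes straight from the mount point path: systemd-fstab-generator strips the leading slash and turns the remaining slashes into dashes. A minimal sketch of that naming rule in plain shell (this ignores the extra character escaping that systemd-escape(1) would apply to unusual paths):

```shell
#!/bin/sh
# Derive the systemd mount unit name from a mount point path.
# Mimics systemd-fstab-generator's naming for simple paths;
# systemd-escape(1) additionally encodes special characters.
unit_name_for() {
    path="${1#/}"                        # drop the leading slash
    echo "$(echo "$path" | tr '/' '-').mount"
}

unit_name_for /mnt/host                  # prints: mnt-host.mount
```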
The crazy thing is: once the boot has finished, /mnt/host is mounted! That's because systemd apparently tries to mount it again. Later in the log there is another mnt-host.mount job, and this time it simply succeeds as expected.
I'm a bit lost here as to why systemd tries to mount that entry twice, and why the first attempt never succeeds even though both attempts use the same options.
I'm aware of the workaround of adding x-systemd.automount (as an option) to the fstab entry.
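For reference, a sketch of the fstab entry with that workaround applied would be (same fields as my original entry, with the automount option appended):

```
hostfs /mnt/host 9p defaults,trans=virtio,msize=131072,x-systemd.automount 0 0
```

With x-systemd.automount, systemd mounts the file system on first access rather than during early boot, which sidesteps the ordering problem, but I'd still like to understand why the first mount attempt fails.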