I have a custom systemd service that automatically mounts some partitions under certain conditions. The service executes a shell script containing some mount commands. When I run the script directly, everything works fine and the partitions stay mounted even after the script returns. But when I start it with systemd, the partitions are mounted and then automatically unmounted as soon as the shell script returns. How can I keep them mounted?
My custom systemd unit:
[Unit]
Description=Automatically mount secondary volumes
[Service]
User=root
WorkingDirectory=/opt/vmount
ExecStart=/opt/vmount/vmount.sh
Restart=no
[Install]
WantedBy=multi-user.target
Comment: When I run `systemctl stop`, the disk is unmounted before my backup script can handle the SIGTERM and do its own cleanup, so that is undesirable. Very interested if you ever solved it!

Answer: Your script mounts via `mount`, which in turn spawns a `fusermount` process, as per normal FUSE behavior. That process lives until an explicit `unmount`, or until it gets killed some way. By default such a process belongs to the same control group assigned to your script by systemd. You can double-check that using `systemd-cgls`.
The default systemd behavior for stopping a `simple` service such as the one in the OP is to kill the entire control group; see `systemd.kill(5)`. There are ways to fix your use case, depending on your overall setup.
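For instance, one common approach is to declare the service as a one-shot that is considered started after the script exits, and to restrict what systemd kills on stop. This is a sketch based on the OP's unit, not a tested drop-in:

```ini
[Unit]
Description=Automatically mount secondary volumes

[Service]
# oneshot + RemainAfterExit=yes: the unit counts as active even after
# vmount.sh has returned, instead of being torn down immediately.
Type=oneshot
RemainAfterExit=yes
User=root
WorkingDirectory=/opt/vmount
ExecStart=/opt/vmount/vmount.sh
# Only kill the main process on stop, not the whole control group, so
# any fusermount helper spawned by the script survives.
KillMode=process

[Install]
WantedBy=multi-user.target
```

Note that `KillMode=process` also changes what happens on `systemctl stop`; if you need an orderly unmount there, an `ExecStop=` script doing the explicit `umount` calls is the cleaner companion to this setup.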