
I have a custom systemd service that automatically mounts some partitions under certain conditions. The service executes a shell script containing some mount commands. When I run the script directly, everything works fine and the partitions stay mounted even after the script returns. But when I start it via systemd, the partitions are mounted, but then get automatically unmounted as soon as the shell script returns. How can I keep them mounted?

My custom systemd unit:

[Unit]
Description=Automatically mount secondary volumes

[Service]
User=root
WorkingDirectory=/opt/vmount
ExecStart=/opt/vmount/vmount.sh
Restart=no

[Install]
WantedBy=multi-user.target
  • I am seeing this problem too, with a backup script running on a systemd timer. In my case I want the disk unmounted once the script stops, but if I call systemctl stop, the disk is unmounted before my backup script can handle the SIGTERM and do its own cleanup, so that is undesirable. Very interested if you ever solved it! Commented Jan 5, 2023 at 7:43
  • @tukan, my workflow is that I'm doing backups to a different disk for each weekday, and it's easier to do the logic of which disk to mount from within Python code than to try to figure out how to describe this in terms of systemd units. And it's just unexpected: if I didn't ask systemd itself to do the mounting, I didn't expect it to unmount unconditionally. I have yet to find even a description of this as intended behaviour. Commented Jan 6, 2023 at 10:23
  • @ChrisBillington are you possibly mounting filesystem types that are implemented in userspace like FUSE does? e.g. ntfs-3g, sshfs, and many others?
    – LL3
    Commented Jan 6, 2023 at 14:19
  • @LL3 Yes, I am mounting an NTFS filesystem. Does that mean my backup script is spawning a subprocess for the mount or something, and systemd is sending that process SIGTERM at the same time as it does the main process? I do see output from ntfs-3g in the journal for my service upon the mount and unmount commands, which confuses me unless it's a subprocess. Commented Jan 7, 2023 at 21:51
  • @ChrisBillington Yes, it likely does mean that. Your script runs mount, which in turn spawns a fusermount process as per normal FUSE behavior. That process lives until an explicit unmount or until it gets killed some way. By default such a process belongs to the same control group assigned to your script by systemd; you can double-check that using systemd-cgls. The default systemd behavior for stopping a simple service such as the one in the question is to kill the entire control group, see systemd.kill(5). There are ways for you to fix your use case, depending on your overall setup (one possible override is sketched right after these comments).
    – LL3
    Commented Jan 8, 2023 at 10:42
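
Building on LL3's last comment, here is a minimal sketch of one possible fix, assuming the unit from the question is installed as vmount.service (the unit name is not given in the question, so that name is an assumption). A drop-in with KillMode=process tells systemd to kill only the process it started directly (the script) when the service stops, so a FUSE helper such as ntfs-3g spawned by mount is left running; see systemd.kill(5):

# /etc/systemd/system/vmount.service.d/keep-mounts.conf (hypothetical drop-in path)
[Service]
# Kill only the main process on stop; leave helper processes
# (e.g. the FUSE mount daemon) in the control group running.
KillMode=process

After adding the drop-in, run systemctl daemon-reload and restart the service. This scenario mostly concerns userspace filesystems such as ntfs-3g or sshfs; a plain kernel mount has no long-lived helper process for systemd to kill.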

1 Answer


I don't know what is hidden in the sh/bash script, but mounting from a shell script run by a service is not the proper way to mount a volume via systemd.

The following is systemd's way of persistent mounting:

Create /etc/systemd/system/mnt-backup.mount (for example with vim)

with the following contents:

[Unit]
Description=proper mounting with systemd

[Mount]
What=/dev/sdc1
Where=/mnt/backup
Type=ext4

[Install]
WantedBy=multi-user.target
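
One detail not stated above: the file name of a mount unit must be the systemd-escaped form of its Where= path, which is why /mnt/backup is served by mnt-backup.mount. For other mount points you can generate the correct name with systemd-escape; using the path from this answer, systemd-escape -p --suffix=mount /mnt/backup prints mnt-backup.mount.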

To mount the volume, simply start the unit: systemctl start mnt-backup.mount

Next check its systemd state: systemctl status mnt-backup.mount

And check if it was actually mounted with mount.
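
For example, with the device and mount point from the unit above, findmnt /mnt/backup (or mount | grep /mnt/backup) should list /dev/sdc1 mounted at /mnt/backup as ext4.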

To make it persistent across reboots, enable the unit:

systemctl enable mnt-backup.mount
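
If you want to enable and start the unit in one step, systemctl enable --now mnt-backup.mount does both.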

Note: all commands should be executed as root; if you use sudo instead, you need to prefix each command with it.
