
I have the following service unit to back up my project database:

[Unit]
Description=%i mysql backup for `apples` database
After=mysqld.service
Requires=mysqld.service

[Service]
User=root
EnvironmentFile=/home/site/backend/config/prod/env
ExecStart=bash -c "mysqldump apples | gzip > /home/site/backups/apples.%I.sql.gz && cp /home/site/backups/apples.%I.sql.gz /mnt/cloud/backup/db/apples/%I/apples_$(date +%%Y-%%m-%%d_%%H-%%M-%%S).sql.gz && touch /mnt/cloud/backup/db/apples/%I"
Restart=on-failure
StartLimitInterval=60
StartLimitBurst=3
Type=simple

[Install]
WantedBy=multi-user.target

It is used to run backups hourly, daily, weekly, and monthly via timers (hence the %I instance specifier in the service).

Now, if I run the command from ExecStart manually (replacing each %% with % and each %I with an instance name such as monthly, of course), the mtime of the directory containing the resulting backup is updated as expected.
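For reference, this is a small sketch of what the escaped date pattern in ExecStart expands to once systemd collapses each %% to a single %:

```shell
# systemd turns %% into %, so the shell ultimately runs:
#   date +%Y-%m-%d_%H-%M-%S
# producing a filename suffix like 2021-07-13_04-00-00.
suffix=$(date +%Y-%m-%d_%H-%M-%S)
echo "apples_${suffix}.sql.gz"
```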

If I one-shot one of the instances with systemctl start [email protected], the same thing happens: it works fine.

BUT

when a backup file is created in the corresponding directory by the actual systemd timer firing at its scheduled time, the {hourly,daily,weekly,monthly} directory's mtime is not updated. As you can see, I even added a touch invocation to make sure the mtime gets updated, but for some reason it just doesn't. I have a bunch of daily backups in those directories, and the backup files produced by the scheduled timer runs have correct mtimes, but the directories containing them keep their old mtimes.

My timers look like this:

[Unit]
Description=%I mysql backup for database `apples`
Requires=mysqld.service
After=mysqld.service

[Timer]
OnCalendar=*-*-* 04:00:00

[Install]
WantedBy=timers.target

/mnt/cloud is mounted via WebDAV, if that matters.

I need those mtimes updated so that I can set up monitoring for the automated backups.
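As an aside, a monitoring check along these lines (hypothetical path and threshold, shown for the daily directory) could key off the newest backup file's mtime rather than the directory's, which avoids depending on directory-mtime behaviour on the WebDAV mount at all:

```shell
# Hypothetical freshness check: succeed if any backup file in the
# directory is newer than the threshold. Paths/threshold are examples.
backup_dir="${BACKUP_DIR:-/mnt/cloud/backup/db/apples/daily}"
max_age_min="${MAX_AGE_MIN:-1500}"   # ~25 hours of slack for a daily backup

if [ -n "$(find "$backup_dir" -maxdepth 1 -name '*.sql.gz' -mmin -"$max_age_min" -print -quit 2>/dev/null)" ]; then
  echo "OK: recent backup found in $backup_dir"
else
  echo "STALE: no backup newer than $max_age_min minutes in $backup_dir"
fi
```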

What could be the problem?

  • Two possible things that spring to mind are (1) %I is not set to the value you think it should be, and (2) one of the earlier commands has failed (you've strung them together with &&, so if any fails the remainder will not run). Commented Jul 13, 2021 at 13:58
  • @roaima That's the first thing I thought of, too, but if either of those were the case, the backups wouldn't reach the backup directories in the first place; instead, only the parent directories' mtimes are unaffected. And I don't have any error records in journalctl | grep backup, only something like [email protected]: Succeeded.
    – jojman
    Commented Jul 13, 2021 at 14:04
  • At this point I'd be inclined to modify the ExecStart to call a script, with the first few lines of the script as follows: #!/bin/bash, then exec 1>>/tmp/execscript.log 2>&1, then set -x. Remember to make it executable, of course. Then wait for the script to be triggered and look in /tmp/execscript.log. It should give you a trace of the commands executed and any errors generated. I prefer this approach to relying on the systemd journal when debugging. Commented Jul 13, 2021 at 14:29

1 Answer


I don't know why, but simply moving the whole ExecStart command to a script file fixed the issue:

ExecStart=/home/site/scripts/backup.sh %I
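For completeness, here is a sketch of what such a backup.sh could contain. The actual script was not shown; the function wrapper and the overridable path arguments below are illustration only, with the same commands reproduced from the unit's ExecStart:

```shell
#!/bin/bash
set -euo pipefail

# run_backup PERIOD [LOCAL_DIR] [REMOTE_BASE]
# PERIOD is the timer instance: hourly|daily|weekly|monthly.
run_backup() {
  local period="$1"
  local local_dir="${2:-/home/site/backups}"
  local remote="${3:-/mnt/cloud/backup/db/apples}/$period"
  local stamp
  stamp=$(date +%Y-%m-%d_%H-%M-%S)

  mysqldump apples | gzip > "$local_dir/apples.$period.sql.gz"
  cp "$local_dir/apples.$period.sql.gz" "$remote/apples_$stamp.sql.gz"
  touch "$remote"   # bump the directory mtime explicitly
}

# Invoked from the unit as: backup.sh %I
if [ "$#" -ge 1 ]; then
  run_backup "$@"
fi
```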
