
I am testing a systemd timer and trying to override its default interval, but without success. I'm also wondering whether there is a way to ask systemd when the service is going to be run next.

The base timer file (/lib/systemd/system/snapbackend.timer):

# Documentation available at:
# https://www.freedesktop.org/software/systemd/man/systemd.timer.html

[Unit]
Description=Run the snapbackend service once every 5 minutes.

[Timer]
# You must have an OnBootSec (or OnStartupSec) otherwise it does not auto-start
OnBootSec=5min
OnUnitActiveSec=5min
# The default accuracy is 1 minute. I'm not too sure that either way
# will affect us. I am thinking that since our computers will be
# permanently running, it probably won't be that inaccurate anyway.
# See also:
# http://stackoverflow.com/questions/39176514/is-it-correct-that-systemd-timer-accuracysec-parameter-make-the-ticks-slip
#AccuracySec=1

[Install]
WantedBy=timers.target

# vim: syntax=dosini

The override file (/etc/systemd/system/snapbackend.timer.d/override.conf):

# This file was auto-generated by snapmanager.cgi
# Feel free to do additional modifications here as
# snapmanager.cgi will be aware of them as expected.
[Timer]
OnUnitActiveSec=30min

I ran the following commands and the timer still ticks once every 5 minutes. Could there be a bug in systemd?

sudo systemctl stop snapbackend.timer
sudo systemctl daemon-reload
sudo systemctl start snapbackend.timer

So I was also wondering: how can I find out when the timer will tick next? That would immediately tell me whether the interval is 5 min. or 30 min., but systemctl status snapbackend.timer says nothing about that. Is there a command that shows the delay currently in use?

For those interested, here is the service file too (/lib/systemd/system/snapbackend.service), although I would imagine that it has no effect on the timer ticks...

# Documentation available at:
# https://www.freedesktop.org/software/systemd/man/systemd.service.html

[Unit]
Description=Snap! Websites snapbackend CRON daemon
After=snapbase.service snapcommunicator.service snapfirewall.service snaplock.service snapdbproxy.service

[Service]
# See also the snapbackend.timer file
Type=simple
WorkingDirectory=~
ProtectHome=true
NoNewPrivileges=true
ExecStart=/usr/bin/snapbackend
ExecStop=/usr/bin/snapstop --timeout 300 $MAINPID
User=snapwebsites
Group=snapwebsites
# No auto-restart, we use the timer to start once in a while
# We also want to make systemd think that exit(1) is fine
SuccessExitStatus=1
Nice=5
LimitNPROC=1000
# For developers and administrators to get console output
#StandardOutput=tty
#StandardError=tty
#TTYPath=/dev/console
# Enter a size to get a core dump in case of a crash
#LimitCORE=10G

[Install]
WantedBy=multi-user.target

# vim: syntax=dosini

4 Answers


The state of currently active timers can be shown using systemctl list-timers:

$ systemctl list-timers --all
NEXT                         LEFT     LAST                         PASSED       UNIT                         ACTIVATES
Wed 2016-12-14 08:06:15 CET  21h left Tue 2016-12-13 08:06:15 CET  2h 18min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service

1 timers listed.

From @phg's comment and answer, I found a page with the solution. Timer settings are cumulative, so you need to reset them first; otherwise the previous entry stays in effect. This is mainly useful for calendar entries, but it works the same way for all timer settings.

Having one entry which resets the timer before changing it to a new value works as expected:

# This file was auto-generated by snapmanager.cgi
# Feel free to do additional modifications here as
# snapmanager.cgi will be aware of them as expected.
[Timer]
OnUnitActiveSec=
OnUnitActiveSec=30min
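If you'd rather not write the drop-in file by hand, systemctl edit creates the same override file and reloads the daemon for you when you save (a sketch; the unit name is the one from the question):

```shell
# Opens an editor on /etc/systemd/system/snapbackend.timer.d/override.conf
# and performs the equivalent of daemon-reload on save.
sudo systemctl edit snapbackend.timer

# In the editor, clear the old monotonic timer, then set the new value:
#   [Timer]
#   OnUnitActiveSec=
#   OnUnitActiveSec=30min

# Restart the timer so the new interval takes effect immediately.
sudo systemctl restart snapbackend.timer
```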

No, there does not appear to be a way to see exactly when a timer will run next. systemd offers systemctl list-timers and systemctl status something.timer, but those don't show the effect of AccuracySec= and possibly other directives that shift the time.

If you set AccuracySec=1h on two servers, they will both report that the same timer will fire at the exact same time, but in fact they could start up to an hour apart! If you want to know whether two randomized timers might collide, there appears to be no way to check the final calculated run time to find out.

There is a systemd issue open to make the output of list-timers more accurate / less confusing.

Additionally, there is the RandomizedDelaySec= option, which is combined with AccuracySec= as described in the man page.
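As an illustration of how the two directives interact, a drop-in like the following (a hypothetical fragment, not from the question's configuration) would make each activation land anywhere within a window of up to roughly 90 minutes after the nominal elapse time, since the random delay and the accuracy window are applied together:

```
[Timer]
# Spread activations randomly over up to 30 minutes after the
# nominal elapse time, then allow up to 1 hour of coalescing slack.
RandomizedDelaySec=30min
AccuracySec=1h
```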

  • Interesting point about the timers. The info we get from list-timers, though, is already pretty good to understand whether your usage of the timers is correct or not. Commented Jun 25, 2019 at 21:10
  • Not in my case. I would like to use the exact same configuration on twin hosts, but use AccuracySec= to ensure that both aren't doing maintenance at the same time. I would like to see when the timers will actually fire on each host, but can't. Commented Jun 26, 2019 at 0:40
  • Ah. I have similar problems. I would use an elected master (using a vote system): the master sends a "do maintenance" message to computer 1; once computer 1 is done, it reports its new status to the master, which then asks computer 2 to do its maintenance, etc. One of those computers would of course be the master, but the code running the maintenance loop should be separate from the actual maintenance. One problem to keep in mind: if your cluster grows quite a bit, a full pass takes time, and some computers may not get updated for a long while! Commented Jun 26, 2019 at 1:14

To see the current state of $SYSTEMD_TIMER, run

systemctl list-timers $SYSTEMD_TIMER

# will give output similar to
# NEXT                        LEFT         LAST PASSED UNIT           ACTIVATES
# Sun 2022-01-09 21:33:00 CET 2h 9min left n/a  n/a    $SYSTEMD_TIMER $SYSTEMD_SERVICE

So for your particular question, the command is systemctl list-timers snapbackend.timer.
