43

A solution that does not require additional tools would be preferred.

4
  • 1
    What about a lock file?
    – Marco
    Commented Sep 18, 2012 at 11:07
  • @Marco I found this SO answer using that, but as stated in a comment, this can create a race condition Commented Sep 18, 2012 at 11:18
  • 3
    This is BashFAQ 45.
    – jw013
    Commented Sep 18, 2012 at 13:49
  • @jw013 thanks! So maybe something like ln -s my.pid .lock will claim the lock (followed by echo $$ > my.pid) and on failure can check whether the PID stored in .lock is really an active instance of the script Commented Sep 18, 2012 at 15:22

15 Answers

26

Almost like nsg's answer: use a lock directory. Directory creation is atomic on Linux, Unix, *BSD, and many other OSes.

if mkdir -- "$LOCKDIR"
then
    # Do important, exclusive stuff
    if rmdir -- "$LOCKDIR"
    then
        echo "Victory is mine"
    else
        echo "Could not remove lock dir" >&2
    fi
else
    # Handle error condition
    ...
fi

You can put the PID of the locking shell into a file in the lock directory for debugging purposes, but don't fall into the trap of thinking you can check that PID to see whether the locking process is still running. Many race conditions lie down that path.
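As a minimal sketch (the lock path here is an assumption), the PID can be dropped into the directory right after mkdir succeeds; note that rmdir alone would then fail because the directory is no longer empty, which is why this sketch uses rm -rf:

```shell
#!/bin/sh
# Hypothetical lock location -- adjust for your environment.
LOCKDIR="${TMPDIR:-/tmp}/myscript.lock"

if mkdir -- "$LOCKDIR" 2>/dev/null; then
    # Record our PID for debugging only -- do NOT use it for liveness checks.
    echo "$$" > "$LOCKDIR/pid"
    # ... do important, exclusive stuff here ...
    rm -rf -- "$LOCKDIR"    # rm -rf, because the directory now contains the PID file
    echo "done"
else
    echo "Lock already held by PID $(cat "$LOCKDIR/pid" 2>/dev/null)" >&2
    exit 1
fi
```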

4
  • 1
    I'd consider using the stored PID to check whether the locking instance is still alive. However, here's a claim that mkdir is not atomic on NFS (which is not the case for me, but I guess one should mention that, if true) Commented Sep 18, 2012 at 13:08
  • Yes, by all means use the stored PID to see if the locking process still executes, but don't attempt to do anything other than log a message. The work of checking the stored pid, creating a new PID file, etc, leaves a big window for races.
    – user732
    Commented Sep 18, 2012 at 13:37
  • 1
Ok, as lhunath stated, the lockdir would most likely be in /tmp, which is usually not NFS-shared, so that should be fine. Commented Sep 19, 2012 at 8:33
  • 1
    I would use rm -rf to remove the lock directory. rmdir will fail if someone (not necessarily you) managed to add a file to the directory.
    – chepner
    Commented Sep 22, 2012 at 4:32
24

To add to Bruce Ediger's answer, and inspired by this answer, you should also add more smarts to the cleanup so the lock is released even when the script terminates unexpectedly:

#Remove the lock directory
cleanup() {
    if rmdir -- "$LOCKDIR"; then
        echo "Finished"
    else
        echo >&2 "Failed to remove lock directory '$LOCKDIR'"
        exit 1
    fi
}

if mkdir -- "$LOCKDIR"; then
    #Ensure that if we "grabbed a lock", we release it
    #Works for SIGTERM and SIGINT(Ctrl-C) as well in some shells
    #including bash.
    trap "cleanup" EXIT

    echo "Acquired lock, running"

    # Processing starts here
else
    echo >&2 "Could not create lock directory '$LOCKDIR'"
    exit 1
fi
2
  • Alternatively, if ! mkdir "$LOCKDIR"; then handle failure to lock and exit; fi trap and do processing after if-statement.
    – Kusalananda
    Commented Feb 22, 2018 at 13:19
  • 1
    It's worth pointing out that the trap definition must remain at the global scope of the script. Moving that mkdir block inside a function will result in cleanup: command not found. (I learned this the hard way) Commented Dec 2, 2020 at 20:13
17

One other way to make sure only a single instance of a bash script runs:

#! /bin/bash -

# Check if another instance of script is running
if pidof -o %PPID -x -- "$0" >/dev/null; then
  printf >&2 '%s\n' "ERROR: Script $0 already running"
  exit 1
fi

...

pidof -o %PPID -x -- "$0" prints the PID of an existing instance of the script¹ if one is already running, or exits with status 1 if none is.


¹ Well, any process with the same name...

5
  • I prefer the simplicity of this solution. “Simplicity is the ultimate sophistication.” -- Leonardo da Vinci Commented Nov 22, 2020 at 23:34
  • That doesn't work if the script is run as ./thatscript the first time and /path/to/thatscript the second time. It's generally a bad idea to rely on process names as those can be arbitrarily set to any value by anyone. Commented Jul 16, 2022 at 8:52
  • 1
    @StéphaneChazelas Regarding process name changing - noted! Could be avoided by using basename $0 instead of $0. Still can never be safe, as you mentioned, but at least that function can be called from different paths.
    – lonix
    Commented Jul 16, 2022 at 9:10
  • @lonix rather "$(basename -- "$0")" (assuming $0 doesn't end in newline characters) or "${0##*/}" (or $0:t in zsh). Remember expansions must be quoted in sh/bash. Commented Jul 16, 2022 at 9:13
  • I'm not sure whether this is atomic or not... If two instances are run simultaneously, the two of them could execute, or none of them. This could potentially be a race condition.
    – ShellCode
    Commented Feb 26, 2023 at 22:09
11

Although you've asked for a solution without additional tools, this is my favourite way using flock:

#!/bin/sh

[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :

echo "servus!"
sleep 10

This comes from the examples section of man flock, which further explains:

This is useful boilerplate code for shell scripts. Put it at the top of the shell script you want to lock and it'll automatically lock itself on the first run. If the env var $FLOCKER is not set to the shell script that is being run, then execute flock and grab an exclusive non-blocking lock (using the script itself as the lock file) before re-execing itself with the right arguments. It also sets the FLOCKER env var to the right value so it doesn't run again.

Points to consider:

Update: If your script may get called through different paths (e.g. through its absolute or relative path), i.e. if $0 differs between parallel invocations, then the above doesn't work properly. Use a unique environment variable (FLOCK_HAFBX in the example) instead:

[ -z "$FLOCK_HAFBX" ] && exec env FLOCK_HAFBX=1 flock -en "$0" "$0" "$@" || :

The environment variable should be unique so nested flocked scripts work as expected.

1
  • 1
    This is, by far, the most elegant. Thank you.
    – h q
    Commented Mar 28, 2021 at 11:36
7

This may be too simplistic; please correct me if I'm wrong. Isn't a simple ps enough?

#!/bin/bash 

me="$(basename "$0")"
running=$(ps h -C "$me" | grep -wv "$$" | wc -l)
[[ $running -gt 1 ]] && exit

# do stuff below this comment
7
  • 1
    Nice and/or brilliant. :)
    – Spooky
    Commented Mar 3, 2017 at 16:49
  • 4
I've used this condition for a week, and on 2 occasions it didn't prevent a new process from starting. I figured out what the problem is: the new PID is a substring of the old one and gets hidden by grep -v $$. Real examples: old 14532, new 1453; old 28858, new 858.
    – Naktibalda
    Commented Feb 22, 2018 at 11:30
  • 2
    I fixed it by changing grep -v $$ to grep -v "^${$} "
    – Naktibalda
    Commented Feb 22, 2018 at 11:52
  • 1
    @Naktibalda good catch, thanks! You could also fix it with grep -wv "^$$" (see edit).
    – terdon
    Commented Feb 22, 2018 at 12:38
  • 5
    With this solution, if two instances of the same script are started at the same time, there's a chance that they will "see" each others and both will terminate. It may not be a problem, but it also may be, just be aware of it.
    – flagg19
    Commented Jan 25, 2020 at 11:19
5

This is a modified version of Anselmo's answer. The idea is to open a read-only file descriptor on the bash script itself and use flock to hold the lock.

script=$(realpath "$0")  # get absolute path to the script itself
exec 6< "$script"        # open the script itself on file descriptor 6
flock -n 6 || { echo "ERROR: script is already running" >&2; exit 1; }  # lock fd 6, or bail out if another instance holds the lock

echo "Run your single instance code here"

The main difference from all the other answers is that this code doesn't modify the filesystem, has a very low footprint, and needs no cleanup: the file descriptor is closed as soon as the script finishes, regardless of its exit state, so it doesn't matter whether the script fails or succeeds.
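To see the guard in action, a throwaway demonstration (a sketch; it writes a temporary copy of such a script and assumes flock from util-linux is available) can race two instances against each other:

```shell
#!/bin/sh
# Demonstration sketch: two concurrent runs of a script locked via its own fd.
command -v flock >/dev/null 2>&1 || { echo "flock not installed, skipping"; exit 0; }

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/bash
script=$(realpath "$0")
exec 6< "$script"                 # open the script itself on fd 6
flock -n 6 || { echo "ERROR: script is already running" >&2; exit 1; }
sleep 2                           # hold the lock for a while
EOF
chmod +x "$tmp"

"$tmp" &                          # first instance acquires the lock
sleep 1
"$tmp"; rc=$?                     # second instance fails immediately
echo "second instance exit status: $rc"
wait
rm -f "$tmp"
```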

4
  • 1
    You should always quote all shell variable references unless you have a good reason not to, and you’re sure you know what you’re doing.  So you should be doing exec 6< "$SCRIPT". Commented Nov 2, 2018 at 6:01
  • @Scott I've changed the code according your suggestions. Many thanks.
    – John Doe
    Commented Nov 2, 2018 at 6:38
  • I suggest using lower-case variable names here, e.g. script=.... It reduces the risk of colliding with built-in shell variables such as $PATH. Which I've never done... (cough) Commented Jun 28, 2020 at 20:05
  • @EdwardTeach Good point. I've changed it
    – John Doe
    Commented Jun 29, 2020 at 5:50
4

I would use a lock file, as mentioned by Marco:

#!/bin/bash

# Exit if /tmp/lock.file exists
[ -f /tmp/lock.file ] && exit

# Create lock file, sleep 1 sec and verify lock
echo $$ > /tmp/lock.file
sleep 1
[ "$(cat /tmp/lock.file)" = "$$" ] || exit

# Do stuff
sleep 60

# Remove lock file
rm /tmp/lock.file
10
  • 1
    (I think you forgot to create the lock file) What about race conditions? Commented Sep 18, 2012 at 11:28
Oops :) Yes, race conditions are a problem in my example. I usually write hourly or daily cron jobs, where race conditions are rare.
    – nsg
    Commented Sep 18, 2012 at 11:32
  • They shouldn't be relevant in my case either, but it's something one should keep in mind. Maybe using lsof $0 isn't bad, either? Commented Sep 18, 2012 at 11:34
  • You can diminish the race condition by writing your $$ in the lock file. Then sleep for a short interval and read it back. If the PID is still yours, you successfully acquired the lock. Needs absolutely no additional tools.
    – manatwork
    Commented Sep 18, 2012 at 11:41
  • 1
I have never used lsof for this purpose, but I think it should work. Note that lsof is really slow on my system (1-2 sec), so there is most likely plenty of time for race conditions.
    – nsg
    Commented Sep 18, 2012 at 11:45
3

If you want to make sure that only one instance of your script is running take a look at:

Lock your script (against parallel run)

Otherwise you can check ps or invoke lsof <full-path-of-your-script>, since I wouldn't call those additional tools.


Supplement:

Actually, I thought of doing it like this:

for LINE in $(lsof -c <your_script> -F p); do
    # -F p prints each PID prefixed with the letter "p"; ${LINE#?} strips it
    if [ "$$" -gt "${LINE#?}" ]; then
        echo "'$0' is already running" 1>&2
        exit 1
    fi
done

This ensures that only the process with the lowest PID keeps running, even if you fork-and-exec several instances of <your_script> simultaneously.

2
  • 1
    Thanks for the link, but could you include the essential parts in your answer? It's common policy at SE to prevent link rot... But something like [[(lsof $0 | wc -l) > 2]] && exit might actually be enough, or is this also prone to race conditions? Commented Sep 18, 2012 at 11:30
  • You are right the essential part of my answer was missing and only posting links is pretty lame. I added my own suggestion to the answer. Commented Sep 18, 2012 at 12:52
1

I am using cksum to check that my script is truly running as a single instance, even if I change its filename or path.

I am not using trap with a lock file, because if my server goes down unexpectedly, I would have to remove the lock file manually after it comes back up.

Note: the #!/bin/bash shebang on the first line is required, because the check greps ps for /bin/bash.

#!/bin/bash

checkinstance(){
   nprog=0
   mysum=$(cksum "$0" | awk '{print $1}')
   for i in $(ps -ef | grep /bin/bash | awk '{print $2}'); do
        if ls -lha "/proc/$i/exe" 2>/dev/null | grep -q bash; then
           cmd=$(strings "/proc/$i/cmdline" | grep -v bash)
           if [[ -n $cmd ]]; then
              fsum=$(cksum "/proc/$i/cwd/$cmd" | awk '{print $1}')
              if [[ $mysum -eq $fsum ]]; then
                 nprog=$((nprog+1))
              fi
           fi
        fi
   done

   if [[ $nprog -gt 1 ]]; then
        echo "$0 is already running."
        exit 1
   fi
}

checkinstance

#--- run your script below

echo pass
while true; do sleep 1000; done

Or you can hardcode the cksum value inside your script, so you don't have to worry if you change the script's filename, path, or content.

#!/bin/bash

mysum=1174212411

checkinstance(){
   nprog=0
   for i in $(ps -ef | grep /bin/bash | awk '{print $2}'); do
        if ls -lha "/proc/$i/exe" 2>/dev/null | grep -q bash; then
           cmd=$(strings "/proc/$i/cmdline" | grep -v bash)
           if [[ -n $cmd ]]; then
              fsum=$(grep mysum "/proc/$i/cwd/$cmd" | head -1 | awk -F= '{print $2}')
              if [[ $mysum -eq $fsum ]]; then
                 nprog=$((nprog+1))
              fi
           fi
        fi
   done

   if [[ $nprog -gt 1 ]]; then
        echo "$0 is already running."
        exit 1
   fi
}

checkinstance

#--- run your script below

echo pass
while true; do sleep 1000; done
4
  • 1
    Please explain exactly how hardcoding the checksum is a good idea. Commented May 24, 2019 at 0:14
It's not hardcoding a checksum as such; it just creates an identity key for your script. When another instance starts, it checks the other shell script processes and reads each file first; if your identity key is in that file, it means your instance is already running.
    – arputra
    Commented May 24, 2019 at 6:35
  • OK; please edit your answer to explain that.  And, in the future, please don’t post multiple 30-line long blocks of code that look like they’re (almost) identical without saying and explaining how they’re different.  And don’t say things like “you can hardcoded [sic] cksum inside your script”, and don’t continue to use variable names mysum and fsum, when you’re not talking about a checksum any more. Commented May 24, 2019 at 7:08
  • Looks interesting, thanks! And welcome to unix.stackexchange :) Commented May 24, 2019 at 8:08
1

This handy package does what you're looking for.

https://github.com/krezreb/singleton

Once installed, just prefix your command with singleton LOCKNAME:

e.g. singleton LOCKNAME PROGRAM ARGS...

1

Another approach not mentioned here that does not use flock is to rely on the fact that creating a hard link is atomic.

Given this script:

#!/bin/bash

touch /var/tmp/singleton.$$.lock
if link /var/tmp/singleton.$$.lock /var/tmp/singleton.lock 2>/dev/null ; then
  echo 'Lock Acquired By: ' $$
  sleep 2
  echo 'Lock Released By: ' $$
  rm /var/tmp/singleton.lock /var/tmp/singleton.$$.lock
else
  echo 'Failed Lock Acquisition Attempt By: ' $$
  rm /var/tmp/singleton.$$.lock
fi

When you try to run multiple instances simultaneously, only one of them acquires the "lock":

§ for i in {1..3}; do (./singleton.sh &) ; done ; sleep 3
Lock Acquired By:  3816
Failed Lock Acquisition Attempt By:  3820
Failed Lock Acquisition Attempt By:  3818
Lock Released By:  3816

You can modify this script to add the appropriate trap statements to make it robust in the event that your script does not exit gracefully.

The basic idea is that when two instances of the script run, both try to create a link at the same path (i.e., /var/tmp/singleton.lock) pointing to something, but only one of them will succeed; the other gets an error:

ln: failed to create hard link '/var/tmp/singleton.lock': File exists

In this script, the something happens to be an empty file whose name contains the PID of the executing script. You could use other schemes for what the something could be, but the important bit is that the two instances of the script try to create a link at the same path.

1
0

You can use this: https://github.com/sayanarijit/pidlock

sudo pip install -U pidlock

pidlock -n sleepy_script -c 'sleep 10'
1
  • 1
    > A solution that does not require additional tools would be prefered.
    – dhag
    Commented Jan 12, 2018 at 19:13
0

My code for you:

#!/bin/bash

script_file="$(/bin/readlink -f "$0")"
lock_file=${script_file////_}

function executing {
  echo "'${script_file}' already executing"
  exit 1
}

(
  flock -n 9 || executing

  sleep 10

) 9> "/var/lock/${lock_file}"

Based on man flock, improving only:

  • the name of the lock file, based on the full path of the script
  • the error message printed by the executing function

Where I put here the sleep 10, you can put all the main script.

0

The easiest one-liner, rather than writing complex PID or lock-file logic:

flock -xn LOCKFILE.lck -c SCRIPT.SH

where -x denotes an exclusive lock, -n denotes non-blocking (fail rather than wait for the lock to be released), and -c runs the given command.
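For example, this pattern is handy in a crontab to stop overlapping runs of a long job (the file names here are hypothetical):

```
*/15 * * * * /usr/bin/flock -xn /tmp/backup.lck -c /usr/local/bin/backup.sh
```

flock creates the lock file itself if it does not exist, so no setup or cleanup is needed.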

0

I did it like this:

#!/bin/sh

GET_BASENAME=$( basename "$0" )
GET_PIDS=$( pgrep -f "$GET_BASENAME" )
GET_MY_PATH=$( realpath "$0" )
for X1 in $( seq 1 1 $( echo -n "$GET_PIDS" | wc -l ) ); do
    GET_PID=$( echo -n "$GET_PIDS" | head -n "$X1" | tail -n 1 )
    GET_PID_PATH=$( readlink "/proc/$GET_PID/fd/10" )
    
    if [ "$GET_MY_PATH" = "$GET_PID_PATH" ]; then
        echo "The service is already running in process $GET_PID!"
        exit
    fi
done

# My example script
while true; do
    echo -n "."
    sleep 1
done
1
  • This looks to be more error-prone than the other answers provided (I personally use something similar to procedures involving flock) and it also assumes that you have write access to /run/. The /run/ access can be worked around but even assuming the existence of a unique and universally accessible directory the method does not always work. Unique & accessible is not workable for packages under systemd with service-defined tmp directories.
    – doneal24
    Commented Feb 17 at 20:23
