
I know you can create a file descriptor and redirect output to it. e.g.

exec 3<> /tmp/foo # open fd 3.
echo a >&3 # write to it
exec 3>&- # close fd 3.

But you can do the same thing without the file descriptor:

FILE=/tmp/foo
echo a > "$FILE"

I'm looking for a good example of when you would have to use an additional file descriptor.

9 Answers


Most commands have a single input channel (standard input, file descriptor 0) and a single output channel (standard output, file descriptor 1), or else operate on several files which they open by themselves (so you pass them a file name). (That's in addition to standard error (fd 2), which usually filters up all the way to the user.) It is however sometimes convenient to have a command that acts as a filter from several sources or to several targets. For example, here's a simple script that separates the odd-numbered lines in a file from the even-numbered ones:

while IFS= read -r line; do
  printf '%s\n' "$line"
  if IFS= read -r line; then printf '%s\n' "$line" >&3; fi
done >odd.txt 3>even.txt

Now suppose you want to apply a different filter to the odd-number lines and to the even-numbered lines (but not put them back together, that would be a different problem, not feasible from the shell in general). In the shell, you can only pipe a command's standard output to another command; to pipe another file descriptor, you need to redirect it to fd 1 first.

{ while … done | odd-filter >filtered-odd.txt; } 3>&1 | even-filter >filtered-even.txt
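For concreteness, here is a runnable sketch of that pipeline, with tr and rev standing in for the two hypothetical filters (the file names are illustrative):

```shell
# Four input lines; odd lines go through tr (upcased), even lines
# through rev (reversed), into separate output files.
printf '%s\n' one two three four > input.txt

{ while IFS= read -r line; do
    printf '%s\n' "$line"
    if IFS= read -r line; then printf '%s\n' "$line" >&3; fi
  done < input.txt | tr a-z A-Z > filtered-odd.txt
} 3>&1 | rev > filtered-even.txt

cat filtered-odd.txt    # ONE, THREE
cat filtered-even.txt   # owt, ruof
```

The `3>&1` on the brace group points fd 3 at the pipe to rev, so the even-numbered lines take that route while the odd-numbered lines flow down the ordinary stdout pipe to tr.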

Another, simpler use case is filtering the error output of a command.
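For instance, to pipe only a command's stderr through a filter while its stdout passes through untouched, you can swap the two streams around fd 3. A sketch, with ls deliberately producing an error and grep -v as the filter (the file names are illustrative):

```shell
touch real-file
# fd 3 holds the real stdout. ls's stderr goes into the pipe and its
# stdout goes straight to fd 3; grep filters the old stderr and writes
# whatever survives back to stderr.
{ ls real-file no-such-file 2>&1 1>&3 3>&- \
    | grep -v 'no-such-file' >&2 3>&-; } 3>&1
# stdout: real-file      stderr: nothing (the error line was filtered)
```

The order of redirections matters: `2>&1` must come before `1>&3`, because each duplication copies the descriptor's current target.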

exec M>&N redirects a file descriptor to another one for the remainder of the script (or until another such command changes the file descriptors again). There is some overlap in functionality between exec M>&N and somecommand M>&N. The exec form is more powerful in that it does not have to be nested:

exec 8<&0 9>&1    # save the original stdin on fd 8, stdout on fd 9
exec >output12    # stdout now goes to output12
command1
exec <input23     # stdin now comes from input23
command2
exec >&9          # restore the original stdout
command3
exec <&8          # restore the original stdin


P.S. This is a surprising question coming from the author of the most upvoted post on the site that uses redirection through fd 3!

  • I'd rather say that "most commands have either single or double output channel - stdout (fd 1) and very often stderr (fd 2)". Commented Aug 17, 2011 at 15:15
  • Also, could you by the way explain why you use while IFS= read -r line;? The way I see it, IFS has no effect here since you assign value to only one variable (line). See this question. Commented Aug 17, 2011 at 15:33
  • @rozcietrzewiacz I've made a mention of stderr, and see the first part of my answer for why IFS makes a difference even if you're reading into a single variable (it's to retain the leading whitespace). Commented Aug 18, 2011 at 1:26
  • Couldn't you do the same with sed -ne 'w odd.txt' -e 'n;w even.txt'?
    – Wildcard
    Commented Nov 23, 2017 at 0:07
  • @Wildcard You could do the same with other tools, sure. But the goal of this answer was to illustrate redirections in the shell. Commented Nov 23, 2017 at 12:12

Here's an example of using extra FDs as bash script chattiness control:

#!/bin/bash

log() {
    echo "$@" >&3
}
info() {
    echo "$@" >&4
}
err() {
    echo "$@" >&2
}
debug() {
    echo "$@" >&5
}

VERBOSE=1

while [[ $# -gt 0 ]]; do
    ARG=$1
    shift
    case $ARG in
        "-vv")
            VERBOSE=3
        ;;
        "-v")
            VERBOSE=2
        ;;
        "-q")
            VERBOSE=0
        ;;
        # More flags
        *)
        echo -n
        # Linear args
        ;;
    esac
done

for i in 1 2 3; do
    fd=$((2 + i))
    if [[ $VERBOSE -ge $i ]]; then
        eval "exec $fd>&1"
    else
        eval "exec $fd> /dev/null"
    fi
done

err "This will _always_ show up."
log "This is normally displayed, but can be prevented with -q"
info "This will only show up if -v is passed"
debug "This will show up for -vv"
  • What's the purpose of using eval here? Commented Dec 1, 2021 at 4:46
  • exec $fd>&1 fails with -bash: exec: 3: not found. The descriptor number in a redirection must be a literal, so bash treats $fd as a command name instead; you have to use eval to build the redirection dynamically.
    – Fordi
    Commented Dec 1, 2021 at 17:07
  • exec {fd}>&1 is a cleaner way to do that
    – kuilin
    Commented Dec 27, 2021 at 21:19
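As the last comment says, bash 4.1 and later can allocate a free descriptor itself and store its number in a variable, which avoids eval entirely. A sketch (the file name is illustrative):

```shell
exec {fd}> /tmp/fd-demo.$$   # bash picks an unused fd (10 or above)
echo "hello" >&$fd           # the variable expands in the target
exec {fd}>&-                 # close it again
cat /tmp/fd-demo.$$          # hello
rm -f /tmp/fd-demo.$$
```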

In the context of named pipes (fifos) the use of an additional file descriptor can enable non-blocking piping behaviour.

(
rm -f fifo
mkfifo fifo
exec 3<fifo   # open fifo for reading
trap "exit" 1 2 3 15
exec cat fifo | nl
) &
bpid=$!

(
exec 3>fifo  # open fifo for writing
trap "exit" 1 2 3 15
while true;
do
    echo "blah" > fifo
done
)
#kill -TERM $bpid

See: Named Pipe closing prematurely in script?

  • you dug up one of my old questions :) chad is right, you'll run into a race condition.
    – nopcorn
    Commented Aug 17, 2011 at 11:57

An extra file descriptor is useful when you want to capture stdout in a variable yet still write it out to the screen, for instance in a bash script user interface:

# arg1: string to echo
# arg2: flag; 1 = also print to the terminal via fd 3, 0 = don't
function ecko3 {
    if [ "$2" -eq 1 ]; then
        exec 3>"$(tty)"
        echo -en "$1" | tee >(cat - >&3)
        exec 3>&-
    else
        echo -en "$1"
    fi
}
  • I know this isn't a new answer, but I had to stare at this quite a while to see what it does and thought it would be helpful if someone added an example of this function being used. This one echoes and captures the whole output of a command (df, in this case): dl.dropboxusercontent.com/u/54584985/mytest_redirect
    – Joe
    Commented Oct 19, 2015 at 5:03
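Along the lines the comment asks for, here is a hypothetical usage sketch. The function is repeated so the example is self-contained; note that the flag-1 branch opens the controlling terminal, so it only works in an interactive session.

```shell
ecko3() {
    if [ "$2" -eq 1 ]; then
        exec 3>"$(tty)"              # extra copy goes to the terminal
        echo -en "$1" | tee >(cat - >&3)
        exec 3>&-
    else
        echo -en "$1"
    fi
}

# With flag 0 the text is only captured; with flag 1 it would also be
# echoed to the screen while being captured.
captured=$(ecko3 "$(df -h)" 0)
echo "captured ${#captured} bytes of df output"
```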

Here's yet another scenario when using an additional file descriptor seems appropriate (in Bash):

Shell script password security of command-line parameters

env -i bash --norc   # clean up environment
set +o history
read -s -p "Enter your password: " passwd
exec 3<<<"$passwd"
mycommand <&3  # cat /dev/stdin in mycommand
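With cat standing in for mycommand, the idea runs like this (the password value is a placeholder; normally it comes from read -s):

```shell
passwd="s3cret"        # placeholder; normally filled by read -s
exec 3<<< "$passwd"    # the secret lives in a descriptor, not in argv
cat <&3                # mycommand would read it from fd 3 like this
exec 3<&-              # close the descriptor when done
```

The point is that the secret travels over a file descriptor rather than a command-line argument, so it never shows up in ps output.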

Example: using flock to force scripts to run serially with file locks

One example is to make use of file locking to force scripts to run serially system wide. This is useful if you don't want two scripts of the same kind to operate on the same files. Otherwise, the two scripts would interfere with each other and possibly corrupt data.

#exit if any command returns a non-zero exit code (like flock when it fails to lock)
set -e

#open file descriptor 3 for writing
exec 3> /tmp/file.lock

#create an exclusive lock on the file using file descriptor 3
#exit if lock could not be obtained
flock -n 3

#execute serial code

#remove the file while the lock is still obtained
rm -f /tmp/file.lock

#close the open file handle which releases the file lock and disk space
exec 3>&-

Use flock functionally by defining lock and unlock

You can also wrap this locking/unlocking logic in reusable functions. The trap shell builtin below automatically releases the file lock when the script exits, whether on error or success, so your lock files get cleaned up. The path /tmp/file.lock should be hard-coded so that multiple scripts contend for the same lock.

# obtain a file lock and automatically unlock it when the script exits
function lock() {
  exec 3> /tmp/file.lock
  flock -n 3 && trap unlock EXIT
}

# release the file lock so another program can obtain the lock
function unlock() {
  # only delete if the file descriptor 3 is open
  if { >&3 ; } &> /dev/null; then
    rm -f /tmp/file.lock
  fi
  #close the file handle which releases the file lock
  exec 3>&-
}

The unlock logic above is to delete the file before the lock is released. This way it cleans up the lock file. Because the file was deleted, another instance of this program is able to obtain the file lock.

Usage of lock and unlock functions in scripts

You can use it in your scripts like the following example.

#exit if any command returns a non-zero exit code (like flock when it fails to lock)
set -e

#try to lock (else exit because of non-zero exit code)
lock

#system-wide serial locked code

unlock

#non-serial code

If you want your code to wait until it is able to lock you can adjust the script like:

set -e

#wait for lock to be successfully obtained
while ! lock 2> /dev/null; do
  sleep .1
done

#system-wide serial locked code

unlock

#non-serial code
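Alternatively, flock can do the waiting itself: without -n it blocks until the lock is free, and -w sets a timeout. A sketch, using the same illustrative lock path:

```shell
exec 3> /tmp/file.lock
flock -w 10 3      # wait up to 10 seconds for the lock, then give up
# ... system-wide serial code ...
exec 3>&-          # closing the descriptor releases the lock
```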

As a concrete example, I just wrote a script which needs the timing information from a subcommand. Using an extra file descriptor allowed me to capture the time command's stderr without interrupting the subcommand's stdout or stderr.

(time ls -9 2>&3) 3>&2 2> time.txt

What this does is point ls's stderr to fd 3, point fd 3 to the script's stderr, and point time's stderr to a file. When the script is run, its stdout and stderr are the same as the subcommand's, which can be redirected as usual. Only time's output is redirected to the file.

$ echo '(time ls my-example-script.sh missing-file 2>&3) 3>&2 2> time.txt' > my-example-script.sh
$ chmod +x my-example-script.sh 
$ ./my-example-script.sh 
ls: missing-file: No such file or directory
my-example-script.sh
$ ./my-example-script.sh > /dev/null
ls: missing-file: No such file or directory
$ ./my-example-script.sh 2> /dev/null
my-example-script.sh
$ cat time.txt

real    0m0.002s
user    0m0.001s
sys 0m0.001s

Additional file descriptors can be used for creating temporary files in shell scripts.

This stackexchange answer (modified by this one) gives a neat solution for creating a temporary file in a shell script. The file exists only as long as the file descriptor is open, so the file is deleted even in the event of a program crash. By using separate file descriptors for reading and writing, the "read" file pointer will be at the beginning of the file even after the "write" file pointer has moved to the end of the file.

tmpfile=$(mktemp)
exec 3> "$tmpfile"
exec 4< "$tmpfile"
rm "$tmpfile"

echo "foo" >&3
cat <&4

File descriptors above 2 can be used for “parking” one of the standard file descriptors.  For example, in Suppress stderr messages in a bash script, the OP (fearless_fool) wants (as the question title suggests) to discard / suppress stderr messages in a shell script.  They realized that it is possible to do so for the entire script by invoking it as ./test1.sh 2> /dev/null

But you would have to remember to invoke it that way every time.  And, sure, you could write an alias, a shell function or a wrapper script to do it, but these approaches may not be 100% reliable.  fearless_fool asks whether there is a way to do the equivalent I/O redirection from within the test1.sh script.

UVV pointed out that this can be done from within the script with the command exec 2> /dev/null.  If you want to suppress the stderr for the entire script, this is all you need to do.

But Scott’s answer raises the possibility that there might be a requirement to suppress the stderr for just part of the script, and then revert to normal.  This answer (after some beating around the bush) suggests

exec 3>&2
exec 2> /dev/null
(do stuff where you don't want to see the stderr.)
exec 2>&3

which saves the original stderr in file descriptor 3, and later restores it.
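Spelled out as a runnable sketch (the missing file name is illustrative):

```shell
exec 3>&2            # park the real stderr on fd 3
exec 2> /dev/null    # silence stderr
ls no-such-file      # this error message is discarded
exec 2>&3 3>&-       # restore stderr and close the parking fd
echo "stderr is back" >&2
```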

kenorb uses a similar trick in his answer to the similarly-themed Suppress execution trace for echo command?  Rather than

command1 2> /dev/null
command2
command3 2> /dev/null
command4
command5 2> /dev/null

you can do

exec 3> /dev/null
command1 2>&3
command2
command3 2>&3
command4
command5 2>&3

It doesn’t gain you much functionality, but it lets you make your code a little bit more visually compact.
