Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?

Redirecting the output of a single command is easy, but I want something more like this:

#!/bin/sh
if [ ! -t 0 ]; then
    # redirect all of my output to a file here
fi

# rest of script...

Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.

I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.

Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:

# save stdout and stderr to file 
# descriptors 3 and 4, 
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1

# ...

# restore stdout and stderr
exec 1>&3 2>&4
  • Testing for $TERM is not the best way to test for interactive mode. Instead, test whether stdin is a tty (test -t 0). Commented Nov 24, 2008 at 16:31
  • In other words: if [ ! -t 0 ]; then exec >somefile 2>&1; fi Commented Nov 24, 2008 at 16:34
  • See here for all the goodness: http://tldp.org/LDP/abs/html/io-redirection.html Basically what was said by Joshua. exec > file redirects stdout to a specific file, exec < file replaces stdin by file, etc. It's the same as usual but using exec (see man exec for more details).
    – Loki
    Commented Nov 24, 2008 at 16:39
  • In your update section, you should also close FDs 3 and 4, like so: exec 1>&3 2>&4 3>&- 4>&- Commented Oct 22, 2016 at 13:28
  • Permission denied on the first exec line.
    – Vince
    Commented May 3, 2021 at 4:53
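Put together with the comment about closing the descriptors, the full round trip might look like this (a sketch; the file name /tmp/foo is a placeholder):

```shell
#!/bin/sh
# Save stdout and stderr on fds 3 and 4, then send both to the file.
exec 3>&1 4>&2 > /tmp/foo 2>&1

echo "captured in the file"

# Restore stdout and stderr, then close the temporary descriptors.
exec 1>&3 2>&4 3>&- 4>&-

echo "back on the original stdout"
```

The first echo lands in /tmp/foo; the second goes to the terminal again.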

6 Answers

Addressing the question as updated.

#...part of script without redirection...

{
    #...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...

#...residue of script without redirection...

The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)

You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.

The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
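A minimal runnable sketch of the grouping idea (the /tmp paths are placeholders):

```shell
#!/bin/sh
echo "this goes to the terminal"

{
    echo "this goes to out.log"
    echo "this goes to err.log" >&2
} > /tmp/out.log 2> /tmp/err.log

echo "back on the terminal"
```

Only the output produced inside the braces is redirected; the lines before and after the group still reach the terminal.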

  • This is much clearer than saving the original descriptors and restoring them later. Commented Apr 23, 2010 at 19:41
  • I had to do some googling to understand what this is really doing, so I wanted to share. The curly braces become a "block of code", which, in effect, creates an anonymous function. The output of everything in the code block can then be redirected (see Example 3-2 from that link). Also note that curly braces do not launch a subshell, but similar I/O redirects can be done with subshells using parentheses.
    – chris
    Commented May 16, 2016 at 21:54
  • I like this solution better than the others. Even a person with only the most basic understanding of I/O redirection can understand what's happening. Plus, it's more verbose. And, as a Pythoner, I love verbose.
    – John Red
    Commented Nov 15, 2016 at 9:42
  • Better to use >>. Some people have the habit of >. Appending is always safer and more recommended than overwriting: somebody once wrote an application that used the standard copy command to export some data to the same destination.
    – neverMind9
    Commented Dec 28, 2018 at 12:33
  • You could also use { some_command; } 2>&1 | tee outfile to show console messages and write them to a file at the same time. Commented Sep 18, 2019 at 20:20
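The tee variant from the last comment, spelled out as a runnable sketch (the output path is a placeholder):

```shell
#!/bin/sh
{
    echo "step one"
    echo "something went wrong" >&2
} 2>&1 | tee /tmp/outfile    # show on the console and keep a copy in the file
```

Both streams are merged inside the group, so tee sees stdout and stderr together.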

Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.

Send stdout to a file

exec > file

with stderr

exec > file                                                                      
exec 2>&1

Append both stdout and stderr to the file

exec >> file
exec 2>&1

As Jonathan Leffler mentioned in his comment:

exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.
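Both jobs can be seen in one short sketch (the log path is a placeholder; the second exec is wrapped in a subshell precisely so it does not replace the whole script):

```shell
#!/bin/sh
# Job 1: no command argument, so exec only changes redirections;
# the current shell keeps running, with stdout now pointing at the file.
exec > /tmp/exec_demo.log 2>&1
echo "logged"

# Job 2: with a command argument, exec replaces the shell process.
# Run it in a subshell here so the rest of the script survives;
# the subshell inherits the redirection, so this line also lands in the log.
( exec /bin/echo "replaced subshell" )
```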

  • I say also add 2>&1 to the end of that, just so stderr gets caught too. :-) Commented Nov 24, 2008 at 16:32
  • Where do you put these? At the top of the script?
    – colan
    Commented Jul 23, 2014 at 16:13
  • With this solution one also has to reset the redirection before the script exits. The next answer by Jonathan Leffler is more "fail proof" in this sense.
    – Chuim
    Commented Aug 27, 2015 at 9:19
  • @JohnRed: exec has two separate jobs. One is to replace the current script with another command, using the same process; you specify the other command as an argument to exec (and you can tweak I/O redirections as you do it). The other job is changing the I/O redirections in the current shell script without replacing it. This notation is distinguished by not having a command as an argument to exec. The notation in this answer is of the "I/O only" variant: it only changes the redirection and does not replace the script that's running. (The set command is similarly multi-purpose.) Commented Nov 15, 2016 at 14:05
  • exec > >(tee -a "logs/logdata.log") 2>&1 prints the logs on the screen as well as writing them into a file
    – shriyog
    Commented Feb 2, 2017 at 9:20

You can make the whole script a function like this:

main_function() {
  do_things_here
}

then at the end of the script have this:

if [ -z "$TERM" ]; then
  # if not run via terminal, log everything into a log file
  main_function >> /var/log/my_uber_script.log 2>&1
else
  # run via terminal, only output to screen
  main_function
fi

Alternatively, you may log everything into the logfile on each run and still send it to stdout by simply doing:

# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
  • Did you mean main_function >> /var/log/my_uber_script.log 2>&1? Commented Feb 7, 2012 at 5:02
  • I like using main_function in such a pipe, but in that case your script does not return the original exit status. In bash you should then exit using exit ${PIPESTATUS[0]}.
    – rudimeier
    Commented Feb 27, 2014 at 13:05
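In bash, PIPESTATUS preserves the function's exit status even though tee ran last. A sketch, with placeholder paths (the status line is written to the log only so the effect is visible):

```shell
#!/bin/bash

main_function() {
    echo "doing things"
    return 3    # pretend a step inside failed
}

main_function 2>&1 | tee -a /tmp/my_uber_script.log
# $? is now tee's exit status; the function's status is in PIPESTATUS[0]
echo "main_function exit status: ${PIPESTATUS[0]}" >> /tmp/my_uber_script.log
```

In a real script you would `exit "${PIPESTATUS[0]}"` at that point, as the comment suggests.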

For saving the original stdout and stderr you can use:

exec [fd number]<&1 
exec [fd number]<&2

For example, the following code will print "walla1" and "walla2" to the log file (a.txt), "walla3" to stdout, "walla4" to stderr.

#!/bin/bash

exec 5<&1
exec 6<&2

exec 1> ~/a.txt 2>&1

echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
  • Normally, it would be better to use exec 5>&1 and exec 6>&2, using output redirection notation rather than input redirection notation for the outputs. You get away with it because, when the script is run from a terminal, standard input is also writable and both standard output and standard error are readable, by virtue (or is it 'vice'?) of a historical quirk: the terminal is opened for reading and writing and the same open file description is used for all three standard I/O file descriptors. Commented Nov 15, 2016 at 14:17
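The same example rewritten with the output-redirection notation the comment recommends (the log path is changed to a placeholder under /tmp):

```shell
#!/bin/sh

exec 5>&1 6>&2              # duplicate stdout and stderr onto fds 5 and 6
exec > /tmp/a.txt 2>&1      # from here on, everything goes to the file

echo "walla1"
echo "walla2" >&2
echo "walla3" >&5           # fd 5 still points at the original stdout
echo "walla4" >&6           # fd 6 still points at the original stderr
```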
[ -t 0 ] || exec >> test.log
  • What does this do?
    – djdomi
    Commented Sep 1, 2021 at 5:43
  • The [ -t ... ] test checks whether standard input is a terminal. The syntax a || b says to run a, and then run b only if a failed. Thus the exec only takes place if standard input is not a terminal: if the script is run interactively, all output goes to the terminal as usual, but if not, it goes to a file.
    – tripleee
    Commented Nov 14, 2022 at 5:51
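A slightly extended sketch of the same guard that also captures stderr (the log name is a placeholder; this only redirects when stdin is not a terminal, e.g. under cron):

```shell
#!/bin/sh
# If stdin is not a terminal, append all further output to the log.
[ -t 0 ] || exec >> /tmp/test.log 2>&1

echo "stdout line"
echo "stderr line" >&2
```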
I finally figured out how to do it. I wanted not just to save the output to a file but also to find out whether the bash script ran successfully.

I've wrapped the commands inside a function, called that function main_function with its output tee'd to a file, and afterwards checked the exit status with if [ $? -eq 0 ].

#!/bin/bash

main_function() {
 python command.py
}

main_function > >(tee -a "/var/www/logs/output.txt") 2>&1

if [ $? -eq 0 ]
then
    echo 'Success!'
else
    echo 'Failure!'
fi