
Some commands filter or act on input and then pass it along as output, usually to stdout - but other commands just consume stdin, do whatever they do with it, and output nothing.

I'm most familiar with OS X, so the two that come to mind immediately are pbcopy and pbpaste - which are means of accessing the system clipboard.

Anyhow, I know that if I want to take stdout and send it to both the console and a file, I can use the tee command. I also know a little about xargs, but I don't think that's what I'm looking for.

I want to know how I can split stdout to go between two (or more) commands. For example:

cat file.txt | stdout-split -c1 pbcopy -c2 grep -i errors

There is probably a better example than that one, but I really am interested in knowing how I can send stdout to a command that does not relay it, while keeping stdout from being "muted". I'm not asking how to cat a file, grep part of it, and copy it to the clipboard - the specific commands are not that important.

Also, I'm not asking how to send output to a file and stdout - this may be a "duplicate" question (sorry), but the similar questions I found were all about splitting between stdout and a file, and the answer to those seemed to be tee, which I don't think will work for me.

Finally, you may ask "why not just make pbcopy the last thing in the pipe chain?" My response: 1) what if I want to use it and still see the output in the console? 2) what if I want to use two commands, neither of which relays stdout after processing its input?

Oh, and one more thing: I realize I could use tee and a named pipe (mkfifo), but I was hoping for a way this could be done inline, concisely, without prior setup :)


10 Answers


You can use tee and process substitution for this:

cat file.txt | tee >(pbcopy) | grep errors

This will send all the output of cat file.txt to pbcopy, and you'll only get the result of grep on your console.

You can put multiple processes in the tee part:

cat file.txt | tee >(pbcopy) >(do_stuff) >(do_more_stuff) | grep errors
  • Not a concern with pbcopy, but worth mentioning in general: whatever the process substitution outputs is also seen by the next pipe segment, after the original input; e.g.: seq 3 | tee >(cat -n) | cat -e (cat -n numbers the input lines, cat -e marks newlines with $; you'll see that cat -e is applied to both the original input (first) and (then) the output from cat -n). Output from multiple process substitutions will arrive in non-deterministic order.
    – mklement0
    Commented Dec 9, 2014 at 4:47
  • The >( only works in bash. If you try that using for instance sh it won't work. It's important to note this.
    – AAlvz
    Commented Dec 16, 2014 at 16:31
  • @AAlvz: Good point: process substitution is not a POSIX feature; dash, which acts as sh on Ubuntu, doesn't support it, and even Bash itself deactivates the feature when invoked as sh or when set -o posix is in effect. However, it's not just Bash that supports process substitution: ksh and zsh support it too (not sure about others).
    – mklement0
    Commented Apr 15, 2015 at 2:59
  • @mklement0 that doesn't appear to be true. On zsh (Ubuntu 14.04) your line prints: 1 1 2 2 3 3 1$ 2$ 3$ - which is sad, because I really wanted the functionality to be as you say.
    – Aktau
    Commented Oct 21, 2016 at 9:56
  • @Aktau: Indeed, my sample command only works as described in bash and ksh - zsh apparently doesn't send output from output process substitutions through the pipeline (arguably that's preferable, because it doesn't pollute what is sent to the next pipeline segment - though it still prints). In all the shells mentioned, however, it's generally not a good idea to mix regular stdout output and output from process substitutions in a single pipeline - the output ordering will not be predictable, in a way that may only surface infrequently or with large output data sets.
    – mklement0
    Commented Oct 22, 2016 at 5:39

You can specify multiple file names to tee, and in addition the standard output can be piped into one command. To dispatch the output to multiple commands, you need to create multiple pipes and specify each of them as one output of tee. There are several ways to do this.

Process substitution

If your shell is ksh93, bash or zsh, you can use process substitution. This is a way to pass a pipe to a command that expects a file name. The shell creates the pipe and passes a file name like /dev/fd/3 to the command. The number is the file descriptor that the pipe is connected to. Some unix variants do not support /dev/fd; on these, a named pipe is used instead (see below).

tee >(command1) >(command2) | command3

File descriptors

In any POSIX shell, you can use multiple file descriptors explicitly. This requires a unix variant that supports /dev/fd, since all but one of the outputs of tee must be specified by name.

{ { { tee /dev/fd/3 /dev/fd/4 | command1 >&9;
    } 3>&1 | command2 >&9;
  } 4>&1 | command3 >&9;
} 9>&1
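A concrete run of this construct, with `wc -l`, `grep`, and `tr` as hypothetical stand-ins for command1 through command3 (the relative order of the three branches' output is not deterministic):

```shell
# Each branch receives the full two-line input via tee; fd 9 funnels
# every branch's result back to the original stdout.
printf 'alpha\nbeta\n' | {
  { { tee /dev/fd/3 /dev/fd/4 | wc -l >&9;   # branch 1: count lines
    } 3>&1 | grep alpha >&9;                 # branch 2: filter
  } 4>&1 | tr a-z A-Z >&9;                   # branch 3: uppercase
} 9>&1
```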

Named pipes

The most basic and portable method is to use named pipes. The downside is that you need to find a writable directory, create the pipes, and clean up afterwards.

tmp_dir=$(mktemp -d)
mkfifo "$tmp_dir/f1" "$tmp_dir/f2"
command1 <"$tmp_dir/f1" & pid1=$!
command2 <"$tmp_dir/f2" & pid2=$!
tee "$tmp_dir/f1" "$tmp_dir/f2" | command3
rm -rf "$tmp_dir"
wait $pid1 $pid2
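A concrete run of the named-pipe recipe, with `wc -l` and `grep` as the side commands (their results are saved to files so they can be inspected after wait; all names here are arbitrary):

```shell
# Split one input three ways via named pipes: a line count, a grep
# match, and an uppercasing pass-through on stdout.
tmp_dir=$(mktemp -d)
mkfifo "$tmp_dir/f1" "$tmp_dir/f2"
wc -l <"$tmp_dir/f1" >"$tmp_dir/count" & pid1=$!
grep alpha <"$tmp_dir/f2" >"$tmp_dir/match" & pid2=$!
printf 'alpha\nbeta\n' | tee "$tmp_dir/f1" "$tmp_dir/f2" | tr a-z A-Z
wait $pid1 $pid2
cat "$tmp_dir/count" "$tmp_dir/match"   # the line count, then the match
rm -rf "$tmp_dir"
```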
  • Thanks so much for providing the two alternative versions for those who don't want to rely on bash or a certain ksh.
    – trr
    Commented Jul 2, 2013 at 4:08
  • tee "$tmp_dir/f1" "$tmp_dir/f2" | command3 should surely be command3 | tee "$tmp_dir/f1" "$tmp_dir/f2", as you want stdout of command3 piped to tee, no? I tested your version under dash and tee blocks indefinitely waiting for input, but switching the order produced the expected result. Commented Apr 10, 2018 at 19:21
  • @AdrianGünter No. All three examples read data from standard input and send it to each of command1, command2 and command3. Commented Apr 10, 2018 at 22:04
  • @Gilles I see, I misinterpreted the intent and tried to use the snippet incorrectly. Thanks for the clarification! Commented Apr 10, 2018 at 23:18
  • If you have no control on the shell used, but you can use bash explicitly, you can do <command> | bash -c 'tee >(command1) >(command2) | command3'. It helped in my case.
    – gc5
    Commented Oct 13, 2018 at 18:09

Just play with process substitution.

mycommand_exec |tee >(grep ook > ook.txt) >(grep eek > eek.txt)

The two grep processes each receive a copy of the output of mycommand_exec as their process-specific input.

  • Thanks, this was a pretty straightforward response for how to split a pipe to two processes! However, it should be noted that the output of mycommand_exec will still be passed UNFILTERED to stdout!
    – Reu
    Commented Mar 31, 2022 at 17:15

If you are using zsh, you can take advantage of the power of the MULTIOS feature, i.e. get rid of the tee command completely:

uname >file1 >file2

will just write the output of uname to two different files, file1 and file2, which is equivalent to uname | tee file1 >file2

Similarly, redirection of standard input

wc -l <file1 <file2

is equivalent to cat file1 file2 | wc -l (note that this is not the same as wc -l file1 file2; the latter counts the number of lines in each file separately).

Of course you can also use MULTIOS to redirect output not to files but to other processes, using process substitution, e.g.:

echo abc > >(grep -o a) > >(tr b x) > >(sed 's/c/y/')
  • Good to know. MULTIOS is an option that is ON by default (and can be turned off with unsetopt MULTIOS).
    – mklement0
    Commented Apr 15, 2015 at 3:39
  • When using tee under MacOS 11.5 zsh I kept getting error "zsh: missing delimiter for 'g' glob qualifier" but this resolved it. Info here. Commented Jul 25, 2021 at 17:38

There is also pee from the moreutils package. It is designed for exactly this:

pee 'command1' 'command2' 'cat -'
  • Should be the best answer!! fortune | pee cowsay espeak
    – zzapper
    Commented Dec 2, 2021 at 15:32
  • pee 'command1' 'command2' 'cat -' doesn't make sense, because pee expects the input on its own stdin, right? Instead, and to reuse the question's example: cat file.txt | pee 'pbcopy' 'grep -i errors'
    – Abdull
    Commented Aug 25, 2023 at 20:33

Capture the command's STDOUT in a variable and re-use it as many times as you like:

commandoutput="$(command-to-run)"
echo "$commandoutput" | grep -i errors
echo "$commandoutput" | pbcopy

If you need to capture STDERR too, then use 2>&1 at the end of the command, like so:

commandoutput="$(command-to-run 2>&1)"
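A minimal sketch of the capture-and-reuse pattern (seq and wc -l are arbitrary stand-ins for the real command and pbcopy):

```shell
# Capture once, then feed the same output to as many commands as needed.
commandoutput="$(seq 5)"
echo "$commandoutput" | grep 3    # the filtered view
echo "$commandoutput" | wc -l     # the full line count
```

As the comments below point out, this buffers the entire output in memory, so it only suits reasonably small, text-only output.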
  • Where are the variables stored? If you were dealing with a large file or something of that sort, wouldn't this hog a lot of memory? Are variables limited in size?
    – cwd
    Commented Jan 7, 2012 at 4:09
  • What if $commandoutput is huge? It's better to use pipes and process substitution. Commented Jan 7, 2012 at 13:00
  • Obviously this solution is possible only when you know the output will easily fit in memory, and you're OK with buffering the entire output before running the next commands on it. Pipes solve these two problems by allowing arbitrary-length data and streaming it in real time to the receiver as it's generated.
    – trr
    Commented Jun 23, 2013 at 12:15
  • This is a good solution if you have small output and you know the output will be text and not binary (shell variables often aren't binary safe).
    – Rucent88
    Commented Jul 20, 2014 at 4:48
  • I can't get this to work with binary data. I think it's something with echo trying to interpret null bytes or some other noncharacter data.
    – Rolf
    Commented May 21, 2017 at 16:13

For reasonably small output, we can redirect the output to a temporary file, then feed that temporary file to each command in a loop. This can be useful when the order of the executed commands matters.

The following script, for example, could do that:

#!/bin/sh

temp=$( mktemp )
cat /dev/stdin > "$temp"

for arg
do
    eval "$arg" < "$temp"
done
rm "$temp"

Test run on Ubuntu 16.04 with /bin/sh as dash shell:

$ cat /etc/passwd | ./multiple_pipes.sh  'wc -l'  'grep "root"'                                                          
48
root:x:0:0:root:/root:/bin/bash
  • If you have write access to a directory, it would be preferable to simply use a named pipe. Commented May 14, 2021 at 16:50
  • It would perhaps be a good idea to avoid using /etc/passwd for testing commands on the CLI.
    – Mausy5043
    Commented Jul 5, 2023 at 4:31

Another take on this:

$ cat file.txt | tee >(head -1 1>&2) | grep foo

This works by passing a bash process substitution as tee's file argument; that process is head, which prints only the first line (the header) and redirects its own output to stderr (so that it remains visible).
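With inline data, the trick looks like this (the HEADER line and foo/bar values are made up for illustration; the header lands on stderr, the matches on stdout):

```shell
# tee sends a full copy to head, which prints the first line to
# stderr; the rest of the pipeline filters what reaches stdout.
printf 'HEADER\nfoo 1\nbar 2\nfoo 3\n' \
  | tee >(head -1 1>&2) | grep foo
```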


This may be of use: http://www.spinellis.gr/sw/dgsh/ (directed graph shell). It seems to be a bash replacement supporting an easier syntax for "multipipe" commands.


Here's a quick-and-dirty partial solution, compatible with any shell including busybox.

The narrower problem it solves: print the complete stdout to one console, and filter it on another, without temporary files or named pipes.

  • Start another session to the same host. To find out its TTY name, type tty. Let's assume /dev/pty/2.
  • In the first session, run the_program | tee /dev/pty/2 | grep ImportantLog:

You get one complete log, and a filtered one.

