87

Which is more efficient for finding which files in an entire filesystem contain a string: recursive grep, or find with grep in an -exec statement? I assume find would be more efficient because you can at least do some filtering if you know the file extension or a regex that matches the file name, but when all you know is -type f, which is better? I'm using GNU grep 2.6.3 and find (GNU findutils) 4.4.2.

Example:

grep -r -i 'the brown dog' /*

find / -type f -exec grep -r -i 'the brown dog' {} \;

6
  • 1
    Math/computer science/algorithm efficiency isn't opinion-based. Commented May 22, 2014 at 15:08
  • Check this one. Though not recursive, it gives a sense of which approach is better: unix.stackexchange.com/questions/47983/…
    – Ramesh
    Commented May 22, 2014 at 15:13
  • 9
    @AvinashRaj he's not asking for opinion. He's asking which is more efficient and/or faster, not which one is "better". This is a perfectly answerable question which has a single, specific answer that depends on how these two programs do their job and on what exactly you give them to search through.
    – terdon
    Commented May 22, 2014 at 15:18
  • 2
    Note that the -exec {} + form will do fewer forks, so should be faster than -exec {} \;. You may need to add -H (or -h) to the grep options to get exactly equivalent output.
    – Mikel
    Commented May 22, 2014 at 16:13
  • You probably didn't want the -r option on grep for the second one.
    – qwertzguy
    Commented Jun 11, 2015 at 18:03

4 Answers

104

I'm not sure:

grep -r -i 'the brown dog' /*

is really what you meant. That would mean grepping recursively through all the non-hidden files and dirs in / (but still looking inside hidden files and dirs inside those).

Assuming you meant:

grep -r -i 'the brown dog' /

A few things to note:

  • Not all grep implementations support -r. And among those that do, the behaviours differ: some follow symlinks to directories when traversing the directory tree (which means you may end up looking several times in the same file or even run in infinite loops), some will not. Some will look inside device files (and it will take quite some time in /dev/zero for instance) or pipes or binary files..., some will not (GNU grep has options to control some of that; see the sketch after this list).
  • It's efficient as grep starts looking inside files as soon as it discovers them. But while it looks in a file, it's no longer looking for more files to search in (which is probably just as well in most cases).
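
With GNU grep specifically (so this is not portable), some of those pitfalls can be avoided explicitly; as a sketch, -D skip tells it to skip device files, -I to skip binary files, and -r (as opposed to -R) not to follow symlinks:

grep -r -I -D skip -i 'the brown dog' /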

Your:

find / -type f -exec grep -i 'the brown dog' {} \;

(removed the -r which didn't make sense here) is terribly inefficient because you're running one grep per file. ; should only be used for commands that accept only one argument. Moreover here, because grep looks only in one file, it will not print the file name, so you won't know where the matches are.

You're not looking inside device files, pipes, symlinks..., you're not following symlinks, but you're still potentially looking inside things like /proc/mem.

find / -type f -exec grep -i 'the brown dog' {} +

would be a lot better because as few grep commands as possible would be run. You'd get the file name unless the last run has only one file. For that it's better to use:

find / -type f -exec grep -i 'the brown dog' /dev/null {} +

or with GNU grep:

find / -type f -exec grep -Hi 'the brown dog' {} +

Note that grep will not be started until find has found enough files for it to chew on, so there will be some initial delay. And find will not carry on searching for more files until the previous grep has returned. Allocating and passing the big file list has some (probably negligible) impact, so all in all it's probably going to be less efficient than a grep -r that doesn't follow symlink or look inside devices.

With GNU tools:

find / -type f -print0 | xargs -r0 grep -Hi 'the brown dog'

As above, as few grep instances as possible will be run, but find will carry on looking for more files while the first grep invocation is looking inside the first batch. That may or may not be an advantage though. For instance, with data stored on rotational hard drives, find and grep accessing data stored at different locations on the disk will slow down the disk throughput by causing the disk head to move constantly. In a RAID setup (where find and grep may access different disks) or on SSDs, that might make a positive difference.

In a RAID setup, running several concurrent grep invocations might also improve things. Still with GNU tools on RAID1 storage with 3 disks,

find / -type f -print0 | xargs -r0 -P2 grep -Hi 'the brown dog'

might increase the performance significantly. Note however that the second grep will only be started once enough files have been found to fill up the first grep command. You can add a -n option to xargs for that to happen sooner (and pass fewer files per grep invocation).
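
For example (the batch size of 1000 is arbitrary, purely for illustration):

find / -type f -print0 | xargs -r0 -n 1000 -P2 grep -Hi 'the brown dog'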

Also note that if you're redirecting xargs output to anything but a terminal device, then the greps will start buffering their output, which means that the output of those greps will probably be incorrectly interleaved. You'd have to use stdbuf -oL (where available, like on GNU or FreeBSD) on them to work around that (you may still have problems with very long lines, typically >4KiB), or have each write its output to a separate file and concatenate them all in the end.
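
As a sketch of that workaround (assuming stdbuf is installed, as it is on GNU and FreeBSD systems):

find / -type f -print0 | xargs -r0 -P2 stdbuf -oL grep -Hi 'the brown dog'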

Here, the string you're looking for is fixed (not a regexp) so using the -F option might make a difference (unlikely as grep implementations know how to optimise that already).
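
That just means adding -F to whichever variant you use, for example:

find / -type f -exec grep -F -i 'the brown dog' /dev/null {} +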

Another thing that could make a big difference is fixing the locale to C if you're in a multi-byte locale:

find / -type f -print0 | LC_ALL=C xargs -r0 -P2 grep -Hi 'the brown dog'

To avoid looking inside /proc, /sys..., use -xdev and specify the file systems you want to search in:

LC_ALL=C find / /home -xdev -type f -exec grep -i 'the brown dog' /dev/null {} +

Or prune the paths you want to exclude explicitly:

LC_ALL=C find / \( -path /dev -o -path /proc -o -path /sys \) -prune -o \
  -type f -exec grep -i 'the brown dog' /dev/null {} +
6
  • 1
    I don't suppose someone can point me at a resource - or explain - what {} and + mean. There's nothing I can see in the man pages for exec, grep or find on the Solaris box I'm using. Is it just the shell concatenating filenames and passing them to grep?
    – user13757
    Commented Nov 6, 2014 at 13:18
  • 3
    @Poldie, that's clearly explained in the description of the -exec predicate in the Solaris man page Commented Nov 6, 2014 at 13:31
  • Ah, yes. I wasn't escaping my { char whilst searching within the man page. Your link is better; I find man pages terrible to read.
    – user13757
    Commented Nov 6, 2014 at 13:47
  • 1
    RAID1 w/ 3 disks? How odd ...
    – tink
    Commented Jul 8, 2016 at 18:53
  • 1
    @tink, yes RAID1 is on 2 or more disks. With 3 disks compared to 2 disks, you increase redundancy and read performance while write performance is roughly the same. With 3 disks as opposed to 2, that means you can also correct errors, as when a bit flips on one of the copies, you're able to tell which is right by checking all 3 copies while with 2 disks, you can't really tell. Commented Jul 15, 2016 at 7:50
17

If the * in the grep call is not important to you, then the first should be more efficient, as only one instance of grep is started, and forks aren't free. In most cases it will be faster even with the *, but in edge cases the sorting could reverse that.
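
A rough way to compare the two on your own data (hypothetical path, output discarded so only the traversal and matching work is measured):

time grep -r -i 'the brown dog' . > /dev/null
time find . -type f -exec grep -i 'the brown dog' {} \; > /dev/null
time find . -type f -exec grep -i 'the brown dog' {} + > /dev/null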

There may be other find-grep structures which work better, especially with many small files. Reading large numbers of file entries and inodes at once may give a performance improvement on rotating media.

But let's have a look at the syscall statistics:

find

> strace -cf find . -type f -exec grep -i -r 'the brown dog' {} \;
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.86    0.883000        3619       244           wait4
  0.53    0.004809           1      9318      4658 open
  0.46    0.004165           1      6875           mmap
  0.28    0.002555           3       977       732 execve
  0.19    0.001677           2       980       735 stat
  0.15    0.001366           1      1966           mprotect
  0.09    0.000837           0      1820           read
  0.09    0.000784           0      5647           close
  0.07    0.000604           0      5215           fstat
  0.06    0.000537           1       493           munmap
  0.05    0.000465           2       244           clone
  0.04    0.000356           1       245       245 access
  0.03    0.000287           2       134           newfstatat
  0.03    0.000235           1       312           openat
  0.02    0.000193           0       743           brk
  0.01    0.000082           0       245           arch_prctl
  0.01    0.000050           0       134           getdents
  0.00    0.000045           0       245           futex
  0.00    0.000041           0       491           rt_sigaction
  0.00    0.000041           0       246           getrlimit
  0.00    0.000040           0       489       244 ioctl
  0.00    0.000038           0       591           fcntl
  0.00    0.000028           0       204       188 lseek
  0.00    0.000024           0       489           set_robust_list
  0.00    0.000013           0       245           rt_sigprocmask
  0.00    0.000012           0       245           set_tid_address
  0.00    0.000000           0         1           uname
  0.00    0.000000           0       245           fchdir
  0.00    0.000000           0         2         1 statfs
------ ----------- ----------- --------- --------- ----------------
100.00    0.902284                 39085      6803 total

grep only

> strace -cf grep -r -i 'the brown dog' .
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 40.00    0.000304           2       134           getdents
 31.71    0.000241           0       533           read
 18.82    0.000143           0       319         6 openat
  4.08    0.000031           4         8           mprotect
  3.29    0.000025           0       199       193 lseek
  2.11    0.000016           0       401           close
  0.00    0.000000           0        38        19 open
  0.00    0.000000           0         6         3 stat
  0.00    0.000000           0       333           fstat
  0.00    0.000000           0        32           mmap
  0.00    0.000000           0         4           munmap
  0.00    0.000000           0         6           brk
  0.00    0.000000           0         2           rt_sigaction
  0.00    0.000000           0         1           rt_sigprocmask
  0.00    0.000000           0       245       244 ioctl
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         1           execve
  0.00    0.000000           0       471           fcntl
  0.00    0.000000           0         1           getrlimit
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         1           futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0       132           newfstatat
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.000760                  2871       466 total
3
  • 2
    On the scale of searching an entire filesystem, forks are negligible. I/O is what you want to reduce. Commented May 22, 2014 at 23:03
  • Though it is an error from the OP, the comparison is incorrect: you should remove the -r flag of grep when using find. You can see that it searched the same files over and over again by comparing the number of opens that happened.
    – qwertzguy
    Commented Jun 11, 2015 at 18:03
  • 1
    @qwertzguy, no, the -r should be harmless since the -type f guarantees none of the arguments are directories. The multiple open()s are more likely down to the other files opened by grep at each invocation (libraries, localisation data...) (thanks for the edit on my answer btw) Commented Aug 19, 2015 at 22:05
8

If you're on an SSD and seek time is negligible, you could use GNU parallel:

find /path -type f | parallel --gnu --workdir "$PWD" -j 8 "grep -i 'the brown dog' {}"

This will execute up to 8 grep processes at the same time based on what find found.

This will thrash a hard disk drive, but an SSD should cope pretty well with it.
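
If you'd rather have each grep receive a batch of files (closer to the xargs behaviour above), GNU parallel's -X option can be used; a sketch, with -H added so file names are always printed:

find /path -type f | parallel --gnu -j 8 -X grep -H -i 'the brown dog'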

0

One more thing to consider on this one is as follows.

Will any of the directories that grep will have to recursively go through contain more files than your system's nofile setting? (e.g. the number of open file handles; the default is 1024 on most Linux distros)

If so, then find is definitely the way to go since certain versions of grep will bomb out with an Argument list too long error when it hits a directory with more files than the maximum open file handles setting.
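
To see the limits involved on your own system (ARG_MAX is what governs the "Argument list too long" error; nofile limits open file descriptors), something like:

getconf ARG_MAX   # maximum combined size of the argument list passed to exec
ulimit -n         # per-process open file descriptor limit (nofile)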

Just my 2¢.

2
  • 1
    Why would grep bomb out? At least with GNU grep, if you give a path with a trailing / and use -R, it'll simply iterate through the directories. The shell isn't going to expand anything unless you give it shell globs. So in the given example (/*) only the contents of / matter, not those of the subfolders, which will simply be enumerated by grep, not passed as arguments from the shell. Commented May 23, 2014 at 18:38
    Well, considering the OP was asking about searching recursively (e.g. "grep -r -i 'the brown dog' /*"), I have seen GNU's grep (at least version 2.9) bomb out with "-bash: /bin/grep: Argument list too long" using the exact search the OP used on a directory that had over 140,000 sub-directories in it.
    – B.Kaatz
    Commented Apr 12, 2016 at 20:03
