1108

How can I recursively count files in a Linux directory?

I found this:

find DIR_NAME -type f ¦ wc -l

But when I run this it returns the following error.

find: paths must precede expression: ¦

4
  • 93
    You are confusing the broken bar ¦ (ASCII 166) with the vertical bar | (ASCII 124) used for the UNIX pipeline. Commented Jan 11, 2014 at 13:14
  • 13
    @SkippyleGrandGourou Isn't it called a pipe? Commented Apr 14, 2015 at 13:25
  • 39
    @DaveStephens Yes, it's also called that. It's also called a Sheffer stroke, verti-bar, vbar, stick, vertical line, vertical slash, bar, obelisk, glidus.
    – Emil Laine
    Commented Apr 22, 2015 at 0:10
  • 29
    In RFC20 it's called "vertical line". "Pipe" is the name of the shell operator, rather than the name of the symbol. Just as * is the "asterisk" ASCII character, but "times" in some other contexts.
    – slim
    Commented Jul 7, 2017 at 9:42

26 Answers

1883

This should work:

find DIR_NAME -type f | wc -l

Explanation:

  • -type f to include only files.
  • | (and not ¦) pipes the find command's standard output into the wc command's standard input.
  • wc (short for word count) counts newlines, words and bytes on its input (docs).
  • -l to count just newlines.

Notes:

  • Replace DIR_NAME with . to execute the command in the current folder.
  • You can also remove the -type f to include directories (and symlinks) in the count.
  • This command will overcount if any filenames contain newline characters (see the newline-safe answers further down).

Explanation of why your example does not work:

In the command you showed, you did not use the pipe (|) to connect the two commands, but the broken bar (¦), which the shell does not recognize as a pipe or any other operator. find therefore receives ¦ as an argument, which is why you get that error message.
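
A quick side-by-side demonstration (the file count shown is purely illustrative):

$ find . -type f | wc -l
1327
$ find . -type f ¦ wc -l
find: paths must precede expression: ¦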

14
  • @Aoeu the problem is not the find command, it is the number of files you are asking it to handle. I just ran the command in a directory containing 95,000 files in 30 subdirectories. It took only a few seconds on my laptop.
    – Avec
    Commented Feb 25, 2014 at 21:52
  • 5
    If there is any possibility that file names contain the newline character you might want to use the -print0 flag. Commented Oct 29, 2014 at 13:34
  • 2
    @gaboroncancio That's not going to help, unless some implementation of wc has an option to read a null terminated list. See my answer for an alternative. Commented Mar 16, 2015 at 1:34
  • 5
    If your files have newlines in them, you can still use find to do it by using an -exec instead of a print: find . -type f -exec echo \; | wc -l. In this way, you are not actually outputting the filenames, but you are outputting a single blank line per file encountered, regardless of the name, so the line count will work in any case. print0 can also work if you just count null characters: find . -type f -print0 | tr -dc '\0' | wc -c. In this case, tr deletes all non-null characters and wc counts the characters fed into it.
    – Taywee
    Commented Apr 26, 2017 at 23:36
  • 1
    Note: You can add -name "*.ext" to find to count the number of files of a specific extension type. Commented Aug 18, 2017 at 4:13
143

For the current directory:

find -type f | wc -l
1
  • 8
    This solution does not take filenames that contain newlines into account. Commented May 19, 2018 at 10:03
111

If you want a breakdown of how many files are in each dir under your current dir:

for i in */ .*/ ; do 
    echo -n "$i: " ; 
    (find "$i" -type f | wc -l) ; 
done

That can all go on one line, of course. The parentheses clarify which command's output wc -l is supposed to be counting (that of find "$i" -type f in this case).
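
Illustrative output (the directory names and counts are made up; note that .*/ also matches ./ and ../):

docs/: 42
src/: 317
./: 360
../: 1024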

5
  • 11
    It could get stuck on directories with spaces in their names. Changing the first line to find . -maxdepth 1 -type d -print0 | while IFS= read -r -d '' i ; do fixes it. See How can I read a file (data stream, variable) line-by-line (and/or field-by-field)? Commented Aug 16, 2017 at 10:26
  • 4
    Using find for the outer loop is just a needless complication: for i in */; do
    – tripleee
    Commented Jun 12, 2018 at 19:36
  • 2
    function countit { for i in $(find . -maxdepth 1 -type d) ; do file_count=$(find $i -type f | wc -l) ; echo "$file_count: $i" ; done }; countit | sort -n -r
    – Schneems
    Commented Dec 12, 2018 at 20:23
  • Finally this is what I needed. My folders have thousands of files so printing them with tree or anything else is not an option Commented Jun 6, 2019 at 20:03
  • This includes ../ and doesn't seem to descend into subdirectories, so it's not recursive. Commented Jan 31, 2020 at 13:37
109

On my computer, rsync is a little bit faster than find | wc -l in the accepted answer:

$ rsync --stats --dry-run -ax /path/to/dir /tmp

Number of files: 173076
Number of files transferred: 150481
Total file size: 8414946241 bytes
Total transferred file size: 8414932602 bytes

The second line has the number of files, 150,481 in the above example. As a bonus you get the total size as well (in bytes).

Remarks:

  • the first line counts files, directories, symlinks, etc. all together, which is why it is bigger than the second line.
  • the --dry-run (or -n for short) option is important so that no files are actually transferred!
  • the -x option means "don't cross filesystem boundaries": if you execute it for / and you have external hard disks attached, it will only count the files on the root partition.
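
If you only want the count lines, the stats output can be filtered with grep; a minimal sketch reusing the numbers above:

$ rsync --stats --dry-run -ax /path/to/dir /tmp | grep '^Number of files'
Number of files: 173076
Number of files transferred: 150481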
6
  • 1
    I like your idea of using rsync here. I'd never have thought about it!
    – Qeole
    Commented Aug 3, 2016 at 15:55
  • 1
    Thanks @Qeole, the idea is not mine though. I read somewhere several years ago that rsync is the fastest way to delete a folder with lots of files and subfolders, so I thought it might be quick at counting files as well.
    – psmith
    Commented Aug 22, 2016 at 14:01
  • 6
    Tried this. After running both twice beforehand to populate the fs cache, find ~ -type f | wc -l took 1.7/0.5/1.33 seconds (real/user/sys). rsync --stats --dry-run -ax ~ /xxx took 4.4/3.1/2.1 seconds. That's for about 500,000 files on SSD.
    – slim
    Commented Jul 7, 2017 at 9:58
  • Dunno what version of rsync you used, but in 3.1.2 it's a little easier to read: Number of files: 487 (reg: 295, dir: 192)
    – mpen
    Commented Nov 10, 2017 at 2:16
  • I used the default rsync on macOS: rsync version 2.6.9 protocol version 29
    – psmith
    Commented Nov 10, 2017 at 3:34
67

You can use

$ tree

after installing the tree package with

$ sudo apt-get install tree

(on a Debian / Mint / Ubuntu Linux machine).

The command shows not only the count of the files, but also the count of the directories, separately. The option -L can be used to specify the maximum display level (which, by default, is the maximum depth of the directory tree).

Hidden files can be included too by supplying the -a option.
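
For example, to include hidden files while limiting the display (and therefore the count) to two levels, one might run something like this (output illustrative):

$ tree -a -L 2 DIR_NAME | tail -1
14 directories, 93 files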

7
  • 4
    This is actually the simplest way to see number of directories and files.
    – XYZ
    Commented Jan 22, 2016 at 4:23
  • 14
    From the man page: By default tree does not print hidden files. You have to supply the -a option to include them.
    – eee
    Commented Mar 28, 2016 at 16:12
  • 3
    To install this on macOS, use brew and run brew install tree, preferably after running brew update. Commented Apr 11, 2018 at 14:58
  • 5
    It's also printing all the filenames, so it will be slow if you have many files. Commented Apr 24, 2018 at 18:45
  • 3
    Wow, very nice tool, it can print folders colorized, list only folders, output as JSON. It can list 34k folders and 51k files in very few seconds. Olé!
    – brasofilo
    Commented Jan 10, 2019 at 5:46
57

Since filenames in UNIX may contain newlines (yes, newlines), wc -l might count too many files. I would print a dot for every file and then count the dots:

find DIR_NAME -type f -printf "." | wc -c

Note: The -printf option only works with find from GNU findutils. You may need to install it, on a Mac for example.
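
If your find lacks -printf, a portable, newline-safe sketch uses a batched -exec printf instead; the %.0s precision consumes each filename without printing it, so exactly one dot is printed per file:

find DIR_NAME -type f -exec printf '.%.0s' {} + | wc -c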

7
  • 1
    Looks like this is the only solution that handles files with newlines in their names. Upvoted. Commented Nov 12, 2018 at 22:28
  • 4
    hihi :) I love newlines in filenames. That makes them just more readable.
    – hek2mgl
    Commented Nov 12, 2018 at 22:30
  • I mean, newlines in the file names, not the content! Commented Nov 12, 2018 at 22:31
  • 1
    I was just joking... Yeah, newlines in filenames always have to be taken into account. They could come from malicious content or less spectacular, from a typo.
    – hek2mgl
    Commented Nov 12, 2018 at 22:32
  • 1
    This will not work for every find. On OSX, you need to install GNU Find, for example, brew install findutils.
    – TA_intern
    Commented May 12, 2021 at 7:07
25

Combining several of the answers here together, the most useful solution seems to be:

find . -maxdepth 1 -type d -print0 |
xargs -0 -I {} sh -c 'echo -e $(find "{}" -printf "\n" | wc -l) "{}"' |
sort -n

It can handle odd things like file names that include spaces, parentheses, and even newlines. It also sorts the output by the number of files.

You can increase the number after -maxdepth to have subdirectories counted too. Keep in mind that this can potentially take a long time, particularly if you have a highly nested directory structure combined with a high -maxdepth number.
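
For example, with -maxdepth 2 (directory names and counts are illustrative):

find . -maxdepth 2 -type d -print0 |
xargs -0 -I {} sh -c 'echo -e $(find "{}" -printf "\n" | wc -l) "{}"' |
sort -n

12 ./docs
329 ./src/lib
646 ./src
659 .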

2
  • What's with the echo -e? I guess you put it in to fold any newlines, but it will also mangle any other irregular whitespace, and attempt to expand any wildcard characters present verbatim in the file names. I'd go simply with something like find .* * -type d -execdir sh -c 'find . -type f -printf "\n" | wc -l; pwd' and live with any aberrations in the output, or maybe play with Bash's printf "%q" for printing the directory name.
    – tripleee
    Commented Jun 8, 2019 at 9:02
  • this is the best answer for doing more than one dir at a time and capturing dirs with white space!
    – ikwyl6
    Commented Jun 16, 2020 at 1:21
18

If you want to know how many files and sub-directories exist from the present working directory, you can use this one-liner:

find . -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo -e $(find {} | wc -l) {}' | sort -n

This works with the GNU flavour; just omit the -e from the echo command for BSD userland (e.g. OS X).

3
  • 3
    Excellent solution! The only issue I found was directories with spaces or special characters. Add quotes where the dir name is used: find . -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo -e $(find "{}" | wc -l) "{}"' | sort -n
    – John Kary
    Commented May 21, 2015 at 21:09
  • 2
    I've modified it a bit and it works quite well for me: find . -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo $(find {} | wc -l) \\t {}' | sort -rn | less
    – Wizek
    Commented Dec 1, 2015 at 3:04
  • My comments on @Sebastian's answer apply here too. The use of echo -e (or just echo as in the preceding comment) on an unquoted directory name trades one problem for another.
    – tripleee
    Commented Jun 8, 2019 at 9:04
17

You can use the command ncdu. It will recursively count how many files a Linux directory contains. Here is an example of output:

[screenshot of ncdu output]

It has a progress bar, which is convenient if you have many files:

[screenshot of ncdu's progress bar]

To install it on Ubuntu:

sudo apt-get install -y ncdu

Benchmark: I used https://archive.org/details/cv_corpus_v1.tar (380,390 files, 11 GB) as the test folder in which to count files.

  • find . -type f | wc -l: around 1m20s to complete
  • ncdu: around 1m20s to complete
13
  • 1
    That mainly calculates the disk usage, not the number of files, and this additional overhead is likely not wanted (besides the need to install an additional package for something that can be done with standard POSIX utilities).
    – hek2mgl
    Commented Apr 24, 2018 at 19:12
  • 3
    @hek2mgl I added a reproducible benchmark in the answer, I ran it twice and I didn't see any difference between find . -type f | wc -l and ncdu. Commented Apr 24, 2018 at 19:52
  • 2
    yes, looks like find is under the hood executing more or less the same system calls as du which is the backend for ncdu. Just straced them.
    – hek2mgl
    Commented Apr 24, 2018 at 20:05
  • 1
    @FranckDernoncourt loved it. I have a ton of files in a folder and having a progress bar is a life saver. Thanks for sharing!
    – Geek
    Commented May 22, 2018 at 0:45
  • 1
    Press c inside ncdu to enable a display of recursive file counts next to the file/folder size. Discovered this today by pressing ? (as it says on the top) Commented Aug 8, 2022 at 21:26
17

If what you need is to count a specific file type recursively, you can do:

find YOUR_PATH -name '*.html' -type f | wc -l 

-l just counts the number of lines in the output.

If you need to exclude certain folders, use -not -path:

find . -not -path './node_modules/*' -name '*.js' -type f | wc -l
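
To count more than one extension at a time, the -name tests can be grouped; the extensions here are just an example:

find . -not -path './node_modules/*' \( -name '*.js' -o -name '*.jsx' \) -type f | wc -l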
1
  • The extension is part of the filename and may not represent the file TYPE
    – Waxhead
    Commented Jan 12, 2019 at 11:23
13
tree $DIR_PATH | tail -1

Sample Output:

5309 directories, 2122 files
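
If only the file count is wanted, the line can be trimmed further with cut (as a comment below also suggests):

tree $DIR_PATH | tail -1 | cut -d',' -f2
 2122 files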

1
  • This is the simplest solution that produces (almost) the precise information requested. The only thing closer for this solution would be to pipe it through cut -d',' -f2.
    – SunSplat
    Commented Aug 15, 2021 at 22:50
9

If you want to avoid error cases, don't allow wc -l to see files with newlines (which it will count as 2+ files)

e.g. consider a case where we have a single file with a single EOL (newline) character in its name

> mkdir emptydir && cd emptydir
> touch $'file with EOL(\n) character in it'
> find -type f
./file with EOL(?) character in it
> find -type f | wc -l
2

Since at least GNU wc does not appear to have an option to read/count a null-terminated list (except from a file), the easiest solution is just not to pass it filenames, but a static output each time a file is found, e.g. in the same directory as above

> find -type f -exec printf '\n' \; | wc -l
1

Or if your find supports it

> find -type f -printf '\n' | wc -l
1 
5

To determine how many files there are in the current directory, use ls -1 | wc -l. This uses wc to count the number of lines (-l) in the output of ls -1. It doesn't count dotfiles. Please note that ls -l (that's an "L" rather than a "1" as in the previous examples), which I used in previous versions of this HOWTO, will actually give you a file count one greater than the actual count. Thanks to Kam Nejad for this point.

If you want to count only files and NOT include symbolic links (just an example of what else you could do), you could use ls -l | grep -v ^l | wc -l (that's an "L" not a "1" this time, we want a "long" listing here). grep checks for any line beginning with "l" (indicating a link), and discards that line (-v).

Relative speed: "ls -1 /usr/bin/ | wc -l" takes about 1.03 seconds on an unloaded 486SX25 (/usr/bin/ on this machine has 355 files). "ls -l /usr/bin/ | grep -v ^l | wc -l" takes about 1.19 seconds.

Source: http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x700.html

4
  • 3
    ls -l must do a stat syscall on every file to read its size, mtime and other properties, which is slow. On big directories (100,000+ files) running ls -l can take several minutes. So to only count files, always use ls -1 | wc -l.
    – Marki555
    Commented Nov 13, 2014 at 21:19
  • A 486SX25, nice
    – cam8001
    Commented Oct 5, 2017 at 1:47
  • ls -1 can still be slow in large directories, because it has to sort the files. Simply printf '%s\n' * does the same thing, and avoids the external ls call (which is problematic anyhow) but the most efficient soluton is to use a command which doesn't perform any sorting, such as find. (The glob output is sorted by the shell.)
    – tripleee
    Commented Jun 8, 2019 at 11:22
  • When I do this with only one file in a folder the answer is 2. Commented Dec 1, 2021 at 5:53
4

For directories with spaces in the name ... (based on various answers above) -- recursively print directory name with number of files within:

find . -mindepth 1 -type d -print0 | while IFS= read -r -d '' i ; do echo -n $i": " ; ls -p "$i" | grep -v / | wc -l ; done

Example (formatted for readability):

pwd
  /mnt/Vancouver/Programming/scripts/claws/corpus

ls -l
  total 8
  drwxr-xr-x 2 victoria victoria 4096 Mar 28 15:02 'Catabolism - Autophagy; Phagosomes; Mitophagy'
  drwxr-xr-x 3 victoria victoria 4096 Mar 29 16:04 'Catabolism - Lysosomes'

ls 'Catabolism - Autophagy; Phagosomes; Mitophagy'/ | wc -l
  138

## 2 dir (one with 28 files; other with 1 file):
ls 'Catabolism - Lysosomes'/ | wc -l
  29

The directory structure is better visualized using tree:

tree -L 3 -F .
  .
  ├── Catabolism - Autophagy; Phagosomes; Mitophagy/
  │   ├── 1
  │   ├── 10
  │   ├── [ ... SNIP! (138 files, total) ... ]
  │   ├── 98
  │   └── 99
  └── Catabolism - Lysosomes/
      ├── 1
      ├── 10
      ├── [ ... SNIP! (28 files, total) ... ]
      ├── 8
      ├── 9
      └── aaa/
          └── bbb

  3 directories, 167 files

man find | grep mindep
  -mindepth levels
    Do not apply any tests or actions at levels less than levels
    (a non-negative integer).  -mindepth 1 means process all files
    except the starting-points.

ls -p | grep -v / (used below) is from answer 2 at https://unix.stackexchange.com/questions/48492/list-only-regular-files-but-not-directories-in-current-directory

find . -mindepth 1 -type d -print0 | while IFS= read -r -d '' i ; do echo -n $i": " ; ls -p "$i" | grep -v / | wc -l ; done
./Catabolism - Autophagy; Phagosomes; Mitophagy: 138
./Catabolism - Lysosomes: 28
./Catabolism - Lysosomes/aaa: 1

Application: I want to find the max number of files among several hundred directories (all depth = 1) [output below again formatted for readability]:

date; pwd
    Fri Mar 29 20:08:08 PDT 2019
    /home/victoria/Mail/2_RESEARCH - NEWS

time find . -mindepth 1 -type d -print0 | while IFS= read -r -d '' i ; do echo -n $i": " ; ls -p "$i" | grep -v / | wc -l ; done > ../../aaa
    0:00.03

[victoria@victoria 2_RESEARCH - NEWS]$ head -n5 ../../aaa
    ./RNA - Exosomes: 26
    ./Cellular Signaling - Receptors: 213
    ./Catabolism - Autophagy; Phagosomes; Mitophagy: 138
    ./Stress - Physiological, Cellular - General: 261
    ./Ancient DNA; Ancient Protein: 34

[victoria@victoria 2_RESEARCH - NEWS]$ sed -r 's/(^.*): ([0-9]{1,8}$)/\2: \1/g' ../../aaa | sort -V | (head; echo ''; tail)

    0: ./Genomics - Gene Drive
    1: ./Causality; Causal Relationships
    1: ./Cloning
    1: ./GenMAPP 2
    1: ./Pathway Interaction Database
    1: ./Wasps
    2: ./Cellular Signaling - Ras-MAPK Pathway
    2: ./Cell Death - Ferroptosis
    2: ./Diet - Apples
    2: ./Environment - Waste Management

    988: ./Genomics - PPM (Personalized & Precision Medicine)
    1113: ./Microbes - Pathogens, Parasites
    1418: ./Health - Female
    1420: ./Immunity, Inflammation - General
    1522: ./Science, Research - Miscellaneous
    1797: ./Genomics
    1910: ./Neuroscience, Neurobiology
    2740: ./Genomics - Functional
    3943: ./Cancer
    4375: ./Health - Disease 

sort -V is a natural sort. ... So, my max number of files in any of those (Claws Mail) directories is 4375 files. If I left-pad (https://stackoverflow.com/a/55409116/1904943) those filenames -- they are all named numerically, starting with 1, in each directory -- and pad to 5 total digits, I should be ok.
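
A minimal sketch of that left-padding idea, run inside one such directory (assuming the names are plain integers with no leading zeros; mv -n refuses to overwrite if a target name already exists):

for f in [0-9]*; do mv -n "$f" "$(printf '%05d' "$f")"; done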


Addendum

Find the total number of files and subdirectories in a directory.

$ date; pwd
Tue 14 May 2019 04:08:31 PM PDT
/home/victoria/Mail/2_RESEARCH - NEWS

$ ls | head; echo; ls | tail
Acoustics
Ageing
Ageing - Calorie (Dietary) Restriction
Ageing - Senescence
Agriculture, Aquaculture, Fisheries
Ancient DNA; Ancient Protein
Anthropology, Archaeology
Ants
Archaeology
ARO-Relevant Literature, News

Transcriptome - CAGE
Transcriptome - FISSEQ
Transcriptome - RNA-seq
Translational Science, Medicine
Transposons
USACEHR-Relevant Literature
Vaccines
Vision, Eyes, Sight
Wasps
Women in Science, Medicine

$ find . -type f | wc -l
70214    ## files

$ find . -type d | wc -l
417      ## subdirectories
4

With bash:

Create an array of entries with ( ) and get the count with ${#...[@]}.

FILES=(./*); echo ${#FILES[@]}

OK, that doesn't recursively count files, but I wanted to show the simple option first. A common use case might be creating rollover backups of a file. This will create logfile.1, logfile.2, logfile.3 etc.

CNT=(./logfile*); mv logfile logfile.${#CNT[@]}

Recursive count with bash 4+ globstar enabled via shopt -s globstar (as mentioned by @tripleee):

FILES=(**/*); echo ${#FILES[@]}

To get the count of files recursively we can still use find in the same way (note that the unquoted command substitution word-splits filenames containing whitespace):

FILES=($(find . -type f)); echo ${#FILES[@]}
2
  • Modern shells support **/* for recursive enumeration. It's still less efficient than find on large directories because the shell has to sort the files in each directory.
    – tripleee
    Commented Jun 8, 2019 at 12:16
  • Storing the whole search in an Bash array just to count it later is rather inefficient and can eat up a lot of memory until the enumeration completes. For very large directory trees this can be a real problem. Commented Jan 25, 2021 at 0:51
4

find . -type f -name '*.fileextension' | wc -l

Replace the . with your directory path and the file extension with the real extension. For example, if you are looking for all png files, replace it with *.png.

2

There are many correct answers here. Here's another!

find . -type f | sort | uniq -w 10 -c

where . is the folder to look in and 10 is the number of leading characters by which to group the paths.

2

I have written ffcnt to speed up recursive file counting under specific circumstances: rotational disks and filesystems that support extent mapping.

It can be an order of magnitude faster than ls or find based approaches, but YMMV.

2

Suppose you want per-directory file totals; try:

for d in `find YOUR_SUBDIR_HERE -type d`; do 
   printf "$d - files > "
   find $d -type f | wc -l
done

For the current dir, try this:

for d in `find . -type d`; do printf "$d - files > "; find $d -type f | wc -l; done;

If the directory names contain spaces, you need to change IFS, like this:

OIFS=$IFS; IFS=$'\n'
for d in `find . -type d`; do printf "$d - files > "; find $d -type f | wc -l; done
IFS=$OIFS
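
Alternatively, a null-delimited find loop avoids the IFS juggling entirely (a sketch based on patterns from other answers here):

find . -type d -print0 | while IFS= read -r -d '' d; do printf '%s - files > ' "$d"; find "$d" -type f | wc -l; done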
2

We can use the tree command: it displays all the files and folders recursively, and it shows the count of folders and files in the last line of its output.

$ tree path/to/folder/
path/to/folder/
├── a-first.html
├── b-second.html
├── subfolder
│   ├── readme.html
│   ├── code.cpp
│   └── code.h
└── z-last-file.html

1 directory, 6 files

To get only the last line of the tree output, we can run the tail command on it:

$ tree path/to/folder/ | tail -1
1 directory, 6 files

To install tree, we can use the command below:

$ sudo apt-get install tree
0

This alternative approach, filtering by filename pattern, counts all available GRUB kernel modules:

ls -l /boot/grub/*.mod | wc -l
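
The same count can be had without parsing ls output, using find as in the other answers; a sketch (paths as in the original example):

find /boot/grub -maxdepth 1 -name '*.mod' -type f | wc -l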
0

Based on the answers and comments above, I came up with the following file count listing. It is essentially a combination of the solution provided by @Greg Bell, with comments from @Arch Stanton & @Schneems.

Count all files in the current directory & subdirectories

function countit { find . -maxdepth 1000000 -type d -print0 | while IFS= read -r -d '' i ; do file_count=$(find "$i" -type f | wc -l) ; echo "$file_count: $i" ; done }; countit | sort -n -r >file-count.txt

Count all files of given name in the current directory & subdirectories

function countit { find . -maxdepth 1000000 -type d -print0 | while IFS= read -r -d '' i ; do file_count=$(find "$i" -type f | grep <enter_filename_here> | wc -l) ; echo "$file_count: $i" ; done }; countit | sort -n -r >file-with-name-count.txt
0

The following solution is especially useful for SSDs (as it is designed to run fast on them):

One can use gdu. It will recursively count how many files a Linux directory contains. Here is an example of output (demo by dundee):

[screenshot of gdu output]

To install on Ubuntu:

sudo add-apt-repository ppa:daniel-milde/gdu
sudo apt-get update
sudo apt-get install gdu

See the installation page for other OSes and ways how to install Gdu.

From the readme:

Gdu is intended primarily for SSD disks where it can fully utilize parallel processing. However HDDs work as well, but the performance gain is not so huge.

The readme points to similar programs:

  • ncdu - NCurses based tool written in pure C (LTS) or zig (Stable)

  • godu - Analyzer with a carousel like user interface

  • dua - Tool written in Rust with interface similar to gdu (and ncdu)

  • diskus - Very simple but very fast tool written in Rust

  • duc - Collection of tools with many possibilities for inspecting and visualising disk usage

  • dust - Tool written in Rust showing tree like structures of disk usage

  • pdu - Tool written in Rust showing tree like structures of disk usage

-1

find -type f | wc -l

OR (if the directory is the current directory)

find . -type f | wc -l

2
  • This duplicates at least one other answer to this same question. Commented May 19, 2018 at 10:06
  • It is also wrong since find -type f and find . -type f are equivalent. By default, find searches files in the current directory, so this answer is both repetitive and wrong. Commented Jul 26, 2022 at 9:13
-2

This works completely fine. Simple and short, if you want to count the number of files present in a folder:

ls | wc -l
1
  • 4
    First of all, this does not answer the question. The question is about recursively counting files from a directory forward and the command you show does not do that. furthermore, with ls you are counting directories as well as files. Also, there is no reason to answer an old question if you are not going to add anything new and are not even going to read the question properly. Please refrain from doing so.
    – XFCC
    Commented Apr 10, 2018 at 14:39
-3
ls -l | grep -e -x -e -dr | wc -l
  1. long listing
  2. filter the files and directories
  3. count the filtered lines
