
Let's say I want to get the size of each directory of a Linux file system. When I use ls -la I don't really get the summarized size of the folders.

If I use df I get the size of each mounted file system but that also doesn't help me. And with du I get the size of each subdirectory and the summary of the whole file system.

But I want to have only the summarized size of each directory within the ROOT folder of the file system. Is there any command to achieve that?


9 Answers


This does what you're looking for:

du -sh /*

What this means:

  • -s to give only the total for each command line argument.
  • -h for human-readable suffixes like M for megabytes and G for gigabytes (optional).
  • /* simply expands to all directories (and files) in /.

    Note: dotfiles are not included; run shopt -s dotglob to include those too.

Also useful is sorting by size:

du -sh /* | sort -h

Here:

  • -h ensures that sort interprets the human-readable suffixes correctly.
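The dotglob note above can be sketched as follows (assuming bash, since dotglob is a bash-specific shell option):

```shell
# Include hidden (dot) directories and files in the glob as well.
# dotglob is bash-specific; this is a sketch of the note above.
shopt -s dotglob
du -sh /* | sort -h
shopt -u dotglob   # restore the default glob behaviour afterwards
```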
  • If you have dot-directories in the root directory, you can use shopt -s dotglob to include them in the count.
    – Philipp
    Commented Jul 11, 2010 at 21:55
  • It's very useful, because it's simple and you can place whatever path you want instead of /*, e.g. ./ for the current directory or ./* for each item in the current directory.
    – psur
    Commented Aug 9, 2012 at 6:22
  • @psur or you can use ./*/ to get only subfolders and not all items
    – relascope
    Commented Jan 28, 2016 at 23:48
  • Sorted version: du -sh /* | sort -h
    – Vu Anh
    Commented Jul 28, 2017 at 3:24
  • @c1phr If your sort doesn't have -h, you need to leave it off from du as well, otherwise the sorting will mix up kilo-/mega-/gigabytes: du -s /* | sort -nr.
    – Thomas
    Commented Aug 20, 2018 at 12:30

I often need to find the biggest directories, so to get a sorted list containing the 20 biggest dirs I do this:

du -m /some/path | sort -nr | head -n 20

In this case the sizes will be reported in megabytes.

  • Here's a way to get it more readable: du -sh /some/path | sort -hr | head -n 20
    – Xedecimal
    Commented Jul 8, 2013 at 15:49
  • @Xedecima the problem with using h is that sort doesn't know how to handle different sizes. For example 268K is sorted higher than 255M, and both are sorted higher than 2.7G
    – chrisan
    Commented Dec 27, 2013 at 18:33
  • The -h (human readable) argument on the sort command should properly read these values, just like du's -h flag exports them. Depending on what you're running, I'm guessing.
    – Xedecimal
    Commented Apr 14, 2014 at 18:02
  • Works in Ubuntu 16.04. Nice tip.
    – SDsolar
    Commented Mar 4, 2018 at 17:15
  • sudo du -haxt 1G / | sort -hr | head -30
    – deviant
    Commented Oct 5, 2018 at 10:03

I like to use ncdu for that: you can use the cursor to navigate and drill down through the directory structure, and it works really well.

  • Awesome. Keywords: du meets ncurses. You can use b to drop into a shell in the directory. Commented Jan 31, 2019 at 15:28

The existing answers are very helpful, maybe some beginner (like me) will find this helpful as well.

  1. Very basic loop, but for me this was a good start for some other size related operations:

    for each in $(ls) ; do du -hs "$each" ; done
    
  2. Very similar to the first answer, with nearly the same result as 1.), but it took me some time to understand the difference between * and ./* when in a subdirectory:

    du -sh ./*
    
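As the comments below point out, iterating over $(ls) can break on unusual file names or colorized ls output; a glob-based variant of the same loop (a sketch) avoids parsing ls entirely:

```shell
# Same idea as the loop above, but using a glob instead of parsing ls.
# The trailing slash limits the match to directories only.
for each in ./*/ ; do
    du -hs "$each"
done
```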
  • for each does not work as it appends console characters (eg \033[) to the list of folders Commented Feb 6, 2019 at 11:57
  • @machineaddict not sure what you mean. I use this all the time, works for me just fine.
    – Martin
    Commented Feb 6, 2019 at 12:57
  • try to run your command starting with for each. it will not work Commented Feb 6, 2019 at 14:34
  • i run the command exactly as written here. starting with for each. works.
    – Martin
    Commented Feb 7, 2019 at 14:29

The following du invocation should work on BSD systems:

du -d 1 /
  • My du (Ubuntu 10.4) doesn't have a -d option. What system are you on?
    – Thomas
    Commented Jul 11, 2010 at 17:30
  • On my openSUSE it doesn't have a -d option either :(
    – 2ndkauboy
    Commented Jul 11, 2010 at 17:33
  • OK, then it's a BSD option only (I'm on OS X).
    – Philipp
    Commented Jul 11, 2010 at 17:37
  • The right portable option combination on BSD/*NIX is du -sk /*. I hate the -k stuff soooo much. Linux's -h totally rocks.
    – Dummy00001
    Commented Jul 11, 2010 at 19:46
  • On other systems, it's --max-depth Commented Sep 1, 2015 at 12:49
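As the last comment notes, the GNU equivalent of BSD's -d is --max-depth (recent GNU coreutils also accept -d); a sketch:

```shell
# GNU du: limit the report to one level below the given directory.
du -h --max-depth=1 /
```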

This isn't easy. The du command either shows files and folders (default) or just the sizes of all items which you specify on the command line (option -s).

To get the largest items (files and folders), sorted, with human readable sizes on Linux:

du -h | sort -h

This will bury you in a ton of small files. You can get rid of them with --threshold (1 MB in my example):

du --threshold=1M -h | sort -h

The advantage of this command is that it includes hidden dot folders (folders which start with .).

If you really just want the folders, you need to use find but this can be very, very slow since du will have to scan many folders several times:

find . -type d -print0 | sort -z | xargs --null -I '{}' du -sh '{}' | sort -h
  • --threshold: this option is not available on Linux
    – podarok
    Commented Oct 6, 2015 at 12:30
  • @podarok It's available on OpenSUSE 13.2 Linux. Try to find a more recent version of your distribution, or compile a more recent version of the package yourself. Commented Oct 7, 2015 at 8:51
  • It doesn't work on Ubuntu LTS (14.04), which is the most recent one.
    – podarok
    Commented Oct 7, 2015 at 13:16
  • @podarok Which version of GNU coreutils? Mine is 8.24. Commented Oct 8, 2015 at 7:33
  • Caching might have been a bad term. I was thinking of something like what's done in this post superuser.com/a/597173/121352, where we scan the disk's contents once into a mapping and then continue using data from that mapping rather than hitting the disk again.
    – Hennes
    Commented Jan 12, 2016 at 14:09

You might also want to check out xdiskusage. It will give you the same information, but shown graphically, and it allows you to drill down (very useful). There are other similar utilities for KDE and even Windows.


Be aware that you can't compare directories with du across different systems/machines without making sure that both share the same filesystem block size. This can matter if you rsync some files from a Linux machine to a NAS and want to compare the synced directory yourself: you may get different results from du because of different block sizes.
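One way around this (a sketch, assuming GNU du): --apparent-size sums the file sizes themselves rather than the allocated blocks, so the totals are comparable across different block sizes:

```shell
# Compare directory contents independently of the filesystem block
# size by summing apparent file sizes instead of disk usage.
du -sh --apparent-size /some/path
```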


You could use ls in conjunction with awk:

ls -al * | awk 'BEGIN {tot=0;} {tot = tot + $5;} END {printf ("%.2fMb\n",tot/1024/1024);}'

The output of ls is piped to awk, which starts processing the data; the standard delimiter is whitespace. The sum variable tot is initialised to zero, and the following statement is executed for each row/line output by ls: it merely increments tot with the size, where $5 stands for the fifth column (output by ls). At the end we divide by (1024*1024) to get the sum in megabytes.

If you convert this into a script or function (e.g. in your .bashrc), you can also use it to get the size of certain subsets of directories, according to file types.
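Such a function could look like this (a sketch; the name dirsize is hypothetical, and the usual caveats about parsing ls output apply):

```shell
# Hypothetical helper: sum the sizes (column 5 of ls -l) of whatever
# the arguments expand to, and print the total in megabytes.
dirsize() {
    ls -al "$@" | awk '{tot += $5} END {printf "%.2f MB\n", tot/1024/1024}'
}

# Example: size of all .log files in the current directory
# dirsize *.log
```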

If you want system-wide information, kdirstat may come in handy!

  • I agree one can expand this example and do tricks like getting the size of "certain subsets of directories, according to filetypes" etc.; it may seem a good starting point. Nevertheless this solution is flawed from the start. To every user who would like to use this method I recommend reading the answers and comments to this question as well as the article linked there. I don't say you cannot do it at all. Know the limitations, that's all. Commented Dec 22, 2016 at 18:42
