
I'm having an issue on a Xen VM where (running as root) df and baobab agree that 94% of my disk is used (25G out of 28G), but du accounts for only a fraction of that (3.3G).

The server has a simple LVM setup: a single 28G logical volume mounted at /. lvdisplay and vgdisplay both show that the entire volume is accounted for.

How am I missing almost 22G worth of space?

df Output
=========
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_dns-lv_root
                  29241444  25924244   1831788  94% /

df -h Output
============
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_dns-lv_root
                   28G   25G  1.8G  94% /

`du --max-depth=1 -h` Output
============================
96K     ./tmp
128K    ./home
23M     ./root
...
94M     ./etc
4.0K    ./.pulse
3.4G    .

3 Answers


If you delete (unlink) a file that is kept open by a process, its usage disappears from du but is still counted by df. As soon as the last process closes the file (which happens, at the latest, when that process exits), the space it used is deallocated and becomes available in df as well.
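
A quick way to confirm whether that is what's happening (a sketch; it assumes lsof is installed and is run as root):

lsof -n | grep deleted     # deleted-but-open files; the SIZE/OFF column shows the space each one still pins
lsof -nP +L1               # alternative: list only files with a link count of zero, no grep needed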

  • This is a named server. Is there a possibility that named is taking up all that space? Uptime is...wow...743 days. Commented Sep 16, 2014 at 14:14
  • @WilliamCarterBaller is rebooting an option? If the problem is a hung file, rebooting should clear it out.
    – terdon
    Commented Sep 16, 2014 at 14:30
  • You can check for deleted files with "lsof -n | grep deleted". As a bonus you also see what process is keeping those files open. You can empty such files by doing "> /proc/$process_id/fd/$fd", $process_id is the 2nd column, $fd is the 5th column.
    – wurtel
    Commented Sep 16, 2014 at 14:37
  • Wow @wurtel! You're totally right. Output of lsof -n | grep deleted gave: rsyslogd 943 root 1w REG 253,0 20649662640 122793 /var/log/messages.20140731 (deleted). 20649662640b translates to 20.64G. Restarting rsyslog fixed the issue! Thank you!! Commented Sep 16, 2014 at 15:03
  • It is life-saving to know that you may not be able to free disk space simply by deleting a large (and growing) file in an emergency disk-full situation; you have to make sure that the process writing to that file also closes it. If that's not an option, instead of deleting the file you should truncate its contents (:> filename), as sketched below. Commented Sep 16, 2014 at 15:16
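
Putting those two suggestions together, a sketch of the reclamation options (the PID and fd number are placeholders read off lsof output like the example later in this thread):

# The file still exists and is growing: truncate it in place so the
# writing process keeps a valid, now-empty file.
: > /var/log/messages           # equivalent: truncate -s 0 /var/log/messages

# The file is already deleted but still held open: truncate it through
# the holding process's file descriptor (PID and fd come from the lsof
# line, e.g. "rsyslogd 943 ... 1w ..." -> PID 943, fd 1).
: > /proc/943/fd/1

Either way the blocks are released immediately, without waiting for the process to exit.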

ext2/3/4 filesystems reserve a percentage of blocks that can only be used by root. Running sudo tune2fs -l /dev/sda1 will show the reserved block count in its output.

To turn off reserved blocks altogether, use the following command:

sudo tune2fs -m 0 /dev/sda1
  • tune2fs -l /dev/mapper/vg_dns-lv_root gives the following: Filesystem OS type: Linux Inode count: 1859584 Block count: 7427072 Reserved block count: 371353 Free blocks: 829324 Free inodes: 1736734 ... Commented Sep 16, 2014 at 14:11
  • That means Reserved Block Count is 371353, or 0.177074909G total? Commented Sep 16, 2014 at 14:22
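
To turn that reserved block count into actual space, multiply it by the filesystem block size reported in the same tune2fs -l output (a sketch; the device path is the one from the question, and tune2fs needs root):

DEV=/dev/mapper/vg_dns-lv_root
BS=$(sudo tune2fs -l "$DEV" | awk '/^Block size:/ {print $3}')
RES=$(sudo tune2fs -l "$DEV" | awk '/^Reserved block count:/ {print $4}')
echo "$((RES * BS)) bytes reserved"     # 371353 * 4096 bytes, roughly 1.4 GiB

With the usual 4 KiB ext block size that works out to roughly 1.4 GiB, so the root reserve accounts for only a small part of the missing ~22G.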

@wurtel's information fixed it.

Output of `lsof -n | grep deleted`
================================== 
rsyslogd 943 root 1w REG 253,0 20649662640 122793 /var/log/messages.20140731 (deleted). 

20649662640b translates to 20.64G. Restarting rsyslog fixed the issue!
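
For anyone who lands here with the same symptom, a sketch of the fix and verification (the service name and init commands can differ by distro):

service rsyslog restart      # on systemd distros: systemctl restart rsyslog
lsof -n | grep deleted       # should no longer list the rotated log file
df -h /                      # the freed space should now show as available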

Thank you everyone!!

  • kill -HUP on the rsyslogd process would have fixed it as well; that tells rsyslogd to close and reopen its log files. Now go figure out why the logfile rotation was performed but not the signalling :-)
    – wurtel
    Commented Sep 16, 2014 at 16:06
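
A sketch of that alternative, using the PID from the lsof output above:

kill -HUP 943                # or: pkill -HUP rsyslogd

rsyslogd reopens its log files on SIGHUP, so the deleted file is released without interrupting logging.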
