The man page for du says right in its short description that it estimates file space usage. There are many filesystem features that can make its output inaccurate.
For sparse files, the block-based size is always smaller than the apparent size, so the apparent size grossly overestimates actual storage usage.
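A quick way to see the sparse-file case, assuming GNU coreutils and a filesystem with sparse file support (the file name is just an example):

    truncate -s 1G sparse.img                 # creates a hole; allocates no data blocks
    du -h sparse.img                          # block-based size: likely 0
    du -h --apparent-size sparse.img          # apparent size: 1.0G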
In nearly all other cases, the apparent size does not take block-granularity allocation or metadata blocks into account, and so underestimates a file's storage usage.
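The underestimate is just as easy to demonstrate; on a typical filesystem with 4K blocks (again assuming GNU coreutils):

    printf x > tiny.txt
    du -B1 --apparent-size tiny.txt           # 1 (byte)
    du -B1 tiny.txt                           # typically 4096: one full block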
The only things I can think of that would make du overestimate the number of blocks are specific to the underlying storage filesystem:
- block packing, storing multiple files in a single block.
- filesystem deduplication and copy-on-write (CoW).
Both of these would cause du to overestimate the number of blocks used, but only the former leaves the apparent size unaffected; with dedup or CoW the apparent size overestimates real usage as well.
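As a sketch of the CoW case: on btrfs, or XFS with reflink support, a reflinked copy shares all of its blocks with the original, yet du counts both files in full (file names here are placeholders):

    cp --reflink=always big.dat big-copy.dat
    du -ch big.dat big-copy.dat               # total is roughly double the real allocation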
So for du's apparent size to be closer to df's output than du's block-based size, the underlying filesystem may be doing block packing, which makes block counts useless for size estimates. Knowing which filesystem it is (df -T on the server) would help determine whether this is likely.
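For example, run on the server against the actual export path (the path here is a placeholder):

    df -T /export/share                       # the Type column shows the filesystem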
The other possibility would be that du is basing its block estimate on larger blocks (e.g., 8K) than are actually in use (e.g., 512 bytes), but this seems unlikely, as the block size is something the filesystem should report correctly.
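If you want to check that directly, GNU coreutils' stat can report the block size the filesystem uses for block counts (path again a placeholder):

    stat -f -c 'fundamental block size: %S' /export/share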
Finally, the output of

    df -BM --output=fstype,used,target

on the underlying filesystem (not the NFS mount) might be useful.
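Putting it together, comparing all three views side by side should point at the source of the discrepancy (/export/share is again a placeholder for the real mount point):

    du -sBM /export/share                     # block-based total
    du -sBM --apparent-size /export/share     # apparent-size total
    df -BM --output=fstype,used,target /export/share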