
If I have a big file containing many zeros, how can I efficiently make it a sparse file?

Is the only possibility to read the whole file (including all zeroes, which may partially be stored sparse) and to rewrite it to a new file, using seek to skip the zero areas?

Or is there a possibility to do this on an existing file (e.g. File.setSparse(long start, long end))?

I'm looking for a solution in Java or some Linux commands. The filesystem will be ext3 or similar.

  • The first solution is implemented in cp --sparse=always, but that is not efficient and requires copying the file and moving it afterwards.
    – rurouni
    Commented May 13, 2011 at 8:39
  • stackoverflow.com/questions/245251/…
    – joe776
    Commented May 13, 2011 at 8:41
  • @joe: that is about creating a sparse file from scratch, but I want to make an existing file sparse.
    – rurouni
    Commented May 13, 2011 at 8:45
  • @rurouni, If the holes are large enough, perhaps it is worth breaking up the file and using the filesystem to delete/remove sections. Commented May 13, 2011 at 9:15
  • Making a file sparse would result in those sections being fragmented if they were ever re-used. I think you would be better off pre-allocating the whole file and maintaining a table/BitSet of the pages/sections which are occupied. Perhaps saving a few TB of disk space is not worth the performance hit of a highly fragmented file. Commented May 13, 2011 at 9:21

5 Answers


A lot's changed in 8 years.

Fallocate

fallocate -d filename can be used to punch holes in existing files. From the fallocate(1) man page:

-d, --dig-holes
  Detect and dig holes.  This makes the file sparse in-place,
  without using extra disk space.  The minimum size of the hole
  depends on filesystem I/O block size (usually 4096 bytes).
  Also, when using this option, --keep-size is implied.  If no
  range is specified by --offset and --length, then the entire
  file is analyzed for holes.

  You can think of this option as doing a "cp --sparse" and then
  renaming the destination file to the original, without the
  need for extra disk space.

  See --punch-hole for a list of supported filesystems.

(That list:)

Supported for XFS (since Linux 2.6.38), ext4 (since Linux
3.0), Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).

tmpfs being on that list is the one I find most interesting. The filesystem itself is efficient enough to only consume as much RAM as it needs to store its contents, but making the contents sparse can potentially increase that efficiency even further.
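To illustrate the in-place behavior (a sketch; the file path and size are arbitrary), you can compare the allocated block count before and after digging holes:

```shell
# Write 8 MiB of literal zeros: the file is fully allocated, not sparse.
dd if=/dev/zero of=/tmp/zeros.bin bs=1M count=8 status=none

blocks_before=$(stat -c %b /tmp/zeros.bin)   # allocated 512-byte blocks
fallocate -d /tmp/zeros.bin                  # detect zero runs, punch holes in place
blocks_after=$(stat -c %b /tmp/zeros.bin)

echo "allocated blocks: $blocks_before -> $blocks_after"
```

The logical size (stat -c %s) stays at 8 MiB throughout; only the allocated blocks drop, which is exactly what --keep-size being implied means.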

GNU cp

Additionally, somewhere along the way GNU cp gained an understanding of sparse files. Quoting the cp(1) man page regarding its default mode, --sparse=auto:

sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well.

But there's also --sparse=always, which activates the file-copy equivalent of what fallocate -d does in-place:

Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes.
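For example (a sketch with arbitrary file names), copying a zero-filled file with --sparse=always yields a byte-identical copy that occupies far fewer disk blocks:

```shell
# Create a dense (fully allocated) file of zeros, then copy it sparsely.
dd if=/dev/zero of=/tmp/dense.bin bs=1M count=8 status=none
cp --sparse=always /tmp/dense.bin /tmp/sparse.bin

dense_blocks=$(stat -c %b /tmp/dense.bin)
sparse_blocks=$(stat -c %b /tmp/sparse.bin)
echo "dense: $dense_blocks blocks, sparse: $sparse_blocks blocks"

cmp /tmp/dense.bin /tmp/sparse.bin   # contents are byte-for-byte identical
```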

I've finally been able to retire my tar cpSf - SOURCE | (cd DESTDIR && tar xpSf -) one-liner, which for 20 years was my graybeard way of copying sparse files with their sparseness preserved.

  • Thank you. Your hint about GNU cp helped me. It works fast where other tools (e.g. rsync --sparse) were slow.
    – dsteinkopf
    Commented Oct 18, 2019 at 2:28

Some filesystems on Linux / UNIX have the ability to "punch holes" into an existing file.

It's not very portable and not done the same way across the board; as of right now, I believe Java's IO libraries do not provide an interface for this.

If hole punching is available either via fcntl(F_FREESP) or via any other mechanism, it should be significantly faster than a copy/seek loop.
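On modern Linux the same mechanism is also exposed by the fallocate(1) utility via --punch-hole, which takes an explicit byte range. A sketch (file name and offsets are arbitrary; requires a filesystem with hole-punching support, such as ext4, XFS, Btrfs, or tmpfs):

```shell
# 4 MiB of random data, fully allocated.
dd if=/dev/urandom of=/tmp/data.bin bs=1M count=4 status=none

# Deallocate bytes [1 MiB, 3 MiB): the logical size is unchanged
# (--keep-size is implied), but the range now reads back as zeros.
fallocate --punch-hole --offset 1048576 --length 2097152 /tmp/data.bin

stat -c 'size=%s blocks=%b' /tmp/data.bin
```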

  • Do you know if there is a tool that applies this to a file? I'm not an experienced C hacker.
    – rurouni
    Commented May 13, 2011 at 11:13
  • In Linux, use the FALLOC_FL_PUNCH_HOLE flag in fallocate.
    – pcworld
    Commented May 27, 2021 at 20:45

You can use $ truncate -s filename filesize on a Linux terminal to create a sparse file having only metadata.

NOTE: filesize is in bytes.
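For completeness, a sketch (note that the size argument comes before the file name, and suffixes like K, M, and G are accepted in addition to plain bytes):

```shell
# Create a 1 GiB file with no data blocks allocated: the logical size
# is 1 GiB, but it consumes (almost) no disk space until written to.
truncate -s 1G /tmp/empty.img

stat -c 'size=%s blocks=%b' /tmp/empty.img   # huge size, ~0 blocks
```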

  • Two problems here: (1) Your arguments are backwards; it should be truncate -s size filename. (The size can actually be in any specified units, e.g. 10K = 10240 bytes, 2MB = 2000000 bytes.) (2) The question asks about making an existing file sparse, whereas this will only create a new sparse file (or extend an existing file with a sparse region at the end).
    – FeRD
    Commented Jan 29, 2019 at 11:09

According to this article, it seems there is currently no easy solution, except for using the FIEMAP ioctl. However, I don't know how you can turn "non-sparse" zero blocks into "sparse" ones.


I think you would be better off pre-allocating the whole file and maintaining a table/BitSet of the pages/sections which are occupied.

Making a file sparse would result in those sections being fragmented if they were ever re-used. Perhaps saving a few TB of disk space is not worth the performance hit of a highly fragmented file.
