
I am using ZFS with compression+dedup on my host system and Qemu/KVM for virtualization. I wanted to let ZFS do its job, so I disabled Windows 10's NTFS compression features (both CompactOS and regular NTFS file compression). However, things didn't work as expected, so I did some more testing, as shown below.

TL;DR How can NTFS compression really be disabled? Windows seems to ignore both the settings and the GPO... o.O

It turns out there was no issue, just confusion about du's output on ZFS systems. See the accepted answer.

Test

1 Disable compression

  1. fsutil behavior set disablecompression 1
  2. compact /compactos:never (this shouldn't affect this test, but let's disable it anyway)
  3. Set "do not allow compression on NTFS volumes" via GPO
  4. reboot

2 Create test data

uncompressible:

$ dd if=/dev/urandom of=uncomp.bin bs=1M count=5500
$ dd if=/dev/urandom of=uncomp2.bin bs=1M count=3000
# I had Ctrl-C'ed somewhere so I finally ended up with these sizes:
# du -scm uncomp*
2861    uncomp2.bin
3642    uncomp.bin
6503    total
# check
$ tar -cf - uncomp* | lz4 >uncomp.comp
$ du -m uncomp.comp
6503    uncomp.comp

and compressible:

$ lzdgen -o comp -r 3.0 -s 6503m
# check
$ cat comp | lz4 >comp.comp
$ du -h comp.comp
2,2G    comp.comp
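The two extremes can also be sanity-checked with stock tools. A quick sketch using gzip instead of lz4 (assuming GNU gzip, in case lz4 is not installed): random input stays at full size, while a constant filler collapses to almost nothing:

```shell
# Random data: incompressible, output slightly larger than input.
# Constant data: collapses to a few KiB.
dd if=/dev/urandom of=rand.bin bs=1M count=8 2>/dev/null
dd if=/dev/zero    of=zero.bin bs=1M count=8 2>/dev/null
gzip -k rand.bin zero.bin
stat -c '%n %s' rand.bin.gz zero.bin.gz
```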

3 Create raw test disks

$ qemu-img create -f raw testdisk1.img 10G
$ qemu-img create -f raw testdisk.img 10G

Attached both disks via VirtIO to the Windows VM, initialized and formatted them as NTFS volumes.

4 Copy test data to test disk

Copy uncomp* to testdisk and comp to testdisk1 and check disk usage.

[screenshots: disk usage properties of testdisk and testdisk1]

Both volumes show the same disk usage, as expected.

5 Check raw disk image sizes

With NTFS compression disabled, I would expect the two raw disk images to have exactly the same size; however, the (compressible) data still gets compressed:

$ du -h testdisk*
2,2G    testdisk1.img
6,4G    testdisk.img

$ qemu-img info testdisk.img 
image: testdisk.img
file format: raw
virtual size: 10 GiB (10737418240 bytes)
disk size: 6.35 GiB

$ qemu-img info testdisk1.img 
image: testdisk1.img
file format: raw
virtual size: 10 GiB (10737418240 bytes)
disk size: 2.17 GiB

6 Check data size of testdisk1 when mounted

Just to be sure ...

$ sudo qemu-nbd -c /dev/nbd0 testdisk1.img
$ mkdir /tmp/mnt
$ sudo mount /dev/nbd0p2 /tmp/mnt
$ df -h
...
/dev/nbd0p2                               10G  6,4G  3,6G  64% /tmp/mnt
$ du -sch /tmp/mnt
6,4G    /tmp/mnt
6,4G    total

Update 1

After verifying testdisk1 (thanks for the hint @DanielB!), I realized that lzdgen creates nullbyte-holes, which is pretty bad for testing. Thus, I've redone the test using a file created this way:

./fio --name=randomwrite --ioengine=sync --rw=readwrite --bs=4k --size=5600M --buffer_compress_percentage=50 --refill_buffers --buffer_pattern=0xdeadbeef --filename=comp2

Instead of null-byte holes, the pattern 0xdeadbeef is used as filler. The file is compressible (roughly 2:1), as intended:

$ du -h comp2
5,5G    comp2
$ cat comp2 | lz4 >yyy
$ du -h yyy
2,8G    yyy
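The same kind of test file can be approximated without fio. A sketch using dd and tr, alternating random data with an arbitrary non-zero filler byte (0xbe here) so the result is about 50% compressible but contains no holes:

```shell
# Alternate 1 MiB of random data with 1 MiB of a constant,
# non-zero filler byte (0xbe), so no sparse holes can appear.
for i in $(seq 8); do
  dd if=/dev/urandom bs=1M count=1 2>/dev/null
  dd if=/dev/zero    bs=1M count=1 2>/dev/null | tr '\0' '\276'
done > mix.bin
gzip -k mix.bin
du -m mix.bin mix.bin.gz   # 16 MiB in, roughly half that out
```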

After copying the file comp2 to the (freshly recreated) testdisk1, the raw image looks like this:

$ du -h testdisk1.img 
3,0G    testdisk1.img

Using hexedit I searched for the beginning of the file and dumped it using dd:

$ dd if=testdisk1.img of=comp2_dump skip=3367040 bs=1k count=5734400
5734400+0 records in
5734400+0 records out
5872025600 bytes (5,9 GB, 5,5 GiB) copied, 48,2982 s, 122 MB/s
$ du -sch comp2_dump
3,0G    comp2_dump
3,0G    total
$ sha1sum comp2_dump comp2 
22171686b58536d50c8b516ee859213650cb6b55  comp2_dump
22171686b58536d50c8b516ee859213650cb6b55  comp2

Wat....

  • Remember that a file that has already been flagged as compressed by NTFS will stay compressed.
    – djdomi
    Commented Jan 3, 2023 at 19:17
  • NTFS compression (not specific to Windows 10, btw) is not magically active, especially on files the user creates. It is an attribute that you must manually enable on files. So don't guess whether the files are compressed or not, check.
    – Daniel B
    Commented Jan 3, 2023 at 19:36
  • @djdomi the files are not flagged, as they were created freshly, as you can see ;)
    – c0d3z3r0
    Commented Jan 3, 2023 at 21:46
  • @DanielB Ok, checked and found out two things: a) lzdgen uses holes which is pretty bad for testing, so I've tested again with fio generating non-zero holes (0xdeadbeef instead) b) even though the file gets written to the disk as-is (non-compressed!) and I can extract it from testdisk1 without decompression, du shows 3.0G.... weird
    – c0d3z3r0
    Commented Jan 3, 2023 at 22:35

1 Answer


Ok, so I finally found out what's going on here: on ZFS, du shows the allocated (i.e. compressed/deduplicated) size rather than the logical file size. I copied the file comp2 to a ZFS dataset without compression/dedup, and du shows the correct 5,5G:

$ sudo zfs create -o mountpoint=/wtf rpool/zroot/wtf
$ sudo cp comp2 /wtf/
$ zfs list rpool/zroot/wtf
NAME              USED  AVAIL     REFER  MOUNTPOINT
rpool/zroot/wtf  5.47G   224G     5.47G  /wtf
$ du -h /wtf/comp2 
5,5G    /wtf/comp2

The same happens with testdisk1:

$ sudo rm /wtf/comp2
$ sudo cp testdisk1.img /wtf/
$ zfs list rpool/zroot/wtf
NAME              USED  AVAIL     REFER  MOUNTPOINT
rpool/zroot/wtf  5.49G   224G     5.49G  /wtf
$ du -h /wtf/testdisk1.img
5,5G    /wtf/testdisk1.img
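The underlying behavior is not ZFS-specific: GNU du reports allocated blocks by default, while du --apparent-size reports the logical file length (which would have shown the expected 5,5G above). A sparse file demonstrates the same gap on any filesystem:

```shell
# du counts allocated blocks; --apparent-size reports logical length.
truncate -s 100M sparse.bin       # 100 MiB logical, ~0 allocated
du -m sparse.bin                  # allocated: ~0
du -m --apparent-size sparse.bin  # logical: 100
```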
  • Now edit the question and accept the answer so it makes sense and is useful to others. Commented Jan 4, 2023 at 1:38
