7 events
when | what | by | license | comment
Jun 18, 2023 at 4:09 audit Low quality posts
Jun 18, 2023 at 4:10
Jun 8, 2023 at 1:04 comment added Austin Hemmelgarn @Mokubai Pretty much all filesystems that use transparent compression, barring a few rare cases, work like that, NTFS included. That said, the compression block size is usually larger than the filesystem’s typical allocation unit, because the overhead is just too high otherwise. One of the interesting side effects of this is that random read/write performance is usually poor with transparent compression for small op sizes, but significantly better for large ops (because you have to [de]compress a full compression block regardless for the small ops).
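To put rough numbers on the small-op penalty described in that comment, here is a minimal sketch (C, not from the comments themselves; it assumes the commonly cited 16-cluster compression unit, i.e. 64 KiB with 4 KiB clusters) of how many bytes must be (de)compressed to service a request of a given size:

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed NTFS compression unit: 16 clusters of 4 KiB = 64 KiB.
   This is an illustrative assumption, not something stated by the benchmark. */
#define COMPRESSION_UNIT (64u * 1024u)

/* Bytes the filesystem must (de)compress to service a request of `len`
   bytes at byte `offset`, if it always processes whole compression units. */
static uint64_t touched_bytes(uint64_t offset, uint64_t len)
{
    uint64_t first_unit = offset / COMPRESSION_UNIT;
    uint64_t last_unit  = (offset + len - 1) / COMPRESSION_UNIT;
    return (last_unit - first_unit + 1) * (uint64_t)COMPRESSION_UNIT;
}

int main(void)
{
    uint64_t small = touched_bytes(20480, 4096);   /* 4 KiB random read   */
    uint64_t large = touched_bytes(0, 1048576);    /* 1 MiB sequential op */

    printf("4 KiB op: %llu bytes decompressed (%.1fx amplification)\n",
           (unsigned long long)small, (double)small / 4096.0);
    printf("1 MiB op: %llu bytes decompressed (%.1fx amplification)\n",
           (unsigned long long)large, (double)large / 1048576.0);
    return 0;
}
```

Under those assumptions a 4 KiB random read forces a full 64 KiB unit to be decompressed (16x amplification), while a 1 MiB request spans whole units and sees essentially no amplification.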
Jun 7, 2023 at 19:44 comment added Joep van Steen @Mokubai, Yes, 16-cluster blocks (max 4KB clusters). I recall this from using RtlCompressBuffer in my NTFS file recovery tool to recover compressed files.
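For context, RtlCompressBuffer and its counterpart RtlDecompressBuffer are exported by ntdll.dll and callable from user mode. The following is a minimal round-trip sketch (compress, then decompress, one 4 KiB chunk with LZNT1, the format NTFS uses for compressed files); it is only an illustration of the API, not code from the recovery tool mentioned in the comment, and the 4 KiB chunk size is an assumption based on the cluster size discussed above:

```c
#include <windows.h>
#include <stdio.h>

/* ntdll exports; prototypes mirror ntifs.h. NTSTATUS is a LONG. */
typedef LONG NTSTATUS_T;
typedef NTSTATUS_T (NTAPI *RtlGetCompressionWorkSpaceSize_t)(USHORT, PULONG, PULONG);
typedef NTSTATUS_T (NTAPI *RtlCompressBuffer_t)(USHORT, PUCHAR, ULONG, PUCHAR, ULONG,
                                                ULONG, PULONG, PVOID);
typedef NTSTATUS_T (NTAPI *RtlDecompressBuffer_t)(USHORT, PUCHAR, ULONG, PUCHAR, ULONG,
                                                  PULONG);

#define COMPRESSION_FORMAT_LZNT1 0x0002

int main(void)
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    RtlGetCompressionWorkSpaceSize_t pGetWs =
        (RtlGetCompressionWorkSpaceSize_t)GetProcAddress(ntdll, "RtlGetCompressionWorkSpaceSize");
    RtlCompressBuffer_t   pCompress   = (RtlCompressBuffer_t)GetProcAddress(ntdll, "RtlCompressBuffer");
    RtlDecompressBuffer_t pDecompress = (RtlDecompressBuffer_t)GetProcAddress(ntdll, "RtlDecompressBuffer");
    if (!pGetWs || !pCompress || !pDecompress) return 1;

    UCHAR in[4096], out[8192], back[4096];                     /* one 4 KiB chunk   */
    for (int i = 0; i < 4096; i++) in[i] = (UCHAR)(i % 16);    /* compressible data */

    ULONG wsSize = 0, fragSize = 0, compressedSize = 0, finalSize = 0;
    pGetWs(COMPRESSION_FORMAT_LZNT1, &wsSize, &fragSize);
    PVOID workspace = HeapAlloc(GetProcessHeap(), 0, wsSize);
    if (!workspace) return 1;

    /* Compress one 4 KiB chunk, then decompress it again. */
    if (pCompress(COMPRESSION_FORMAT_LZNT1, in, sizeof(in), out, sizeof(out),
                  4096, &compressedSize, workspace) < 0) return 1;
    if (pDecompress(COMPRESSION_FORMAT_LZNT1, back, sizeof(back), out, compressedSize,
                    &finalSize) < 0) return 1;

    printf("4096 bytes -> %lu compressed -> %lu decompressed\n", compressedSize, finalSize);
    HeapFree(GetProcessHeap(), 0, workspace);
    return 0;
}
```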
Jun 7, 2023 at 16:40 comment added Mokubai It would also be interesting to monitor actual disk space usage during the test so that you can get an idea of space savings compared to speed improvements.
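A hedged sketch of one way to do that monitoring: GetCompressedFileSizeW reports the actual allocated (compressed) size, which can be compared against the file's logical size. The path below is a hypothetical example, not CrystalDiskMark's real test-file name:

```c
#include <windows.h>
#include <stdio.h>

/* Prints logical vs. actual (compressed) on-disk size for one file. */
int main(void)
{
    const wchar_t *path = L"D:\\CrystalDiskMark\\testfile.bin";  /* assumed path */

    WIN32_FILE_ATTRIBUTE_DATA fad;
    if (!GetFileAttributesExW(path, GetFileExInfoStandard, &fad)) {
        fprintf(stderr, "GetFileAttributesExW failed: %lu\n", GetLastError());
        return 1;
    }
    ULONGLONG logical = ((ULONGLONG)fad.nFileSizeHigh << 32) | fad.nFileSizeLow;

    DWORD high = 0;
    DWORD low = GetCompressedFileSizeW(path, &high);
    if (low == INVALID_FILE_SIZE && GetLastError() != NO_ERROR) {
        fprintf(stderr, "GetCompressedFileSizeW failed: %lu\n", GetLastError());
        return 1;
    }
    ULONGLONG onDisk = ((ULONGLONG)high << 32) | low;

    printf("logical: %llu bytes, on disk: %llu bytes (%.1f%% saved)\n",
           logical, onDisk,
           logical ? 100.0 * (1.0 - (double)onDisk / (double)logical) : 0.0);
    return 0;
}
```

From a shell, compact.exe reports a similar per-file compression ratio if you prefer not to write code.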
Jun 7, 2023 at 16:21 comment added phuclv Does CrystalDiskMark create a normal file on the NTFS partition, or does it access file blocks directly?
Jun 7, 2023 at 15:50 comment added Mokubai I remember reading sometime in the past that NTFS compression is optimised to compress fixed blocks of data (disk clusters?) as quickly as possible, and it wouldn't surprise me if it were independently compressing each block in a different thread. That way you could end up with incredibly high aggregate data rates, especially on modern multi-core processors. Multithreaded compression would have significant benefits in data centres, where high core counts are more common, and could provide real boosts to data throughput as a result.
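Whether the NTFS driver really parallelizes per-block compression is speculation in that comment; the sketch below only illustrates the idea that fixed-size compression units are independent and can therefore be handed to separate worker threads. The per-unit compressor is a hypothetical stub (a real implementation might call RtlCompressBuffer as in the earlier sketch):

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

#define UNIT_SIZE  (64 * 1024)   /* assumed 16-cluster compression unit */
#define UNIT_COUNT 8

typedef struct {
    const unsigned char *src;          /* one compression unit of input */
    unsigned char out[UNIT_SIZE];
    unsigned long outLen;
} UnitJob;

/* Hypothetical stand-in for the real per-unit compressor; here it just copies. */
static void compress_unit(UnitJob *job)
{
    memcpy(job->out, job->src, UNIT_SIZE);
    job->outLen = UNIT_SIZE;
}

static DWORD WINAPI worker(LPVOID param)
{
    compress_unit((UnitJob *)param);   /* units are independent, so this parallelizes */
    return 0;
}

int main(void)
{
    static unsigned char data[UNIT_COUNT * UNIT_SIZE];   /* pretend file contents */
    static UnitJob jobs[UNIT_COUNT];
    HANDLE threads[UNIT_COUNT];

    for (int i = 0; i < UNIT_COUNT; i++) {
        jobs[i].src = data + (size_t)i * UNIT_SIZE;
        threads[i] = CreateThread(NULL, 0, worker, &jobs[i], 0, NULL);
    }
    WaitForMultipleObjects(UNIT_COUNT, threads, TRUE, INFINITE);

    for (int i = 0; i < UNIT_COUNT; i++) CloseHandle(threads[i]);
    printf("compressed %d units independently\n", UNIT_COUNT);
    return 0;
}
```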
Jun 7, 2023 at 15:15 history answered Chris Betti CC BY-SA 4.0