
I'm using the Boost Libraries version 1.62.0 and the mapped_file_sink class from Boost.IOStreams.

I want to flush the written data to disk at will, but there is no mapped_file_sink::flush() member function.

My questions are:

  • How can I flush the written data when using mapped_file_sink?
  • If the above can't be done, why not, considering that msync() and FlushViewOfFile() are available for a portable implementation? (A sketch of what I have in mind follows.)
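For reference, here is a minimal sketch of the kind of portable flush I mean. flush_view is a hypothetical helper, not part of Boost.IOStreams, and it assumes the file was mapped from offset 0 so that data() is page-aligned:

```cpp
#include <boost/iostreams/device/mapped_file.hpp>
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/mman.h>
#endif

namespace io = boost::iostreams;

// Hypothetical helper: flush the sink's mapped view using the
// platform primitive directly, since mapped_file_sink exposes none.
bool flush_view(io::mapped_file_sink& sink)
{
#ifdef _WIN32
    // Queues the dirty pages for writing; does not wait for storage.
    return FlushViewOfFile(sink.data(), sink.size()) != 0;
#else
    // msync() needs a page-aligned address; sink.data() is page-aligned
    // when the file was mapped from offset 0.
    return msync(sink.data(), sink.size(), MS_SYNC) == 0;
#endif
}
```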

1 Answer


If you look at the mapped file support for the proposed Boost.AFIO v2 at https://ned14.github.io/boost.afio/classboost_1_1afio_1_1v2__xxx_1_1map__handle.html, you'll notice that it also lacks any ability to flush mapped file views.

The reason is that it's redundant on modern unified page cache kernels, where the mapped view is identical in every way to the page-cached buffers for that file. msync() is therefore a no-op on such kernels: dirty pages are already queued for writing out to storage as and when the system decides it is appropriate. You can block your process until the system has finished writing out all the dirty pages for that file using good old fsync().
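A minimal sketch of that, assuming POSIX and a platform (such as Linux) that permits fsync() on a read-only descriptor:

```cpp
#include <fcntl.h>
#include <unistd.h>

// Block until the kernel has written out every dirty page-cache buffer
// for the file - which, on a unified page cache kernel, includes pages
// dirtied through a mapped view of it.
bool sync_to_storage(const char* path)
{
    int fd = ::open(path, O_RDONLY);
    if (fd < 0)
        return false;
    int rc = ::fsync(fd);   // blocks until the dirty pages reach storage
    ::close(fd);
    return rc == 0;
}
```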

None of the above applies where (a) your kernel is not a unified page cache design (QNX, NetBSD, etc.) or (b) your file resides on a networked file system. In situation (a), it is best simply to avoid memory-mapped I/O altogether and just do read() and write(); such OSs are a small enough percentage nowadays that they can suffer the poor performance. In situation (b), you are strongly advised against ever using memory-mapped I/O with networked file systems. There is an argument for read-only maps of immutable files only; otherwise just don't do it unless you know what you're doing. Fall back to read() and write(): it's safer and less likely to surprise (a sketch of such a fallback follows).
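As a sketch of that fallback under POSIX: write_at is a hypothetical helper, and real code would want more thorough error handling:

```cpp
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Write len bytes at the given offset, then flush to storage.
// Plain positioned I/O has well-defined semantics even on networked
// and non-unified-page-cache systems.
bool write_at(int fd, const void* buf, size_t len, off_t offset)
{
    const char* p = static_cast<const char*>(buf);
    while (len != 0) {
        ssize_t n = ::pwrite(fd, p, len, offset);
        if (n < 0) {
            if (errno == EINTR)
                continue;       // interrupted; retry the write
            return false;
        }
        p += n;
        offset += n;
        len -= static_cast<size_t>(n);
    }
    return ::fsync(fd) == 0;    // block until the data reaches storage
}
```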

Finally, you linked to a secure file deletion program. Such programs no longer work reliably on recent file systems because of delayed extent allocation or copy-on-write allocation: when you rewrite a section of an existing file, the original data on storage is not modified; instead, new storage is allocated and the file's extents list is pointed at the new extents. This makes it easy to recover a consistent file system after unexpected data loss.

To securely delete data on recent file systems you usually need to use special OS APIs, though deleting all the files and then filling the free space with random data may securely delete most of the data in question most of the time (a sketch of that approach follows below). Note that copy-on-write file systems may not release freed extents back to the free-space pool for new allocation for many days or weeks, until the next time a garbage collection routine fires or a snapshot is deleted. In that situation, filling free space with randomness will not securely delete the files in question.

If all this is a problem, use FAT32 as your file system: it's very simple, and rewriting data on it really does rewrite the same data on storage (though note that some storage media, e.g. SSDs, are highly likely not to rewrite data in place either; they also write modifications to new storage and garbage-collect freed extents later).
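A sketch of the free-space-filling approach; the scratch path is a placeholder, and a real implementation would fsync the file before deleting it (see the comments below for a TRIM-aware refinement):

```cpp
#include <cstdio>
#include <fstream>
#include <random>
#include <vector>

// Fill all free space on the volume holding scratch_path with random
// data, then delete the scratch file. Overwrites freed extents only if
// the file system actually reuses them (not guaranteed on CoW designs).
void fill_free_space(const char* scratch_path)
{
    std::vector<char> chunk(1 << 20);           // 1 MiB of noise
    std::mt19937 rng(std::random_device{}());
    for (char& c : chunk)
        c = static_cast<char>(rng());

    std::ofstream out(scratch_path, std::ios::binary);
    while (out.write(chunk.data(), chunk.size()))
        ;                                       // stops once the volume is full
    out.close();
    std::remove(scratch_path);                  // release the space again
}
```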

  • I was hoping I could stay at the high level, even though the SDelete documentation had previously hinted to me that I couldn't. And thanks for your answer: I usually don't "accept" single answers because it seems unnecessary; however, since you've been down-voted by someone unknown, I might as well do it. Commented Dec 12, 2016 at 18:07
  • No doubt one of my many fans from boost-dev; it got downvoted shortly after I said there that I'd replied here. The closest portable secure delete algorithm I know of is to delete all the files you want gone, create a file filling all free space, then, if your device supports TRIM, simply fsync and delete the file. If TRIM isn't supported, fill with zeros, fsync, then delete. That should securely delete most of the data most of the time, and on a TRIM-capable device it's fairly quick too. Thanks for the answer accept. Commented Dec 13, 2016 at 16:41
