I can't find any information on whether buffering is already implicitly done out of the box when one is writing a file with either fprintf or fwrite. I understand that this might be an implementation/platform-dependent feature. What I'm interested in is whether I can at least expect it to be implemented efficiently on modern popular platforms such as Windows, Linux, or Mac OS X?
AFAIK, buffering for I/O routines is usually done on two levels:
- Library level: this could be the C standard library, the Java SDK (BufferedOutputStream), etc.
- OS level: modern platforms extensively cache/buffer I/O operations.
My question is about #1, not #2 (which I already know to be true). In other words, can I expect C standard library implementations on all modern platforms to take advantage of buffering?
If not, is manually creating a buffer (with a cleverly chosen size) and flushing it on overflow a good solution to the problem?
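To make the idea concrete, here is a minimal sketch of what I mean by a manual buffer with flush-on-overflow (my_write, my_flush, and MY_BUF_SIZE are made-up names, and the size is illustrative, not a recommendation):

```c
#include <stdio.h>
#include <string.h>

#define MY_BUF_SIZE 65536  /* hypothetical "cleverly chosen" size */

static char my_buf[MY_BUF_SIZE];
static size_t my_len = 0;

/* Push whatever has accumulated in the buffer out to the stream. */
static void my_flush(FILE *f)
{
    if (my_len > 0) {
        fwrite(my_buf, 1, my_len, f);
        my_len = 0;
    }
}

/* Append data to the buffer, flushing first if it would overflow. */
static void my_write(FILE *f, const char *data, size_t n)
{
    if (my_len + n > MY_BUF_SIZE)
        my_flush(f);
    if (n >= MY_BUF_SIZE) {       /* oversized writes bypass the buffer */
        fwrite(data, 1, n, f);
        return;
    }
    memcpy(my_buf + my_len, data, n);
    my_len += n;
}
```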
Conclusion
Thanks to everyone who pointed out functions like setbuf and setvbuf. These are exactly the evidence I was looking for to answer my question. A useful extract:
All files are opened with a default allocated buffer (fully buffered) if they are known to not refer to an interactive device. This function can be used to either set a specific memory block to be used as buffer or to disable buffering for the stream.
The default streams stdin and stdout are fully buffered by default if they are known to not refer to an interactive device. Otherwise, they may either be line buffered or unbuffered by default, depending on the system and library implementation. The same is true for stderr, which is always either line buffered or unbuffered by default.
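Based on that, a minimal sketch of forcing full buffering with an explicitly sized buffer might look like this (the file name, the 1 MiB size, and the error handling are illustrative only):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("out.dat", "wb");   /* file name is illustrative */
    if (!f)
        return EXIT_FAILURE;

    static char buf[1 << 20];
    /* _IOFBF = fully buffered; setvbuf must be called before any other
       operation on the stream, and returns 0 on success. */
    if (setvbuf(f, buf, _IOFBF, sizeof buf) != 0)
        fputs("setvbuf failed; default buffering remains\n", stderr);

    fprintf(f, "hello, buffered world\n");
    fclose(f);                          /* flushes the buffer */
    return EXIT_SUCCESS;
}
```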
The simpler setbuf interface can be used as well.
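For completeness, setbuf either installs a BUFSIZ-sized buffer or, given NULL, turns buffering off entirely; disabling buffering on stdout can be handy when debugging:

```c
#include <stdio.h>

int main(void)
{
    setbuf(stdout, NULL);   /* equivalent to setvbuf(stdout, NULL, _IONBF, 0) */
    printf("this appears immediately, without waiting for a flush\n");
    return 0;
}
```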