13

My machine is running Ubuntu 10.10, and I'm using the standard GNU C library. I was under the impression that printf flushed the buffer whenever there was a newline in the format string, but the following code repeatedly bucks that trend. Could someone clarify why the buffer is not being flushed?

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main()
{
    int rc;
    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");

    open("newfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    printf("WHAT?\n");
    fprintf(stderr, "I SAID ERROR\n");

    rc = fork();

    if (rc == 0)
    {
        printf("SAY AGAIN?\n");
        fprintf(stderr, "ERROR ERROR\n");
    }
    else
    {
        wait(NULL);
    }

    printf("BYE\n");
    fprintf(stderr, "HI 2\n");

    return 0;
}

The contents of newfile.txt after running this program are as follows.

HI 1
WHAT?
SAY AGAIN?
BYE
HI 1
WHAT?
BYE

3 Answers

29

No, the standard says that stdout is initially fully buffered if the output device can be determined to be a non-interactive one.

It means that if you redirect stdout to a file, it won't flush on newline. If you want to force it to be line buffered, use setbuf or setvbuf.

The relevant part of C99, 7.19.3 Files, paragraph 7, states:

At program startup, three text streams are predefined and need not be opened explicitly - standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.

Just keep in mind section 5.1.2.3/6:

What constitutes an interactive device is implementation-defined.

1
  • Thanks, I needed to know exactly when it was buffered and when it wasn't. Thanks a lot. Commented Mar 8, 2011 at 6:33
3

It is flushed if the output device is an interactive one, e.g., a terminal.

You have to flush the output buffer yourself when the output device can be determined to be non-interactive, e.g., a file. A newline does not do that automatically.

For details see paxdiablo's answer.

1
  • 1
    It does when I'm outputting to the terminal. See paxdiablo's answer. Commented Mar 8, 2011 at 6:36
2

You've got a strange sense of humor. :)

int main()
{
    int rc;
    close(1);
    close(2);
    printf("HI 1\n");
    fprintf(stderr, "ERROR\n");

You close the file descriptors used for stdout and stderr, and then immediately try to use the C stdout and stderr FILE streams. Not a great idea; I'm not sure what the C library will do to report the error to you, but crashing would be one acceptable possibility.

That oddity aside, when you're using the standard IO stream functions to write, the buffering depends in part upon the destination. If you're writing to a terminal, then usual behavior is line buffering. If you're writing to a pipe, a file, or a socket, then the usual behavior is block buffering. You can change the buffering behavior with the setvbuf(3) function. Full details of the buffering behavior are in the manpage.

5
  • It's not a sense of humour; I'm trying to see how the C library does things, so I put those lines there to see what would happen. Interestingly, the program does not crash, but simply does not output anything. It could of course be outputting the data to a black hole of some sort. I'm not sure how things really work. Commented Mar 8, 2011 at 6:36
  • 2
    Crashing is not an acceptable possibility. close is a POSIX function, and POSIX specifies what happens in the case the file descriptor associated with a FILE is not valid: EBADF. Commented Mar 8, 2011 at 6:38
  • Thanks @R.. I read manpages for a few minutes to figure out what it does, but couldn't spot the consequences of fighting against libc. :)
    – sarnold
    Commented Mar 8, 2011 at 6:40
  • 2
    Closing the underlying file descriptor is actually a trick I once developed as a way of "forcing" a FILE to enter "error status" (ferror returning non-zero) in code that had no other way of indicating errors to the caller except via the status of the FILE passed. Actually I used dup2 to save the old open file, shuffle a read-only /dev/null descriptor in its place, write and flush to generate an error, then put things back in order and returned - this made it thread-safe and avoided messing up the caller's state. Commented Mar 8, 2011 at 6:55
  • @R.. Hah! That's damned clever. :D
    – sarnold
    Commented Mar 8, 2011 at 7:00
