
I've got your usual static site, where the server grabs .html files and sends them.

I understand the importance of Transfer-Encoding: chunked for dynamic server pages, since that's what it was designed for. The speedup can be pretty incredible. But is it the same speed increase for static files? Do browsers already progressively render & fetch subresources for responses that use Content-Length, as the file arrives over the wire?

I've got some seriously enormous HTML (documents in the hundred-page range), so progressive HTML processing would be crucial. (Kind of like how the WHATWG delivers the monolithic single-page HTML5 spec.)

1 Answer


Short answer: Yes, browsers do progressively render content sent with the Content-Length header. In fact, the browser does a bit less computing if it has the Content-Length header, since it knows up front how long the document is, versus having to parse the byte stream for chunk-size markers.
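To make that extra work concrete, here is a minimal decoder sketch in Python (the function name `decode_chunked` is mine, and real clients also handle chunk extensions and trailers, which this skips): for a chunked body the client must find and parse a hex size line before every chunk, whereas with Content-Length it just counts bytes down to zero.

```python
def decode_chunked(stream: bytes) -> bytes:
    """Minimal chunked-transfer decoder (sketch; ignores extensions/trailers)."""
    body, pos = b"", 0
    while True:
        # Each chunk starts with its size in hex, terminated by CRLF.
        eol = stream.index(b"\r\n", pos)
        size = int(stream[pos:eol], 16)
        if size == 0:                # "0\r\n\r\n" marks the end of the body
            return body
        start = eol + 2
        body += stream[start:start + size]
        pos = start + size + 2       # skip the CRLF after the chunk data

# A body split into two chunks decodes back to the original bytes:
decoded = decode_chunked(b"5\r\nHello\r\n6\r\n world\r\n0\r\n\r\n")
```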

The Content-Length header (if any) must be sent before any of the content is sent. Therefore, the server must know the length of the document before sending any of the document content.
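As a sketch of why that ordering is easy for static files, here is roughly what a static file server does (illustrative Python only; `build_response` is a hypothetical name, not any server's real API): it can stat the file first, so the length is known before the first byte of the body goes out.

```python
import os

def build_response(path: str) -> bytes:
    """Build a complete HTTP/1.1 response for a static file (sketch)."""
    length = os.path.getsize(path)   # known up front for a file on disk
    head = (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Length: {length}\r\n"
        "Content-Type: text/html\r\n"
        "\r\n"
    ).encode()
    with open(path, "rb") as f:
        return head + f.read()       # full header block first, then the body
```

For dynamic content there is no file to stat, which is exactly where this approach breaks down.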

Chunked encoding is faster for dynamic content only. If a server could only use the Content-Length header, then for dynamic content it would need to finish generating the document before sending out any content at all. This can force the client to wait, possibly for a long time, without seeing any of the document.

Chunked encoding solves this by letting the server omit the Content-Length header entirely: the body is sent as a series of length-prefixed chunks instead, each of which can go out as soon as it is generated.
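A minimal encoder sketch in Python shows the wire format (the name `chunk_encode` is mine): each chunk is its size in hex, CRLF, the data, CRLF, and a zero-size chunk terminates the body.

```python
def chunk_encode(parts) -> bytes:
    """Encode an iterable of byte strings as a chunked HTTP body (sketch)."""
    out = b""
    for part in parts:
        # Size in hex, CRLF, data, CRLF -- one frame per chunk.
        out += f"{len(part):x}\r\n".encode() + part + b"\r\n"
    return out + b"0\r\n\r\n"        # zero-size chunk ends the body
```

Because each chunk carries its own length, the server never needs to know the total size in advance.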

  • Years later, I've also observed that even static file servers like nginx sometimes use chunked Transfer-Encoding: when they compress a file on the fly with gzip or whatever, they can't know the final Content-Length once the to-be-compressed file exceeds the first compression buffer.
    – Tigt
    Commented Nov 27, 2022 at 23:06
