
proposal: net/http: configurable Server.WriteBufferSize #13870

Open

felixhao opened this issue Jan 8, 2016 · 9 comments

@felixhao

felixhao commented Jan 8, 2016

Hi, in most of our cases the HTTP response is larger than 4<<10 bytes, so we need to be able to set the bufio read/write buffer sizes and the connection's SNDBUF/RCVBUF.

We also think this change is appropriate; does Go plan to support it?

@bradfitz
Contributor

bradfitz commented Jan 8, 2016

Can you report any performance numbers with different buffer sizes?

@bradfitz bradfitz added this to the Unplanned milestone Jan 8, 2016
@felixhao
Author

felixhao commented Jan 9, 2016

Yeah, there are sample test results below; we write 10 KB per response.
env: Debian GNU/Linux 8.2, 4 cores, 4 GB RAM

test code

package httpbuf

import (
    "io/ioutil"
    "log"
    "net/http"
    "net/http/httptest"
    "testing"
)

var bigBs = make([]byte, 10<<10) // 10 KB response body

func BenchmarkBigWrite(b *testing.B) {
    s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write(bigBs)
    }))
    defer s.Close()
    b.SetParallelism(100)
    b.ResetTimer()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            res, err := http.Get(s.URL)
            if err != nil {
                log.Fatal(err)
            }
            _, err = ioutil.ReadAll(res.Body)
            res.Body.Close()
            if err != nil {
                log.Fatal(err)
            }
        }
    })
}

Standard net/http, three runs:

go test -test.bench=".*" -benchmem -benchtime=5s
BenchmarkBigWrite-4   100000         79756 ns/op       37929 B/op         72 allocs/op
ok      mytest/httpbuf  8.857s
BenchmarkBigWrite-4   100000         78570 ns/op       37948 B/op         72 allocs/op
ok      mytest/httpbuf  8.665s
BenchmarkBigWrite-4   100000         79072 ns/op       37876 B/op         72 allocs/op
ok      mytest/httpbuf  8.718s

Change net/http/server.go line 479 from

bw := newBufioWriterSize(checkConnErrorWriter{c}, 4<<10)

to

bw := newBufioWriterSize(checkConnErrorWriter{c}, 10<<10)

go test -test.bench=".*" -benchmem -benchtime=5s
BenchmarkBigWrite-4   100000         69645 ns/op       39890 B/op         73 allocs/op
ok      mytest/httpbuf  7.692s
BenchmarkBigWrite-4   100000         69816 ns/op       39961 B/op         73 allocs/op
ok      mytest/httpbuf  7.702s
BenchmarkBigWrite-4   100000         67768 ns/op       39856 B/op         73 allocs/op
ok      mytest/httpbuf  7.516s
@nhooyr
Contributor

nhooyr commented Sep 29, 2018

Related: #22618

@nhooyr
Contributor

nhooyr commented May 17, 2019

You could also just wrap the response writer with your own bufio.Writer. More allocation, but less exposed API, which I think is a fair tradeoff.
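
A minimal sketch of that wrapping (the route, handler, and the 32 KB size are illustrative assumptions, not from this thread):

package main

import (
    "bufio"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // Wrap the ResponseWriter in a larger buffer; 32 KB is an arbitrary
    // example size, not a recommendation.
    bw := bufio.NewWriterSize(w, 32<<10)
    defer bw.Flush() // flush any buffered bytes before the handler returns

    // Many small writes are coalesced in bw instead of each hitting the
    // server's internal 4 KB bufio.Writer.
    for i := 0; i < 1024; i++ {
        bw.Write([]byte("chunk of response data\n"))
    }
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

The extra allocation mentioned above is the per-request bufio.Writer and its buffer, traded against not widening the http.Server API.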

@nhooyr
Contributor

nhooyr commented Jun 7, 2020

Also, the ability to adjust the Transport's buffer sizes was only added because the Transport performs the io.Copy internally, so you can't adjust that buffer yourself.

With the server, however, you can just wrap the ResponseWriter as I mentioned above.

@bolkedebruin

Hello! I'm the implementor of a server that tunnels remote desktop connections over WebSocket (in this case Gorilla). I was facing performance problems, particularly when high-latency, high-bandwidth connections were involved.

I fired up Wireshark to see on which end the data flow was restricted. It turned out that:

  • the bandwidth of each connection was quite steady
  • there was no significant amount of dropped packets/retries, so it was probably not limited by congestion control
  • the advertised window in the ACK packets coming back from a client was sufficiently generous (around 300 KB)
  • however, when sending, there were always only around ~4 KB of data in flight. Given the high latency to the clients, the server spent most of its time waiting for data to be ACKed by clients, then immediately sent out a burst of new packets, then went back to waiting for enough outstanding bytes to be ACKed.

This is very likely due to a small TCP send buffer, since that would limit the amount of outstanding bytes the TCP stack can keep track of.

For high-bandwidth, high-latency connections it is extremely beneficial to have a large(r) TCP receive buffer at the OS level, and I need to be able to set it per client (i.e. skip it when the connection is not high latency). Wrapping in bufio, as was suggested, is not sufficient, since the OS will just do as it wants.

So the need is to be able to set the OS-level receive/send buffers (not the internal bufio buffer) per connection, which is not exposed in the API.
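
One workaround that is possible today, sketched here purely as an illustration (the bufferedListener type and the 1 MB size are mine, and note the next comment's caveat that buffers ideally need to be sized before the handshake): a net.Listener wrapper that calls SetReadBuffer/SetWriteBuffer on each accepted connection.

package main

import (
    "net"
    "net/http"
)

// bufferedListener adjusts the kernel socket buffers on every accepted
// connection. bufSize is an illustrative value, not a recommendation.
type bufferedListener struct {
    net.Listener
    bufSize int
}

func (l bufferedListener) Accept() (net.Conn, error) {
    c, err := l.Listener.Accept()
    if err != nil {
        return nil, err
    }
    if tc, ok := c.(*net.TCPConn); ok {
        tc.SetReadBuffer(l.bufSize)  // SO_RCVBUF
        tc.SetWriteBuffer(l.bufSize) // SO_SNDBUF
    }
    return c, nil
}

func main() {
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
        panic(err)
    }
    srv := &http.Server{}
    srv.Serve(bufferedListener{Listener: ln, bufSize: 1 << 20}) // 1 MB buffers
}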

@hmh

hmh commented Apr 16, 2021

For TCP high throughput, you must set large-enough TCP buffers before the socket connects, because the relevant TCP parameters (window scale) are set once during the initial TCP handshake and are immutable afterwards.

It should be possible nowadays in Go without resorting to hackish, hideous workarounds, but I'm sorry I can't point you to documentation on how best to do it.
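
One such approach, sketched under Linux-oriented assumptions (net.ListenConfig.Control runs on the raw fd before bind/listen, on Linux accepted sockets inherit the listening socket's buffer sizes, and the 1 MB value is illustrative):

package main

import (
    "context"
    "net"
    "net/http"
    "syscall"
)

func main() {
    const bufSize = 1 << 20 // illustrative: 1 MB

    lc := net.ListenConfig{
        // Control runs after socket() but before bind()/listen(), so these
        // options are in place before any handshake happens.
        Control: func(network, address string, c syscall.RawConn) error {
            var serr error
            err := c.Control(func(fd uintptr) {
                serr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF, bufSize)
                if serr == nil {
                    serr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_SNDBUF, bufSize)
                }
            })
            if err != nil {
                return err
            }
            return serr
        },
    }

    ln, err := lc.Listen(context.Background(), "tcp", ":8080")
    if err != nil {
        panic(err)
    }
    http.Serve(ln, nil)
}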

@oddmario

Hello 👋

I have made a patch that adds a configurable WriteBufferSize option to the http.Server struct. This should solve the issue for now.

Feel free to check it out at https://github.com/oddmario/go-http-server-custom-write-buffer-patch :)

@neild
Contributor

neild commented Jul 8, 2024

I tried reproducing the benchmark results from #13870 (comment), but in my test changing the write buffer size had no observable impact:

$ benchstat /tmp/bench.[01]
goos: linux
goarch: amd64
pkg: _
cpu: AMD EPYC 7B12
           │ /tmp/bench.0 │         /tmp/bench.1         │
           │    sec/op    │   sec/op     vs base         │
BigWrite-8    59.48µ ± 7%   57.11µ ± 3%  ~ (p=0.065 n=8)

A bufio.Writer skips its buffer entirely and passes the write straight through to the underlying writer when a single write is larger than the buffer. As such, I'd expect the impact of changing the buffer size to be minimal when making a single large write from a handler, as in this benchmark.
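
A small demonstration of that pass-through behavior (the countingWriter helper is mine, for illustration):

package main

import (
    "bufio"
    "fmt"
)

// countingWriter records the size of each Write call it receives.
type countingWriter struct{ calls []int }

func (w *countingWriter) Write(p []byte) (int, error) {
    w.calls = append(w.calls, len(p))
    return len(p), nil
}

func main() {
    cw := &countingWriter{}
    bw := bufio.NewWriterSize(cw, 4<<10) // 4 KB buffer, the server's default size

    big := make([]byte, 10<<10) // one 10 KB write, as in the benchmark handler
    bw.Write(big)
    bw.Flush()

    // The 10 KB write bypasses the empty 4 KB buffer and reaches the
    // underlying writer as a single call, so the buffer size doesn't matter.
    fmt.Println(cw.calls) // [10240]
}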

Of course, that's just my test system; maybe there's a more observable difference elsewhere or with a different test. But I'd like to better understand when changing the write buffer size has an impact and why.
