Docker network performance in
the public cloud
Arjan Schaaf - Luminis Technologies
container.camp London
September 11th 2015
Cloud RTI
• Luminis Technologies
• Founded in The Netherlands
• amdatu.com PaaS
• both public and private clouds
• cloud provider independent
Cloud RTI
• CoreOS
• Docker
• Kubernetes
• Load balancing, Data Stores, ELK
• Highly available, scalable applications with centralised
logging, monitoring and metrics
Choose your cloud wisely
• comparing cloud VMs based on price or hardware
specification alone isn’t enough
• cloud providers throttle their VMs differently
• don’t trust specifications on ‘paper’
Azure vs AWS
AZURE                                         AWS
INSTANCE TYPE  PRICE   NETWORK                INSTANCE TYPE  PRICE   NETWORK
A0             $0.018  5 Mbps                 t2.micro       $0.014  Low to Moderate
A1             $0.051  100 Mbps               t2.medium      $0.056  Low to Moderate
D1             $0.084  unknown                m4.large       $0.139  Moderate
D2             $0.168  unknown                m4.xlarge      $0.278  High
A8             $1.97   40 Gbit/s InfiniBand   m4.10xlarge    $2.78   10 Gbit/s
Native Network Test Setup
• qperf: short-running test
• iperf3: longer-running test using parallel connections
qperf server container
# expose qperf's listener port (4000) and the extra data port (4001)
docker run -dti -p 4000:4000 -p 4001:4001 arjanschaaf/centos-qperf -lp 4000
qperf client container
# -lp: server listener port, -ip: data port; run the TCP bandwidth and latency tests
docker run -ti --rm arjanschaaf/centos-qperf <ip address> -lp 4000 -ip 4001 tcp_bw tcp_lat
iperf3 server container
# start iperf3 in server mode on its default port 5201
docker run -dti -p 5201:5201 arjanschaaf/centos-iperf3 --server
iperf3 client container
# -t 300: run for 300 seconds, -P 128: use 128 parallel connections
docker run -ti --rm arjanschaaf/centos-iperf3 -c <ip address> -t 300 -P 128
Native Network Test Results
[Chart: bandwidth, scale 0–1400, per instance type (A0, A1, D1, D2, A8, t2.micro, t2.medium, m4.large, m4.xlarge, m4.10xlarge); series: qperf, iperf3]
Native Network Test Results
[Chart: bandwidth, zoomed to 0–500, for A0, A1, D1, D2, A8, t2.micro, t2.medium, m4.large and m4.xlarge; series: qperf, iperf3]
Native Network Test Results
[Chart: qperf latency, scale 0–500, per instance type (A0, A1, D1, D2, A8, t2.micro, t2.medium, m4.large, m4.xlarge, m4.10xlarge)]
Docker Networking
• Connect containers over the host interface (use
ambassadors, sketched below!)
• Use an SDN to connect your Docker cluster nodes
• Weave
• Flannel
• Project Calico
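A minimal sketch of the host-interface option with an ambassador, assuming the svendowideit/ambassador image from the Docker documentation of this era; the container names and forwarded port are hypothetical, and a real qperf run would need the data port (4001) forwarded the same way:
# host A: run the service and publish it on the host interface
docker run -d --name qperf-server -p 4000:4000 arjanschaaf/centos-qperf -lp 4000
# host B: an ambassador forwards local port 4000 to host A
docker run -d --name qperf-ambassador --expose 4000 \
  -e QPERF_PORT_4000_TCP=tcp://<host-A-ip>:4000 svendowideit/ambassador
# host B: the client links to the local ambassador instead of a hard-coded remote IP
docker run -ti --rm --link qperf-ambassador:qperf \
  arjanschaaf/centos-qperf qperf -lp 4000 -ip 4001 tcp_bw tcp_lat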
Before Docker 1.7
• Approach depended on the SDN
• replace the Docker bridge (sketched below)
• proxy in front of the Docker daemon
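As an illustration of the bridge-replacement route, the integration Flannel documents: flanneld writes its leased subnet to a file, and the Docker daemon is started with its bridge configured from it (a sketch assuming flannel's standard /run/flannel/subnet.env location):
# flanneld leases a subnet per host and records it in subnet.env
source /run/flannel/subnet.env
# start the Docker daemon with its bridge IP and MTU taken from flannel
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}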
Docker libnetwork
• Announced along with Docker 1.7 as an experimental
feature
• Networking Plugins: batteries included but swappable
• Included batteries are based on Socketplane (usage
sketched after this list)
• Other plugins announced by: Weave, Project Calico,
Cisco, VMware and others
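With libnetwork the driver is chosen per network; a sketch using the overlay-driver syntax that stabilised after the experimental 1.7 release (the network name is hypothetical):
# create a multi-host network backed by the built-in (Socketplane-based) overlay driver
docker network create -d overlay testnet
# containers attached to the network reach each other without published ports
docker run -dti --net=testnet --name qperf-server arjanschaaf/centos-qperf -lp 4000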
Choose your SDN wisely
• Functional features like encryption & DNS
• Support for libnetwork, Kubernetes, etc.
• Implementations can be fundamentally different
• overlay networks like Flannel & Weave
• different overlay backend implementations (for example
UDP)
• L2/L3 based networks like Project Calico
Flannel
• Created by CoreOS
• Easy to set up
• Different backends (configured as sketched below)
• UDP
• VXLAN
• AWS VPC (uses VPC routing table)
• GCE (uses Network routing table)
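Flannel reads its backend choice from a single JSON document in etcd; a minimal sketch selecting the VXLAN backend (the network range is an example value):
# flannel configuration lives under a well-known etcd key
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'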
Weave
• Used Weave 1.0.3, 1.1 released this week
• DNS
• Proxy-based approach (sketched below)
• Different backends
• pcap (default)
• VXLAN (fast-datapath-preview)
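With the proxy-based approach the Docker client is pointed at the Weave proxy, so containers are attached and DNS-registered transparently; a sketch with command names per the Weave 1.0.x docs, container name hypothetical:
# start the Weave router, weaveDNS and the Docker API proxy, peering with the first host
weave launch <host-A-ip> && weave launch-dns && weave launch-proxy
# point the local Docker client at the proxy so new containers are attached automatically
eval $(weave env)
# this container is now reachable from peers as qperf-server.weave.local
docker run -dti --name qperf-server arjanschaaf/centos-qperf -lp 4000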
Project Calico
• Uses vRouters connected over BGP routes
• No additional overlay when running on an L2 or L3
network (think datacentre!)
• Won’t run on public clouds like AWS without an IPIP
tunnel (enabled as sketched below)
• Extensive and simple network policies (tenant isolation!)
• Very promising integration with Kubernetes
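On AWS, Calico's IPIP encapsulation is switched on per IP pool; a sketch with the calicoctl CLI of this era (the pool CIDR is an example value):
# create an IP pool with IP-in-IP encapsulation and NAT for outbound traffic
calicoctl pool add 192.168.0.0/16 --ipip --nat-outgoing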
Docker Network Test Setup
• exactly the same as the “native” test, but this time use
the IP address or DNS name of the container (see the
sketch below)!
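In practice that means asking Docker for the container's address first; a minimal sketch assuming the server container is named qperf-server (with Weave or Calico the address may live elsewhere in the inspect output, or come from DNS instead):
# look up the SDN-assigned address of the server container
SERVER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' qperf-server)
# run the same client as before, but against the container address
docker run -ti --rm arjanschaaf/centos-qperf $SERVER_IP -lp 4000 -ip 4001 tcp_bw tcp_lat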
Docker Network Test Results
[Chart: qperf bandwidth, scale 0–1400, for t2.micro, t2.medium, m4.large, m4.xlarge, m4.10xlarge; series: native, flannel UDP, flannel vxlan, weave pcap, calico]
Docker Network Test Results
[Chart: qperf bandwidth, zoomed to 0–300, for t2.micro, t2.medium, m4.large, m4.xlarge; series: native, flannel UDP, flannel vxlan, weave pcap, calico]
Docker Network Test Results
[Chart: iperf3 bandwidth, scale 0–1200, for t2.micro, t2.medium, m4.large, m4.xlarge, m4.10xlarge; series: native, flannel UDP, flannel vxlan, weave pcap, calico]
Docker Network Test Results
[Chart: iperf3 bandwidth, zoomed to 0–100, for t2.micro, t2.medium, m4.large, m4.xlarge; series: native, flannel UDP, flannel vxlan, weave pcap, calico]
Docker Network Test Results
[Chart: qperf latency, scale 0–400, for t2.micro, t2.medium, m4.large, m4.xlarge, m4.10xlarge; series: native, flannel UDP, flannel vxlan, weave pcap, calico]
Native vs SDN performance
               FLANNEL UDP  FLANNEL VXLAN  WEAVE PCAP  CALICO
INSTANCE TYPE  IPERF        IPERF          IPERF       IPERF
t2.micro       -16%         -2%            -14%        -14%
t2.medium      -1%          -1%            -3%         -3%
m4.large       -1%          -1%            -1%         -1%
m4.xlarge      -0%          -1%            -1%         -1%
m4.10xlarge    -55%         -20%           -79%        -32%
Native vs SDN performance
& CPU load (client + server)
               FLANNEL UDP          FLANNEL VXLAN        WEAVE PCAP           CALICO
INSTANCE TYPE  IPERF  C      S      IPERF  C      S      IPERF  C      S      IPERF  C     S
t2.micro       -16%   62.7%  29%    -2%    11.7%  23.2%  -14%   59.7%  89.5%  -14%   26%   57%
t2.medium      -1%    28.7%  20.2%  -1%    20.6%  18.7%  -3%    52.6%  33.1%  -3%    17%   37%
m4.large       -1%    15.4%  12.7%  -1%    10%    10%    -1%    34.1%  24.8%  -1%    21%   21%
m4.xlarge      -0%    9.4%   7.9%   -1%    6.6%   7.3%   -1%    22.9%  18.9%  -1%    12%   10%
m4.10xlarge    -55%   2.8%   5.0%   -20%   2.7%   3.4%   -79%   14.8%  13.5%   -32%  3%    4%
(C = client CPU load, S = server CPU load)
CPU load compared to native test results
               FLANNEL UDP   FLANNEL VXLAN   WEAVE PCAP    CALICO
INSTANCE TYPE  C      S      C      S        C      S      C      S
t2.medium      95%    57%    40%    45%      258%   157%   15%    184%
m4.large       108%   46%    35%    15%      361%   185%   177%   140%
m4.xlarge      92%    44%    35%    33%      367%   244%   141%   82%
(C = client, S = server; CPU load relative to the native test)
Conclusion
• Happy with our choice of Flannel with the VXLAN backend
• Interested in Project Calico in combination with
Kubernetes
Conclusion
• synthetic tests are a great starting point
• don’t forget to validate the results with “real life” load
tests on your application(s)
Links
• http://weave.works
• http://blog.weave.works/2015/06/12/weave-fast-datapath
• http://coreos.com/flannel
• http://www.projectcalico.org
• http://linux.die.net/man/1/qperf
• http://github.com/esnet/iperf
@arjanschaaf
arjanschaaf.github.io
www.luminis.eu