Containing Container Chaos with
Kubernetes
Bret McGowen
Google
@bretmcg
Carter Morgan
Google
@_askcarter
Workshop setup: http://github.com/bretmcg/kubernetes-workshop
2@kubernetesio @bretmcg @_askcarter
Agenda
09:00 - 10:30 - Containers and Kubernetes overview
10:30 - 10:45 - BREAK
10:45 - 12:00 - Kubernetes 101
12:00 - 01:00 - Lunch!
01:00 - 02:30 - Kubernetes in Production
02:30 - 02:45 - BREAK
02:45 - 04:00 - Kubernetes in Production, cont’d
3
What’s in this for you...
4
Let's go back in time...
5
Shared machines
Chroots, ulimits, and nice
Noisy neighbors: a real problem
Limited our ability to share
The fleet got larger
Inefficiency hurts more at scale
Share harder!
ca. 2002 App-specific machine pools
Inefficient and painful to
manage
Good fences make good
neighbors
6
Everything we do is about
isolation
Namespacing is secondary
cf. github.com/google/lmctfy
We evolved our system, made
mistakes, learned lessons
Docker
The time is right to share our
experiences, and to learn from
yours
ca. 2006 Google developed cgroups
Inescapable resource isolation
Enables better sharing
7
job hello_world = {
runtime = { cell = 'ic' } // Cell (cluster) to run in
binary = '.../hello_world_webserver' // Program to run
args = { port = '%port%' } // Command line parameters
requirements = { // Resource requirements
ram = 100M
disk = 100M
cpu = 0.1
}
replicas = 5 // Number of tasks
}
Borg - Developer View
8
[Borg architecture diagram: a config file (submitted via borgcfg) and web browsers talk to the replicated BorgMaster (UI shard, link shard, scheduler, Paxos-backed persistent store), which schedules the binary onto Borglets running on each machine.]
Borg
What just
happened?
9
[Image: a Google datacenter filled with "Hello world!" tasks, showing the job replicated across the fleet. Image by Connie Zhou.]
10
Developer View
11
Data center as one machine
Machines are just resource boundaries
12@kubernetesio @bretmcg @_askcarter
The App (Monolith)
nginx
monolith
13@kubernetesio @bretmcg @_askcarter
The App (Microservices)
nginx
hello auth
14
Containers
15@kubernetesio @bretmcg @_askcarter
Old Way: Shared Machines
No isolation
No namespacing
Common libs
Highly coupled apps and OS
kernel
libs
app
app app
app
16@kubernetesio @bretmcg @_askcarter
Old Way: Virtual Machines
Some isolation
Inefficient
Still highly coupled to the guest OS
Hard to manage
app
libs
kernel
libs
app app
kernel
app
libs
libs
kernel
kernel
17@kubernetesio @bretmcg @_askcarter
New Way: Containers
libs
app
kernel
libs
app
libs
app
libs
app
18@kubernetesio @bretmcg @_askcarter
But what ARE they?
Containers share the same operating system kernel
Container images are stateless and contain all dependencies
▪ static, portable binaries
▪ constructed from layered filesystems
Containers provide isolation (from each other and from the host)
Resources (CPU, RAM, Disk, etc.)
Users
Filesystem
Network
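As a rough illustration of that isolation (not from the workshop materials; the image, ports, and limits here are arbitrary), a container runtime such as Docker lets you cap what a single container can consume:

$ docker run -d --name hello --cpus=0.1 --memory=100m -p 8080:80 nginx

The container gets its own process, filesystem, and network view, but it is still just a process on the shared kernel, throttled to the CPU and RAM it was given.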
19
Why containers?
• Performance
• Repeatability
• Isolation
• Quality of service
• Accounting
• Portability
A fundamentally different way of
managing applications
late binding vs. early binding
Images by Connie Zhou
20
Packaging and Distributing Apps demo
21
Lab
Workshop setup
and
Containerizing your application
http://github.com/bretmcg/kubernetes-workshop
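The lab covers the details, but the basic packaging flow looks roughly like this (the image name, tag, and ports are placeholders, not the workshop's exact ones):

$ docker build -t gcr.io/<your-project>/monolith:1.0.0 .
$ docker push gcr.io/<your-project>/monolith:1.0.0
$ docker run -d -p 8080:80 gcr.io/<your-project>/monolith:1.0.0

Because the image bundles the binary and all of its dependencies, the same artifact runs identically on a laptop, a CI runner, or a cluster node.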
22
But that's just one machine!
Discovery
Scaling
Security
Monitoring Configuration
Scheduling
Health
23
We’ve been there...
Now that we have containers...
Isolation: Keep jobs from interfering with each other
Scheduling: Where should my job be run?
Lifecycle: Keep my job running
Discovery: Where is my job now?
Constituency: Who is part of my job?
Scale-up: Making my jobs bigger or smaller
Auth{n,z}: Who can do things to my job?
Monitoring: What’s happening with my job?
Health: How is my job feeling?
25@kubernetesio @bretmcg @_askcarter
Kubernetes
Manage applications, not machines
Open source, container orchestrator
Supports multiple cloud and bare-metal
environments
Inspired and informed by Google’s
experiences and internal systems
Design principles
Declarative > imperative: State your desired results, let the system actuate
Control loops: Observe, rectify, repeat
Simple > Complex: Try to do as little as possible
Modularity: Components, interfaces, & plugins
Legacy compatible: Requiring apps to change is a non-starter
Network-centric: IP addresses are cheap
No grouping: Labels are the only groups
Bulk > hand-crafted: Manage your workload in bulk
Open > Closed: Open Source, standards, REST, JSON, etc.
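To make "declarative > imperative" concrete: rather than issuing step-by-step commands, you describe the result you want in a manifest and let the control loops converge on it. A minimal illustration (hello.yaml is a hypothetical manifest, not a workshop file):

# Imperative: tell the cluster what to do, one step at a time
$ kubectl run hello --image=nginx
# Declarative: state the desired result and let the system actuate it
$ kubectl apply -f hello.yaml

If a pod dies or a node disappears, the declarative route self-heals: the controllers notice the drift from the declared state and correct it.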
27
Kubernetes Made Easy demo
28
Pods
29@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
Pod
30@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
Pod
31@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
Pod
nginx
monolith
32@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
and volumes
Pod
nginx
monolith
33@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
and volumes
Pod
nginx
monolith
NFS iSCSI GCE
34@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
and volumes
• Shared namespaces
Pod
nginx
monolith
NFS iSCSI GCE
35@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
and volumes
• Shared namespaces
• One IP per pod
Pod
nginx
monolith
NFS iSCSI GCE
10.10.1.100
36@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
and volumes
• Shared namespaces
• One IP per pod
Pod
nginx
monolith
NFS iSCSI GCE
10.10.1.100
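In manifest form, the pod pictured above might look roughly like this (a sketch in the spirit of the workshop's pods/*.yaml files, not a copy of them; image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
  - name: nginx                     # reverse proxy in front of the app
    image: nginx:1.9                # example tag
    ports:
    - containerPort: 443
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: monolith                  # the application itself
    image: example/monolith:1.0.0   # placeholder image
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /data
  volumes:
  - name: html                      # shared by both containers
    emptyDir: {}

Both containers share the pod's network namespace, so they see the same IP (10.10.1.100 in the diagram) and can talk to each other over localhost; the volume gives them a shared piece of filesystem.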
37
Lab
Creating and managing pods
http://github.com/bretmcg/kubernetes-workshop
38
Health checks
39@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet Pod
app v1
40@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Hey, app v1... You alive?
Node
Kubelet Pod
app v1
41@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet
Nope!
Pod
app v1
42@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
OK, then I’m going to restart you...
Node
Kubelet Pod
app v1
43@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet Pod
44@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet Pod
app v1
45@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet
Hey, app v1... You alive?
Pod
app v1
46@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet
Yes!
Pod
app v1
47@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Node
Kubelet Pod
app v1
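The "are you alive?" conversation above is driven by probes declared on the container: the kubelet calls the liveness probe and restarts the container when it fails, and uses the readiness probe to decide whether to send it traffic. A hedged sketch of the relevant fragment of a container spec (path, port, and timings are illustrative):

    livenessProbe:
      httpGet:
        path: /healthz
        port: 81
      initialDelaySeconds: 5
      periodSeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      httpGet:
        path: /readiness
        port: 81
      initialDelaySeconds: 5
      timeoutSeconds: 1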
48
Lab
Monitoring and health checks
http://github.com/bretmcg/kubernetes-workshop
49
Secrets
50@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet secret
$ kubectl create secret generic tls-certs --from-file=tls/
51@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet pod
$ kubectl create -f pods/secure-monolith.yaml
52@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet
API
Server
Node
Kubelet Pod
Pod
53@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet
API
Server
Node
Kubelet Pod
Pod
secret
54@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet
API
Server
Node
Kubelet Pod
Pod
/etc/tls
secret
55@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet
Node
Kubelet Pod
Pod
/etc/tls
10.10.1.100
secret
API
Server
56@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet
API
Server
Node
Kubelet Pod
Pod
/etc/tls
nginx
10.10.1.100
secret
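Once the tls-certs secret exists, a pod consumes it by declaring a secret volume and mounting it; the kubelet fetches the secret through the API server and materializes it as files under the mount path (/etc/tls in the diagrams). A rough sketch, not the workshop's exact secure-monolith.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: secure-monolith
spec:
  containers:
  - name: nginx
    image: nginx:1.9              # example tag
    ports:
    - containerPort: 443
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls         # cert and key show up here inside the container
  volumes:
  - name: tls-certs
    secret:
      secretName: tls-certs       # the secret created with kubectl above

ConfigMaps work the same way: create one with kubectl create configmap, then surface it as a volume or as environment variables.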
57
Lab
Managing application configurations and secrets
http://github.com/bretmcg/kubernetes-workshop
58
Services
59@kubernetesio @bretmcg @_askcarter
Services
Node1 Node2 Node3
Pod
hello
Service
Pod
hello
Pod
hello
60@kubernetesio @bretmcg @_askcarter
Services
Persistent Endpoint for Pods
Node1 Node2 Node3
Pod
hello
Service
Pod
hello
Pod
hello
61@kubernetesio @bretmcg @_askcarter
Services
Node1 Node2 Node3
Pod
hello
Service
Pod
hello
Pod
hello
Persistent Endpoint for Pods
• Use Labels to
Select Pods
62@kubernetesio @bretmcg @_askcarter
Labels
Arbitrary metadata attached
to Kubernetes objects
Pod
hello
Pod
hello
labels:
version: v1
track: stable
labels:
version: v1
track: test
63@kubernetesio @bretmcg @_askcarter
Labels
selector: “version=v1”
Pod
hello
Pod
hello
labels:
version: v1
track: stable
labels:
version: v1
track: test
64@kubernetesio @bretmcg @_askcarter
Labels
selector: “track=stable”
Pod
hello
Pod
hello
labels:
version: v1
track: stable
labels:
version: v1
track: test
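The same selectors work from the command line, which is handy for slicing a cluster by label (the pod name in the last line is hypothetical):

$ kubectl get pods -l "version=v1"
$ kubectl get pods -l "version=v1,track=stable"
$ kubectl label pods hello-abc123 track=stable

Adding or changing a label takes effect immediately: a service or deployment whose selector now matches (or no longer matches) the pod starts (or stops) counting it as one of its own.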
65@kubernetesio @bretmcg @_askcarter
Services
Persistent Endpoint for Pods
• Use Labels to
Select Pods
• Internal or
External IPs
Node1 Node2 Node3
Pod
hello
Service
Pod
hello
Pod
hello
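A service for the hello pods might be declared roughly like this (a sketch, not the workshop's exact manifest; the ports are illustrative). The selector picks up every pod labeled app: hello, and type: LoadBalancer asks the environment for an external IP where one is available:

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer        # use ClusterIP (the default) or NodePort for internal-only access
  selector:
    app: hello              # must match the pods' labels
  ports:
  - port: 80                # the service's stable port
    targetPort: 8080        # the port the hello containers actually listen on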
66
Lab
Creating and managing services
http://github.com/bretmcg/kubernetes-workshop
67
Recap
68@kubernetesio @bretmcg @_askcarter
Kubernetes
Manage applications, not machines
Open source, container orchestrator
Supports multiple cloud and bare-metal
environments
Inspired and informed by Google’s
experiences and internal systems
69@kubernetesio @bretmcg @_askcarter
machine-1
machine-2
machine-3
frontend middleware backend
Physical Infrastructure
70@kubernetesio @bretmcg @_askcarter
frontend
middleware
backend
Kubernetes API: Unified Compute Substrate
Logical Infrastructure
71@kubernetesio @bretmcg @_askcarter
Goal: Write once, run anywhere*
Don’t force apps to know about concepts
that are cloud-provider-specific
Examples of this:
● Network model
● Ingress
● Service load-balancers
● PersistentVolumes
* approximately
Workload Portability
72@kubernetesio @bretmcg @_askcarter
Top 0.01% of all
GitHub projects
1200+ external
projects based on
k8s
Companies
Contributing
Companies
Using
690+
unique contributors
Community
73@kubernetesio @bretmcg @_askcarter
Pods
Logical Application
• One or more containers
and volumes
• Shared namespaces
• One IP per pod
Pod
nginx
monolith
NFS iSCSI GCE
10.10.1.100
74@kubernetesio @bretmcg @_askcarter
Monitoring and Health Checks
Hey, app v1... You alive?
Node
Kubelet Pod
app v1
75@kubernetesio @bretmcg @_askcarter
Secrets and Configmaps
Kubernetes Master
etcd
API
Server
Node
Kubelet secret
$ kubectl create secret generic tls-certs --from-file=tls/
76@kubernetesio @bretmcg @_askcarter
Services
Persistent Endpoint for Pods
• Use Labels to
Select Pods
• Internal or
External IPs
Node1 Node2 Node3
Pod
hello
Service
Pod
hello
Pod
hello
77@kubernetesio @bretmcg @_askcarter
Labels
Arbitrary metadata attached
to Kubernetes objects
Pod
hello
Pod
hello
labels:
version: v1
track: stable
labels:
version: v1
track: test
Kubernetes in Production
79
Deployments
80@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 1
81@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 3
82@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 3
Pod
hello
Pod
hello
83@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 3
Pod
hello
84@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 3
Pod
hello
Pod
hello
85@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 3
Pod
hello
Pod
hello
Pod
hello
86@kubernetesio @bretmcg @_askcarter
Drive current state towards desired state
Deployments
Node1 Node2 Node3
Pod
hello
app: hello
replicas: 3
Pod
hello
Pod
hello
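"Desired state" is literally a field in the manifest: the deployment controller watches the replicas count and the pod template and keeps reality in line with them, recreating pods when a node goes away as in the pictures. A minimal sketch (the image and the filename below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example/hello:1.0.0   # placeholder image
        ports:
        - containerPort: 80

$ kubectl apply -f hello-deployment.yaml creates it, and $ kubectl scale deployment hello --replicas=5 (or editing the manifest and re-applying) changes the desired state; the controller does the rest.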
87
Lab
Creating and managing deployments
http://github.com/bretmcg/kubernetes-workshop
88
Rolling Updates
89@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
ghost
Pod
app v1
Service
ghost
Pod
app v1
Pod
app v1
90@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
ghost
Pod
app v1
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
91@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
ghost
Pod
app v1
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
92@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
ghost
Pod
app v1
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
93@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
94@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
Pod
app v2
95@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
Pod
app v2
96@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
ghost
Pod
app v1
Pod
app v1
Pod
app v2
Pod
app v2
97@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
Pod
app v1
Pod
app v2
Pod
app v2
98@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
Pod
app v1
Pod
app v2
Pod
app v2
Pod
app v2
99@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
Pod
app v1
Pod
app v2
Pod
app v2
Pod
app v2
100@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
Pod
app v1
Pod
app v2
Pod
app v2
Pod
app v2
101@kubernetesio @bretmcg @_askcarter
Rolling Update
Node1 Node2 Node3
Service
Pod
app v2
Pod
app v2
Pod
app v2
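A rolling update like the one above is triggered simply by changing the image in the deployment's pod template; the deployment then replaces pods a few at a time while the service keeps routing only to pods that are ready. Illustrative commands, with placeholder names and tags:

$ kubectl set image deployment/hello hello=example/hello:2.0.0
$ kubectl rollout status deployment/hello
$ kubectl rollout undo deployment/hello    # roll back if v2 misbehaves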
102
Lab
Rolling out updates
http://github.com/bretmcg/kubernetes-workshop
103
Implementing a CI/CD Pipeline on K8s
104@kubernetesio @bretmcg @_askcarter
1. Check in code
2. Build an Image
3. Test Image
4. Push Image to registry
5. Apply change to manifest files
Automating Deployments
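Steps 4 and 5 are where the pipeline meets Kubernetes: after the image is pushed, the job stamps the new tag into the manifests and applies them, which kicks off the same rolling update shown earlier. A very rough sketch of what a pipeline step might run (project, paths, and the TAG variable are all hypothetical; the lab's setup differs in detail):

$ docker build -t gcr.io/<project>/hello:${TAG} .
$ docker push gcr.io/<project>/hello:${TAG}
$ sed -i "s|image:.*|image: gcr.io/<project>/hello:${TAG}|" k8s/deployment.yaml
$ kubectl apply -f k8s/deployment.yaml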
105
Lab
Implementing a CI/CD Pipeline on Kubernetes
https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes
Thank you!
kubernetes.io
@bretmcg @_askcarter
