Scaling Jenkins with Kubernetes 1.2
About me
Senior DevOps Engineer @ Glide
rubygems.org/profiles/amimahloof
Contributions to open source via RubyGems and GitHub:
github.com/innovia
linkedin.com/in/amimahloof
Glide environment
Multiple Redis Servers
MySQL
DynamoDB
Multiple Background Queue Processors
App Server
Multiple Web Socket Servers
CloudSearch
Previous Jenkins setup
Single Jenkins Master machine for server code builds
Single Jenkins Master machine for Android client builds
Dedicated MySQL database per build
Dedicated port per build service
Each build environment encapsulated and managed by Eye, a process monitor:
https://github.com/kostya/eye
Previous Jenkins Issues
Under-utilized instances running 24/7 (expensive!)
Port collisions
Out of memory issues - build failures
Databases needed to be created and wiped per build
Debugging failed tests was extremely hard
Updating the Jenkins EC2 image, especially for Android, was a challenge
Scaling Jenkins with Kubernetes
[Architecture diagram] The new setup, in outline:
Jenkins Master BackEnd RC and Jenkins Master Android RC (one replication controller per master)
Jenkins Ingress Controller POD with an nginx configmap
A single ELB fronting both CIs' addresses, routing to the Backend Service (IP)
Backend POD and Jenkins Slave PODs, created with kubectl
NFS-SERVER POD backed by an EBS Volume
Current Jenkins Kubernetes Plugin
https://github.com/jenkinsci/kubernetes-plugin
Supports only a single Docker image
(the jenkins-slave Docker image; it creates a POD with that image)
Does not support multiple containers in a POD
Does not support Persistent Volumes
Modified Jenkins Kubernetes Plugin
https://github.com/innovia/kubernetes-plugin
Supports reading and parsing a POD template file
Supports multiple containers in a POD
Supports Persistent Volumes: EBS, EmptyDir, and HostPath volumes
Does not support a POD template per job
(system-wide global config)
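A POD template file of the kind the modified plugin parses might look like the minimal sketch below; the container names, images, and volume names are hypothetical examples, not the plugin's required values:

```yaml
# Hypothetical POD template with multiple containers and a volume,
# as supported by the modified plugin; names and images are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
    - name: jnlp-slave
      image: jenkins-slave:latest    # connects back to the Jenkins master
    - name: mysql
      image: mysql:5.6               # dedicated database for this build
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-data
      emptyDir: {}                   # wiped automatically when the POD goes away
```

An EmptyDir volume here gives each build a throwaway database, which is what removes the create-and-wipe-per-build chore from the old setup.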
Build Flow Part 1
Pull Request triggers the build
Jenkins Master reads the POD template
Build starts: the POD object is sent to Kubernetes
The Jenkins Slave POD downloads the JNLP file from the Jenkins Master
and connects to the Jenkins Master slave port
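The JNLP hand-off above can be sketched as the command the slave container runs; the master URL, node name, and secret are placeholders that the plugin fills in at launch time (echo is used here as a dry run):

```shell
# Hypothetical JNLP launch inside the slave POD; values are placeholders.
JENKINS_URL="http://jenkins-master:8080"
NODE_NAME="jenkins-slave-pod-1"
JNLP_URL="${JENKINS_URL}/computer/${NODE_NAME}/slave-agent.jnlp"

# Drop the echo to actually start the agent and connect to the master.
echo java -jar slave.jar -jnlpUrl "$JNLP_URL" -secret "$JNLP_SECRET"
```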
Build Flow Part 2
Jenkins Master starts the job on the slave; the slave then:
git fetches and merges the server code
creates a service named JOB_NAME-BUILD_NUMBER
creates a PV and PVC named JOB_NAME-BUILD_NUMBER
creates a POD named JOB_NAME-BUILD_NUMBER
waits for the setup-complete file (checks dependencies)
starts the tests
tears down the POD, SERVICE, PV, and PVC
cleans up the temp location
submits the result to Bitbucket
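The per-build resource lifecycle above can be sketched as kubectl commands; the manifest file names and label are hypothetical, and echo is used as a dry run (drop it to actually issue the commands):

```shell
# Sketch of the per-build resource lifecycle. Resource names follow the
# JOB_NAME-BUILD_NUMBER convention; file names and labels are examples only.
JOB_NAME="backend-tests"
BUILD_NUMBER="42"
NAME="${JOB_NAME}-${BUILD_NUMBER}"

echo kubectl create -f "service-${NAME}.yaml"   # per-build service
echo kubectl create -f "pv-pvc-${NAME}.yaml"    # persistent volume + claim
echo kubectl create -f "pod-${NAME}.yaml"       # the build POD itself

# ... run the tests, then tear everything down in one sweep:
echo kubectl delete pod,service,pv,pvc -l build="${NAME}"
```

Naming every resource (and labeling it) after JOB_NAME-BUILD_NUMBER is what makes the teardown step a single, safe delete per build.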
Advantages
On-demand PODs
Complete test isolation
Flexible POD template, source-controlled by the developers
Jobs queue up in the Kubernetes scheduler until node resources become available (as running builds finish)
Scalable Kubernetes nodes
The plugin keeps track of the POD; if the POD dies mid-build, it launches a new slave for that job
Persistent storage (EBS via NFS)
Spot instances via node selectors
Kubernetes on Spot Instances
Save 70-90% over On Demand
Managed by Spot Fleet to reduce downtime
Different fleets for different types of Kubernetes nodes allow dynamic pod allocation using a Node Selector
Until this is supported natively in Kubernetes 1.3, we have to create our own fleets, e.g.
https://github.com/kubernetes/kubernetes/issues/24472#issuecomment-211975112
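Steering a pod onto spot-backed nodes with a node selector can be sketched as the fragment below; the "node-type: spot" label is an assumption and must match whatever label the spot-fleet nodes actually register with:

```yaml
# Hypothetical pod spec fragment: schedule only onto nodes labeled as
# spot instances. The label key/value here are examples and must match
# a label applied to the spot-fleet nodes.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  nodeSelector:
    node-type: spot
  containers:
    - name: jnlp-slave
      image: jenkins-slave:latest
```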
Thank you!
Questions?
Slides available
http://www.slideshare.net/AmiMahloof/scaling-jenkins-with-kubernetes