Continuous Delivery of Cloud Applications
with Docker Containers and IBM Bluemix
Florian Georg
IBM Cloud Advisor
@florian_georg | #ibmcloudadvisor
florian.georg@ch.ibm.com
IBM Cloud Advisors
World-wide Expert Team
"All things Cloud"
Call us, we won't charge you
Continuous Delivery, Containers, PaaS
Benefits of Continuous Delivery
Validated Learning
build the right product
Increase stability, reduce
risk
Real project progress
(done-done)
Deliver Software
Dark Launching
Canary Releases
Feature Toggles
Green/Blue Deployments
...
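Feature toggles, as listed above, let you ship dark code and switch it on later without a redeploy. A minimal sketch of the idea (illustrative only; real systems typically back the flags with a config service rather than an in-process dict):

```python
# In-process feature flags; a production setup would load these from a
# config store so they can change without redeploying.
FEATURE_FLAGS = {
    "new_checkout": False,   # dark-launched: deployed but switched off
    "canary_search": True,   # enabled, e.g. for a canary release
}

def is_enabled(feature: str) -> bool:
    """Unknown features default to off, so forgotten flags fail safe."""
    return FEATURE_FLAGS.get(feature, False)

def render_checkout() -> str:
    # The same deployed artifact serves both paths; the toggle decides at runtime.
    return "new checkout" if is_enabled("new_checkout") else "old checkout"
```

The key property is that enabling `new_checkout` is a data change, not a deployment.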
Why Containers?
Portable Workloads (Workstation to Cloud)
Proven technology, lightweight, fast, easy to use
Separation of Concerns + common interfaces
Code
Processes
Package Managers
...
Lifecycle Management
Logging
Monitoring
Networking
Dev Ops
Linux-based, isolated
instances
Common commands for all
types of images / containers
Why a Cloud PaaS?
Build software, not tech stacks
Security + Speed
Elastic scaling and billing
Compose application services
Enable modern Architecture
Patterns (Microservices etc.)
"The entire history of software engineering
is that of the rise in levels of abstraction."
- Grady Booch
https://developer.ibm.com/bluemix/docs/actionable-architecture-building-web-application-hosting-containers/
http://www.cloud-council.org/
Web App Reference Architecture
Cloud Foundry Overview
World's largest public
CloudFoundry instance
IBM invested > $1 billion
100+ Services
(IBM / 3rd Party)
Significant free tiers - e.g. run
small apps completely free,
forever
http://bluemix.net
IBM Container Services
IBM Containers (beta)
Private Docker Registry
Trusted Images (Node, Java Liberty)
Container Groups (scale, failover)
Monitoring, logging (logstash)
Run on Bare-Metal Servers (latency)
Public IPs
Bind to Bluemix CloudFoundry PaaS
Services
http://ice.mybluemix.net/
Web Dashboard
Scoping according to CloudFoundry concepts (org/space)
Manage containers/container groups through dashboard
"ice" Command Line Tool
Install & usage docs:
https://www.ng.bluemix.net/docs/starters/container_cli_ov.html#container_cli_ov
Win, Linux, Mac
some prereqs
Python 2.7
CloudFoundry
CLI ("cf")
Docker
# install latest version (3.0) of CLI
> sudo pip install https://static-ice.ng.bluemix.net/icecli-3.0.zip
[...]
> ice info
Date/Time : 2015-06-17 11:43:30.491651
Debug mode : False
CCS host/url : https://api-ice.ng.bluemix.net/v2/containers
Registry host : registry-ice.ng.bluemix.net
Bluemix api host/url : https://api.ng.bluemix.net
Bluemix Org : Florian Georg - demo (f36d3cd8-70ba-4d96-a571-f46a451bdaaf)
Bluemix Space : dev (291b7833-0fb5-484e-95af-5bfc0e252080)
ICE CLI Version : 3.0 481 2015-06-16T21:08:26
CCS API Version : 2.0 1120 2015-06-16T16:52:41
Repository namespace : faxg
Containers limit : Unlimited
Containers usage : 5
Containers running : 4
CPU limit (cores) : Unlimited
CPU usage (cores) : 6
Memory limit (MB) : 2048
Memory usage (MB) : 1536
Floating IPs limit : 2
Floating IPs allocated : 1
Floating IPs bound : 1
Login
# login and (optionally) select a target space for running containers
> ice login --space dev
[...]
# Set repository namespace. This needs to be done only once
> ice namespace set myNamespace
Authenticate to cloud service + docker registry
SSO with your Bluemix account
CloudFoundry Scoping Model
Image registry per "Organization"
Containers run in "Spaces"
​You must create & set a unique repository namespace once
"ice namespace set <myNamespace>"
Pull Docker Image
# pull image from remote registry
> ice --local pull registry-ice.ng.bluemix.net/ibmnode
# you could also copy from public registry like dockerhub (ice >= 3.0)
> ice cpi ansi/mosquitto registry-ice.ng.bluemix.net/faxg/mosquitto
Image naming:
registry-ice.ng.bluemix.net/<trustedImageName>​
registry-ice.ng.bluemix.net/<namespace>/<imageName>
(public trusted images currently 'ibmnode' and 'ibmliberty')
"ice cpi" is combined pull - tag - push
Change, Commit & Tag Image
# run a container and make some changes
> ice --local run registry-ice.ng.bluemix.net/ibmnode apt-get update -y
[...]
# commit changes to container (id #1234...) into a new (local) image
> ice --local commit 1234 myAppImage
# tag the new image into your remote repository
> ice --local tag myAppImage registry-ice.ng.bluemix.net/myNamespace/myAppImage
Before pushing images to your remote repository, you must tag them
like this:
ice --local tag <localImageName> registry-ice.ng.bluemix.net/<Namespace>/<remoteImageName>
Push Image & Run in the Cloud
# push the tagged image to your remote repository on Bluemix
> ice --local push registry-ice.ng.bluemix.net/myNamespace/myAppImage
# run a new container on the cloud
> ice run --name myAppContainer --memory 256 myNamespace/myAppImage echo "Some command"
# retrieve stdout / logs
> ice logs myAppContainer
Containers have "t-shirt sizes" (small, medium, large...)
combination of memory and local storage
Specify --memory <MB> from CLI, or use web console
Request & Bind Public IP
# request a public IP
> ice ip request
Successfully obtained ip: "129.41.232.25"
# bind floating IP to container
> ice ip bind 129.41.232.25 myAppContainer
You have a quota of public IP addresses
Unbound IPs are "floating" in your pool until released
Containers also have a private IP on the same network
Create & Mount Storage
> ice volume create myStorage
> ice run --name containerWithVolume --volume myStorage:/var/images myNamespace/myAppImage
Persistent storage (as opposed to ephemeral container "storage"!)
does not get deleted with container
must be mounted on container startup ("run")
Bind to CloudFoundry Services
> ice run --name myAppContainer --bind myDB myNamespace/myAppImage
Platform injects VCAP_SERVICES environment variable
Use Bluemix Services in your container
You need a CloudFoundry "proxy app" that binds to the services
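The injected VCAP_SERVICES variable holds a JSON document keyed by service label, each entry a list of bound instances with their credentials. A sketch of reading it from inside a container (the `cloudantNoSQLDB` label and credential shape below are illustrative, not taken from this deck):

```python
import json
import os

def get_service_credentials(service_label: str) -> dict:
    """Return credentials of the first bound instance of a service,
    following the CloudFoundry VCAP_SERVICES convention."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = vcap.get(service_label, [])
    if not instances:
        raise KeyError("no bound instance of " + service_label)
    return instances[0]["credentials"]

# Simulate what the platform would inject for a hypothetical bound service:
os.environ["VCAP_SERVICES"] = json.dumps({
    "cloudantNoSQLDB": [
        {"name": "myDB", "credentials": {"url": "https://user:pass@host"}}
    ]
})
creds = get_service_credentials("cloudantNoSQLDB")
```

Defaulting to `"{}"` keeps the code usable in local runs where the platform has injected nothing.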
Create Container Groups
# Create a load-balancing container group (open Port 80, auto-recovery, 2 containers)
> ice group create -p 80 --auto --desire 2 --name myGroup myNamespace/myImageName
# create a route (http://groupRoute.mybluemix.net) for this container group
> ice route map --hostname groupRoute --domain mybluemix.net myGroup
Container groups load-balance incoming traffic to a set of
containers
Restart crashed containers with --auto
--desire <numContainers>
Map external URL route
Key Features
Private registry w/ access controls
Push / Pull images between on-prem and off-prem registries
Docker CLI compatible (reuse existing tools)
Load-balance + auto-recovery (container incl. running services)
Easy public IP binding
Container-level logging & monitoring
Container-level private, secure networking (IP-based)
Container-level attachable storage
Bind containers to Bluemix PaaS Services
Integrated CI/CD + operation lifecycle
Integration with Bluemix Router (DNS, load-balance etc.)
Delivery Pipeline
with Green/Blue(*) Deployments
(*) called "red_black" in our scripts
IBM DevOps Services (IDS)
Delivery Pipeline
Cloud IDE (Code, Build, Tracking):
http://hub.jazz.net
Stage 1: Build
Triggered on push to connected Git repo/branch
You need a Dockerfile
You provide the imageName to use
Tags the new image into your private Docker repository
Version tag will be set to the build# (from Jenkins)
(you would want to add some unit testing job here)
# FULL_REPOSITORY_NAME is something like
# "ice-registry.ng.bluemix.net/<Namespace>/<imageName>:<version>"
ice build --pull --tag ${FULL_REPOSITORY_NAME} ${WORKSPACE}
Pseudocode
Stage 2: Staging
Triggered on successful "Build" stage
Gets artifacts + variables from build stage
Clones deployscripts from external repo
Clean deploy as single container into staging space
Updates internal IDS inventory
# Clone deployscripts into workspace
git_retry clone https://github.com/Osthanes/deployscripts.git deployscripts
# Deploy "clean" as single container into staging space
# /bin/bash deployscripts/deploycontainer.sh :
# clean up previous containers
ice rm -f ${CONTAINER_NAME}_${PREVIOUS_VERSION}
# start new container
ice run --name ${CONTAINER_NAME}_${VERSION} ${ICE_ARGS} ${IMAGE_NAME}
wait_for_startup()
# reclaim floating IP & bind to current container version
ice ip unbind ${FLOATING_IP} ${CONTAINER_NAME}_${PREVIOUS_VERSION}
ice ip bind ${FLOATING_IP} ${CONTAINER_NAME}_${VERSION}
# update IDS inventory
[...] Pseudocode
Stage 3: Production
Triggered manually or auto
starts a new container group
zero downtime "cut-over" by re-mapping URL route
You may keep some "old" groups running for rollback
# Clone deployscripts into workspace
git_retry clone https://github.com/Osthanes/deployscripts.git deployscripts
# Deploy "red_black" as container group into production space
# /bin/bash deployscripts/deploygroup.sh :
# create new container group & wait until up
ice group create --name ${GROUP_NAME}_${VERSION} ${PARAMS} ${IMAGE_NAME}
wait_for_startup()
# map DNS route to new group, zero downtime cut-over
ice route map --hostname $HOSTNAME --domain $DOMAIN ${GROUP_NAME}_${VERSION}
# clean up version-2 container group
ice group rm -f ${GROUP_NAME}_${VERSION - 2}
# update IDS inventory
[...]
Pseudocode
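The red_black scheme above cuts over by remapping the route to the new group and deletes only the group two versions back, so one previous group stays available for rollback. A toy sketch of that bookkeeping (names and dict-based state are hypothetical; this is not the Osthanes deployscript itself):

```python
def red_black_deploy(groups: dict, routes: dict, name: str, version: int,
                     keep: int = 2) -> None:
    """Simulate a red_black deployment: start a new container group, remap
    the route (zero-downtime cut-over), and remove the group `keep`
    versions back so a rollback target survives."""
    new_group = f"{name}_{version}"
    groups[new_group] = "RUNNING"            # ice group create ... ; wait_for_startup()
    routes["app.mybluemix.net"] = new_group  # ice route map ...
    stale = f"{name}_{version - keep}"
    groups.pop(stale, None)                  # ice group rm -f <version - 2>

# Two versions already deployed; traffic on version 4:
groups = {"myGroup_3": "RUNNING", "myGroup_4": "RUNNING"}
routes = {"app.mybluemix.net": "myGroup_4"}
red_black_deploy(groups, routes, "myGroup", 5)
```

After the call, the route points at `myGroup_5`, `myGroup_4` is kept for rollback, and `myGroup_3` is gone.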
Demo(*)
(*) https://www.youtube.com/watch?v=5NRHVtguODM
Confused about "The Cloud"?
