The Automated Monolith
Marco Seifried (@marcoseifried)
Tora Onaca (ro.linkedin.com/in/toravasilescu)
Paul Vidrascu (ro.linkedin.com/in/paulvidrascu)
dev.haufe-lexware.com
github.com/Haufe-Lexware
@HaufeDev
Build, Deploy and Testing using Docker, Docker Compose, Docker Machine, go.cd and Azure
http://devops.com/wp-content/uploads/2014/10/2011.09.18_code_reviews.png
No need to double-check this change list; if some problems remain, the reviewer will catch them.
No need to look at this change list too closely; I'm sure the author knows what he's doing.
Haufe Strategy - Architecture Principles
Business value over technical strategy
Strategic goals over project-specific benefits
Composability over silos
Shared services over specific-purpose implementations
Evolutionary refinement over pursuit of initial perfection
Design for obsolescence over building for eternity
Good enough over best of breed
Declarative processes over implicit knowledge
What do we want to achieve?
Speed
Business value in
manual deployments?
Reduce Error
Prepare for change
How?
Container
Infrastructure As Code
Immutable Servers
Blue / Green Deployments
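The blue/green idea above can be sketched as a few lines of shell: deploy to whichever color is currently idle, then switch traffic once it is healthy. This is a hypothetical sketch; the stack names (`app_blue`, `app_green`) and the state handling are assumptions, not the project's actual setup.

```shell
# Blue/green sketch: pick the idle color to deploy to next.
# In practice "current" would come from a state file or the load balancer config.
current="blue"
if [ "$current" = "blue" ]; then next="green"; else next="blue"; fi
echo "deploying to: $next"
# docker-compose -p "app_$next" up -d    # hypothetical: start the idle stack
# ...run smoke tests, then route traffic to "app_$next" and stop "app_$current"
```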
Project and Approach
Features: User-, License- &
Subscription Management
Goal: Microservices, fully
automated deployment, APIs
Start: Fully automated
deployment
First iteration: Automated test
environment per client on
Azure
How – Automation Tools and Technologies
go.cd – Continuous Delivery // pipelines to run scripts // create Docker images
// deploy onto Azure
Bitbucket – repository for config
Deploy in Azure
Artifactory - Internal docker repository for images
Run anywhere, describe everything in Dockerfiles
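"Describe everything in Dockerfiles" might look like the following minimal sketch for an app-server image. The base image, war file name, and paths are assumptions for illustration, not the project's actual Dockerfile.

```shell
# Write a hypothetical minimal Dockerfile for an app-server image.
cat > Dockerfile.example <<'EOF'
FROM tomcat:8-jre8
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
EOF
cat Dockerfile.example
```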
How – Automation Components and Flow
Break down – step 0
Independently of pipeline execution, every
time an artifact is built in TFS it is
pushed to the Haufe Artifactory.
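As a hedged sketch of that step-0 push: Artifactory accepts artifact uploads via an HTTP PUT. The repository path, artifact name, and credentials below are placeholders, and the actual upload is left commented out.

```shell
ARTIFACT="myapp-1.0.war"                 # placeholder for a TFS build output
TARGET="https://artifactory.example.com/artifactory/libs-release/myapp/$ARTIFACT"
echo "would upload to: $TARGET"
# curl -u "$ART_USER:$ART_PASS" -T "build/$ARTIFACT" "$TARGET"   # PUT deploys the artifact
```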
Step 1 – building the Docker images
A new commit in the BitBucket
repository triggers the pipeline.
The pipeline contains steps for
building all the needed images
(app server, db server, JMS server,
logging, monitoring, test)
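The image-building steps above can be sketched as one loop over the services named on the slide. The registry host and the one-directory-per-service Dockerfile layout are assumptions; the build and push commands are shown commented.

```shell
# One build (and, in the real pipeline, push) per service image.
REGISTRY="artifactory.example.com"        # placeholder for the internal registry
for svc in app-server db-server jms-server logging monitoring test; do
  echo "building $REGISTRY/project/$svc"
  # docker build -t "$REGISTRY/project/$svc:latest" "$svc/"
  # docker push "$REGISTRY/project/$svc:latest"
done
```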
Step 1 – building the Docker images (cont.)
Logging
Docker logs alone are not enough
Alternative: docker exec into the container and read the log files
Map an external volume and copy the logs there
Best alternative: use what everyone else uses, and it works like a charm: Kibana + Elasticsearch + Fluentd
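Docker's Fluentd logging driver is what makes the EFK option work: containers started with it send stdout/stderr straight to Fluentd, which forwards the events to Elasticsearch for viewing in Kibana. The address, tag, and image below are placeholders; this sketch only assembles the command string rather than running it.

```shell
# Flags that switch a container from the default json-file logging to fluentd.
LOG_OPTS="--log-driver=fluentd --log-opt fluentd-address=logging-host:24224 --log-opt tag=app-server"
RUN_CMD="docker run -d $LOG_OPTS myapp/app-server"   # myapp/app-server is a placeholder image
echo "$RUN_CMD"
```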
Monitoring
See the status of the containers using
cAdvisor
InfluxDB
Grafana
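cAdvisor is started as a container itself; its standard invocation (per the cAdvisor README) mounts parts of the host read-only so it can read per-container stats. Shown here as a command string only, so nothing is executed in this sketch.

```shell
# Standard cAdvisor run command, kept as a string for illustration.
CADVISOR_CMD='docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:rw \
  -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest'
echo "$CADVISOR_CMD"
```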
Step 2 – Deploy and Run
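A hedged sketch of what "Deploy and Run" involves with the tools from the title slide: docker-machine provisions an Azure host, the local Docker client is pointed at it, and docker-compose starts the stack there. The machine name is a placeholder and the subscription id is deliberately left as `<SUBSCRIPTION_ID>`; the steps are kept as a string here rather than executed.

```shell
# Deploy sketch: provision an Azure host and bring up the stack on it.
DEPLOY_STEPS='docker-machine create --driver azure --azure-subscription-id <SUBSCRIPTION_ID> test-env-client1
eval "$(docker-machine env test-env-client1)"
docker-compose up -d'
echo "$DEPLOY_STEPS"
```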
Summary
Two months of effort to get to Azure, with teams distributed across two countries
Creation of dev environment reduced from one week to 30 minutes
Docker is constantly improving (better networking, built-in drivers, etc.)
Baseline for future improvements
What’s next:
Move this along to production
Allow clients to choose the version of the images to use
Improve some startup times
Try out different cloud solutions
Mulțumim
Thank You


Editor's Notes

  1. One of the reasons for considering automation
  2. We spoke about this in the previous presentation about API Management. Breaking it down: Declarative processes over implicit knowledge means having automation wherever you can
  3. So we have the technical reasoning behind it (technology strategy), but what for? Why should we spend money on this, what's in it? Be prepared for change: for example, how do we stay hosting-agnostic?
  4. Containers give us the possibility to run in house, at hosters, or in the cloud. Infrastructure as code: no more implicit knowledge in people's heads; treat it as code. Build up from scratch -> no updates, no hot-fixes, no manual changes -> immutable servers, infrastructure as code
  5. Approach: not all at once; nothing comes for free. BUT it did not take us long to get to the first iteration
  6. What's go.cd, what's Artifactory, what's Bitbucket – all those are already there. The rest isn't and gets created automatically. Idea of using pipelines: explain each pipeline at a high level – how they are triggered, … This can be the base for a lot of things, but it's one fully automated flow. What do we plan to do next?
  7. Easy step – whenever a new artifact is built in TFS, it is pushed to the Haufe Artifactory. Show in browser
  8. Show the actual scripts in the browser. Open Bitbucket, go.cd and the Haufe registry
  9. Things we get almost for free – logging and monitoring. Explain what each does and show the log console and the monitoring site
  10. Once again, at every step show the browser with the pipeline. The go.cd Linux agent runs Docker commands (using the Docker plugin, which basically does what a build script would do: build the image, log in to the Artifactory, push it there). Docker Compose is run in Azure (it gets all the images from the Haufe Artifactory): EFK, HGSP + OpenAM, monitoring. The go.cd Windows agent runs PowerShell commands – in our case for opening the ports. At the end, once the deployment environment is done, the test containers are run (Docker Compose – Maven JUnit and SoapUI tests on Linux)