DevOps Enabling Your Team
Jacob Aae Mikkelsen
Agenda
Goals
Infrastructure
Infrastructure as Code with Terraform
Docker Orchestration with Rancher
Deploying an App
Jacob Aae Mikkelsen
Senior Software Architect at Cardlay A/S
GR8Conf EU - Organizing Team Member
External Associate Professor - University of Southern Denmark
Groovy Ecosystem Nerd
@JacobAae
Blogs at The Grails Diary
Disclaimer
Some knowledge of infrastructure is assumed
Goals
Define a cloud-based infrastructure as code
Spin up the infrastructure from scratch
Docker Orchestration tool
Deploy a demo app
Motivation
DevOps
DevOps definition
— theagileadmin.com/what-is-devops
DevOps is the practice of operations and development engineers participating
together in the entire service lifecycle, from design through the development
process to production support.
DevOps culture
Primary corollary
— theagileadmin.com/what-is-devops
DevOps is also characterized by operations staff making use of many of the same
techniques as developers for their systems work.
Reality
In reality, silos exist in IT, and DevOps teams are interested in tearing those silos down.
Enable the development team to handle deployment and configuration themselves
Requirement
We need to understand the layers below our code
Hosting platform
Domain name administration (DNS)
Network
Automation of infrastructure
Infrastructure
Requirement
An AWS VPC with hosts that can serve Docker images, orchestrated by a Rancher server
Provisioning
Single machine
Single application server
Single database server
Tools
Ansible
Chef
Puppet
Orchestration
Making all the singles mingle
Connects the application server to a valid database server
Networking
Service discovery
Tools
Terraform
CloudFormation
Application Orchestration
Rancher
— Rancher Website
Rancher Labs develops open source software that makes it easy to deploy and
manage Docker containers and Kubernetes in production on any infrastructure.
Rancher Hosts
Terraform
— Terraform website
Terraform is a tool for building, changing, and versioning infrastructure safely
and efficiently.
Key Features
Infrastructure as Code
Execution Plans
Resource Graph
Change Automation
We get
Repeatable
Versioned
Documented
Automated
Testable
Shareable
Handling of infrastructure
How it Works
Terraform creates a DAG (directed acyclic graph) of tasks
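As an illustration (a hypothetical instance and Elastic IP, not part of the talk's cluster): when one resource interpolates an attribute of another, Terraform records that reference as an edge in the graph and orders the create/destroy operations accordingly. The graph can also be rendered with the terraform graph command.

resource "aws_instance" "web" {
  ami           = "ami-75cbcb13" // RancherOS AMI used later in the talk
  instance_type = "t2.micro"
}

resource "aws_eip" "web" {
  // Referencing aws_instance.web.id creates an implicit dependency,
  // so the instance node is processed before the EIP node in the graph.
  instance = "${aws_instance.web.id}"
}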
Configuration
The set of files used to describe infrastructure in Terraform is simply known as a Terraform configuration.
Configuration
HashiCorp Configuration Language (HCL)
Less verbose than JSON
More concise than YAML
Restricted subset (compared to programming language)
Any tool can also accept JSON
Allows comments
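A small sketch of the syntax (a hypothetical variable, not from the talk's repository), showing the comment styles HCL accepts alongside ordinary assignments:

variable "environment" {
  # Shell-style comment
  // C++-style comment
  /* Block comment spanning
     multiple lines */
  description = "Name of the environment"
  default     = "gr8conf"
}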
Key components
Providers
Resources
Provisioners
Providers
Account details, e.g. for AWS
Resources
Logical representation of a "physical" resource (a physical item, even if it is actually a virtual server)
Defines the desired state of a resource
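A minimal sketch, assuming a hypothetical S3 bucket: the resource block only declares the desired end state, and Terraform works out whether to create, update or replace the real object to match it.

resource "aws_s3_bucket" "artifacts" {
  bucket = "gr8conf-demo-artifacts" // hypothetical bucket name
  acl    = "private"
}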
Provisioners
Post-creation "initialization" of resource
Has access to the current properties of a resource
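A minimal sketch, assuming a local-exec provisioner on a hypothetical instance: the provisioner runs once after the resource is created and can read the resource's own attributes through self.

resource "aws_instance" "demo" {
  ami           = "ami-75cbcb13"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    // Record the private IP of the freshly created host
    command = "echo ${self.private_ip} >> created_hosts.txt"
  }
}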
Terraform files
All .tf/.tf.json files in the working directory are loaded and appended (in alphabetical order)
Duplicate resources are not allowed
Modules
Modules are reusable components that can be configured with inputs and deliver outputs for use in other scripts
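A minimal sketch with a hypothetical greeting module (not part of the talk's repository): inputs are declared as variables, outputs are exposed with output blocks, and the caller reads them via module.<name>.<output>.

# modules/greeting/main.tf (hypothetical module)
variable "name" {
  description = "Who to greet"
}

output "message" {
  value = "Hello, ${var.name}"
}

# Calling side: pass inputs as arguments, read outputs from the module
module "greeting" {
  source = "../modules/greeting"
  name   = "GR8Conf"
}

output "greeting_message" {
  value = "${module.greeting.message}"
}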
Sample Infrastructure
Structure
.
├── modules
│   └── host-workers
│   ├── workers.tf
│   ├── inputs.tf
│   └── outputs.tf
└── terraform
├── aws.tf
├── cluster.tf
├── dns.tf
├── terraform.tf
├── variables.tf
└── vpc.tf
Structure comments
modules: Shared code, here describing the configuration of a Rancher host machine
terraform: Terraform files for the cluster we are trying to create
aws.tf
provider "aws" {
region = "eu-west-1"
profile = "gr8conf"
// access_key = "${var.aws_access_key}"
// secret_key = "${var.aws_secret_key}"
}
variables.tf (1)
variable "name" {
description = "The name given to the cluster environment "
default = "gr8conf"
}
variable "vpc_cidr" {
description = "The network CIDR."
default = "172.16.0.0/16"
}
variables.tf (2)
variable "cidrs_public_subnets" {
description = "The CIDR ranges for public subnets. "
default = ["172.16.0.0/24", "172.16.1.0/24", "172.16.2.0/24"]
}
variable "cidrs_private_subnet" {
description = "CIDRs for private subnets. "
default = ["172.16.128.0/24", "172.16.129.0/24", "172.16.130.0/24"]
}
vpc.tf (1)
module "vpc" {
source = "github.com/terraform-community-modules/tf_aws_vpc "
name = "${var.name}-vpc"
cidr = "${var.vpc_cidr}"
private_subnets = "${var.cidrs_private_subnet }"
public_subnets = "${var.cidrs_public_subnets }"
enable_dns_hostnames = true
enable_dns_support = true
azs = [ "eu-west-1a", "eu-west-1b", "eu-west-1c"]
enable_nat_gateway = "true"
tags {
"Terraform" = "true"
"Environment" = "GR8Conf"
}
}
vpc.tf (2)
resource "aws_security_group" "vpc_sg_within" {
name_prefix = "${var.name}"
vpc_id = "${module.vpc.vpc_id}"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
self = true }
egress {
from_port = 0
to_port = 0
protocol = "-1"
self = true }
lifecycle {
create_before_destroy = true }
}
vpc.tf (3)
resource "aws_security_group" "web_sg" {
name_prefix = "${var.name}"
vpc_id = "${module.vpc.vpc_id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [
"0.0.0.0/0" // Restrict IP range access here
]
}
}
vpc.tf (4)
resource "aws_eip" "nat" {
vpc = true
}
resource "aws_nat_gateway" "nat" {
allocation_id = "${aws_eip.nat.id}"
subnet_id = "${element(module.vpc.public_subnets, 0)}"
}
Host Worker module
inputs.tf (1)
variable "host_ami" {
description = "Ami for the host"
default = "ami-75cbcb13" // rancheros-v1.0.1-hvm-1
}
variable "host_instance_type" {
description = "The instance type for the hosts "
default = "t2.micro"
}
variable "vpc_id" {
description = "The VPC to launch resources in "
}
variable "name" {
description = "The cluster name"
}
inputs.tf (2)
variable "host_subnet_ids" {
description = "The subnets to launch the hosts in "
}
variable "host_security_group_ids" {
description = "Additional security groups to apply to hosts "
default = ""
}
variable "host_root_volume_size" {
description = "The size of the root EBS volume in GB "
default = 24
}
variable "loadbalancer_ids" {
description = "The loadbalancers to attach to the auto scaling group "
default = ""
}
inputs.tf (3)
variable "rancher_image" {
description = "The Docker image to run Rancher from "
default = "rancher/agent:v1.2.2"
}
variable "rancher_server_url" {
description = "The URL for the Rancher server (including the version, i.e. rancher.gr8conf.org/v1) "
}
variable "rancher_env_token" {
description = "The Rancher environment token hosts will join rancher with "
}
variable "rancher_host_labels" {
description = "Comma separate k=v labels to apply to all rancher hosts "
default = ""
}
inputs.tf (4)
variable "min_host_capacity" {
description = "The miminum capacity for the auto scaling group "
default = 1
}
variable "max_host_capacity" {
description = "The maximum capacity for the auto scaling group "
default = 4
}
variable "desired_host_capacity" {
description = "The desired capacity for the auto scaling group "
default = 1
}
inputs.tf (5)
variable "host_health_check_type" {
description = "Whether to use EC2 or ELB healthchecks in the ELB "
default = "EC2"
}
variable "host_health_check_grace_period " {
description = "The grace period for autoscaling group health checks "
default = 300
}
inputs.tf (6)
variable "host_profile" {
description = "The IAM profile to assign to the instances "
default = ""
}
variable "host_key_name" {
description = "The EC2 KeyPair to use for the machine "
}
outputs.tf
output "hosts_security_group" {
value = "${aws_security_group.worker_sg.id }"
}
workers.tf (1)
resource "aws_launch_configuration " "worker" {
name_prefix = "terraform_worker_"
image_id = "${var.host_ami}"
instance_type = "${var.host_instance_type}"
iam_instance_profile = "${var.host_profile}"
security_groups = [
"${compact(concat(list(aws_security_group.worker_sg.id), split( ",", var.host_security_group_ids))) }"
]
associate_public_ip_address = false
ebs_optimized = false // To enable tiny instances
root_block_device {
volume_type = "gp2"
volume_size = "${var.host_root_volume_size }"
delete_on_termination = true
}
More on next slide
workers.tf (2)
  user_data = <<EOF
#cloud-config
rancher:
  services:
    rancher-agent1:
      image: ${var.rancher_image}
      environment:
        - CATTLE_AGENT_IP=$private_ipv4
        - CATTLE_HOST_LABELS=${join("&", split(",", var.rancher_host_labels))}
      command: ${var.rancher_server_url}/scripts/${var.rancher_env_token}
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      privileged: true
EOF

  lifecycle {
    create_before_destroy = true
  }
}
Continued from previous slide
workers.tf (3)
resource "aws_autoscaling_group" "rancher" {
max_size = "${var.max_host_capacity}"
min_size = "${var.min_host_capacity}"
desired_capacity = "${var.desired_host_capacity }"
launch_configuration = "${aws_launch_configuration.worker.id }"
health_check_type = "${var.host_health_check_type }"
health_check_grace_period = "${var.host_health_check_grace_period }"
load_balancers = [ "${compact(split(",", var.loadbalancer_ids)) }" ]
vpc_zone_identifier = [
"${split(",", var.host_subnet_ids)}"
]
tag {
key = "Name"
value = "${var.name}-host"
propagate_at_launch = true
}
}
workers.tf (4)
resource "aws_security_group" "worker_sg" {
description = "Allow traffic to worker instances "
vpc_id = "${var.vpc_id}"
}
workers.tf (5)
resource "aws_security_group_rule" "rancher_upd_500_ingress" {
type = "ingress"
from_port = 500
to_port = 500
protocol = "udp"
security_group_id = "${aws_security_group.worker_sg.id }"
self = true
}
resource "aws_security_group_rule" "rancher_upd_4500_ingress " {
type = "ingress"
from_port = 4500
to_port = 4500
protocol = "udp"
security_group_id = "${aws_security_group.worker_sg.id }"
self = true
}
workers.tf (6)
resource "aws_security_group_rule" "rancher_upd_500_egress" {
type = "egress"
from_port = 500
to_port = 500
protocol = "udp"
security_group_id = "${aws_security_group.worker_sg.id }"
self = true
}
resource "aws_security_group_rule" "rancher_upd_4500_egress" {
type = "egress"
from_port = 4500
to_port = 4500
protocol = "udp"
security_group_id = "${aws_security_group.worker_sg.id }"
self = true
}
workers.tf (7)
resource "aws_security_group_rule" "rancher_egress" {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
security_group_id = "${aws_security_group.worker_sg.id }"
cidr_blocks = [
"0.0.0.0/0"
]
}
Cluster
cluster.tf (1)
module "hosts" {
source = "../modules/host-workers"
name = "${var.name}-cluster"
desired_host_capacity= "2"
host_key_name = "recovery"
vpc_id = "${module.vpc.vpc_id}"
host_subnet_ids = "${join(",", module.vpc.private_subnets) }"
rancher_server_url = "https://rancher.grydeske.com/v1 "
rancher_env_token = "D93C1B9F627E1B7168AE:1483142400000:7lZFwjs9lDSQskK9fbXCPwiPL2g "
rancher_host_labels = "region=eu-west-1,type.app=true,type.network=true "
loadbalancer_ids = "${aws_elb.cluster-elb-public.id}"
host_security_group_ids = "${aws_security_group.vpc_sg_within.id }"
host_ami = "ami-75cbcb13"
}
cluster.tf (2)
resource "aws_elb" "cluster-elb-public" {
subnets = ["${module.vpc.public_subnets }"]
security_groups = [ "${aws_security_group.web_sg.id }", "${aws_security_group.vpc_sg_within.id }" ]
listener {
lb_port = 80
lb_protocol = "HTTP"
instance_port = 80
instance_protocol = "HTTP" }
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 5
target = "TCP:80"
interval = 10 }
cross_zone_load_balancing = true
tags { Cluster = "${var.name}" }
}
DNS
dns.tf (1)
resource "aws_route53_zone" "gr8conf_domain" {
lifecycle {
prevent_destroy = true
}
name = "grydeske.org"
}
output "domain_ns_servers" {
// There are 4 in all
value = "n${aws_route53_zone.gr8conf_domain.name_servers .0}n${aws_route53_zone.gr8conf_domain.name_servers .1}n${aws_route53_zone.gr8con
}
cluster.tf (3)
resource "aws_route53_record" "dns-wildcard" {
name = "*"
zone_id = "${aws_route53_zone.gr8conf_domain.id }"
type = "A"
alias {
name = "${aws_elb.cluster-elb-public.dns_name}"
zone_id = "${aws_elb.cluster-elb-public.zone_id}"
evaluate_target_health = true
}
}
Let's spin up some infrastructure!
Deploying on Rancher
Rancher Features
Environments
Stacks
Services
Literature
https://www.terraform.io
http://www.oreilly.com/pub/e/3615
http://rancher.com
Questions
