Infrastructure as code with
Puppet and Apache CloudStack
David Nalley
ke4qqq@apache.org
@ke4qqq
#whoami
• Apache Software Foundation Member
• Apache CloudStack PMC Member
• Recovering Sysadmin
• Fedora Project Contributor
• Zenoss contributor
• Employed by Citrix in the Open Source Business Office
Setting the stage
Apache CloudStack is...
● an open source IaaS platform
● proven in production at massive scale
● awesome
Gorgeous UI
API
● Native: http://cloudstack.apache.org/docs/api
● EC2-compatible API
IaaS removes one constraint
No longer waiting days or weeks to get a VM provisioned
but introduces another...
Now you have to get a machine configured in a timely
manner.
Self service
● UI
● API
● Some external tool
People provision stuff...
Not ops folks
Often not familiar with environmental intricacies
Don't care
Baseline can be important....
Classification
Problem: we dynamically spin up anywhere from 1 to 500 VMs at any given time. How do
we decide which configurations apply?
Classification
The wrong way - dedicated images for each purpose
Classification
editing nodes.pp
node 'foo-356.cloud.com' {
  include httpd
}
Classification
globbing
# Puppet matches node names by regex rather than shell-style globs
node /^mysql/ {
  include mysqld
}
Classification
Everything is default
node default {
  include httpd
}
Classification
External Node Classifier
Classification
Facts
class base {
  # $::fact stands in for a custom fact (e.g. a role fact)
  case $::fact {
    'httpd': {
      include httpd
    }
    'otherrole': {
      include nginx
    }
  }
}
Classification - One Solution
● During instance provisioning define metadata.
● Custom fact for that metadata
● Case statement based on that fact
Example Metadata
role=webserver
location=datacenter1
environment=production
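The custom fact that exposes this metadata is linked a few slides on; it queries the CloudStack user-data service. Purely as an illustration (an assumption, not the linked approach), the same key=value pairs could also be dropped onto an instance as a Facter external fact file:
# /etc/facter/facts.d/cloudstack_metadata.txt (illustrative path)
# Facter reads key=value pairs from external fact files like this one,
# exposing them as the $::role, $::location and $::environment facts.
role=webserver
location=datacenter1
environment=production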
Corresponding manifest
class base {
  # $::fact corresponds to the role metadata set at provisioning time
  case $::fact {
    'webserver': {
      include httpd
    }
    'database': {
      include postgresql
    }
  }
}
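A minimal bridging sketch (illustrative, not from the deck): combine the default-node approach with the fact-based case statement, so every instance gets the same base class and the metadata-derived fact picks what it includes.
node default {
  # every instance gets base; the custom fact decides what base pulls in
  include base
}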
Links, et al.
● Fact:
http://s.apache.org/acs_userdata
● Blog with details:
http://s.apache.org/acs_userdata2
Video - go watch it
● I only have 45 minutes, so I can't delve into everything here; you should
watch the video, it's great.
● http://youtu.be/c8YWctfOpwo
And then there was a knife-plugin
The folks at Edmunds.com wrote a knife plugin for
CloudStack
The knife plugin can define an application stack, potentially hundreds of
interrelated nodes, and provision them all with a single knife command.
https://github.com/cloudstack-extras/knife-cloudstack
Deploying a machine with knife
~ knife cs server create
And a full application-stack definition:
{
  "name": "hadoop_cluster_a",
  "description": "A small hadoop cluster with hbase",
  "version": "1.0",
  "environment": "production",
  "servers": [
    {
      "name": "zookeeper-a, zookeeper-b, zookeeper-c",
      "description": "Zookeeper nodes",
      "template": "rhel-5.6-base",
      "service": "small",
      "port_rules": "2181",
      "run_list": "role[cluster_a], role[zookeeper_server]",
      "actions": [
        { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] }
      ]
    },
    {
      "name": "hadoop-master",
      "description": "Hadoop master node",
      "template": "rhel-5.6-base",
      "service": "large",
      "networks": "app-net, storage-net",
      "port_rules": "50070, 50030, 60010",
      "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
    },
    {
      "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
      "description": "Hadoop worker nodes",
      "template": "rhel-5.6-base",
      "service": "medium",
      "port_rules": "50075, 50060, 60030",
      "run_list": "role[cluster_a], role[hadoop_worker], role[hbase_regionserver]",
      "actions": [
        { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
        { "http_request": "http://${hadoop-master}:50070/index.jsp" }
      ]
    }
  ]
}
The cluster-level metadata:
{
  "name": "hadoop_cluster_a",
  "description": "A small hadoop cluster with hbase",
  "version": "1.0",
  "environment": "production",
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
"servers": [
{
"name": "zookeeper-a, zookeeper-b, zookeeper-c",
"description": "Zookeeper nodes",
"template": "rhel-5.6-base",
"service": "small",
"port_rules": "2181",
"run_list": "role[cluster_a], role[zookeeper_server]",
"actions": [
{ "knife_ssh": ["role:zookeeper_server", "sudo chef-client"]
}
]
},
The Hadoop master:
{
  "name": "hadoop-master",
  "description": "Hadoop master node",
  "template": "rhel-5.6-base",
  "service": "large",
  "networks": "app-net, storage-net",
  "port_rules": "50070, 50030, 60010",
  "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
},
The Hadoop workers:
{
  "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
  "description": "Hadoop worker nodes",
  "template": "rhel-5.6-base",
  "service": "medium",
  "port_rules": "50075, 50060, 60030",
  "run_list": "role[cluster_a], role[hadoop_worker], role[hbase_regionserver]",
  "actions": [
    { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
    { "http_request": "http://${hadoop-master}:50070/index.jsp" }
  ]
}
Deploy that Hadoop cluster with
knife cs stack create hadoop_cluster_a
I was jealous....
Then at FOSDEM 2012
● A CloudStack user showed me Puppet types and providers for OpenNebula.
● https://puppetlabs.com/blog/puppetizing-opennebula/
● They wanted the same awesomeness for CloudStack....
Why?
● They wanted to define each of their application stacks in Puppet, so that
not only the software configuration on each machine, but the machines
themselves, would be managed by Puppet.
● Automated deployment of test environments that are exactly the same
● It moves beyond machine configuration to whole-infrastructure configuration
What we are used to
● Puppet _defines_ the configuration
within the machine
What we want
● Puppet _defines_ the machine.
● Puppet _defines_ a collection of machines.
● Puppet _defines_ the machines, the networks, and the rest of the infrastructure.
Then at PuppetConf
● There were Google Compute Engine types and providers for Puppet.
● Dan Bode gave a presentation showing off the work he had done... that
presentation is worth seeing...
● http://www.slideshare.net/bodepd/google-compute-presentati
Puppet and Apache CloudStack
And then for Christmas
● Puppet types and providers arrived, courtesy of Dan Bode
● https://github.com/bodepd/cloudstack_resources
How does this work?
cloudstack_instance { 'foo1':
  ensure  => present,
  flavor  => 'Small Instance',
  zone    => 'FMT-ACS-001',
  image   => 'CentOS 5.6(64-bit) no GUI (XenServer)',
  network => 'puppetlabs-network',
  # domain
  # account
  # hostname
}
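Not shown in the deck, but typical of ensurable Puppet types, and assumed here: the same resource should be destroyable by flipping ensure.
cloudstack_instance { 'foo1':
  # assumption: the provider tears the instance down on ensure => absent
  ensure => absent,
}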
Setting defaults
Cloudstack_instance {
  image   => 'CentOS 6.3',
  flavor  => 'M1.medium',
  zone    => 'San Jose',
  network => 'davids_net',
  keypair => 'david_keys',
}
cloudstack_instance { 'foo2':
  # 'foo2' is an illustrative title; resources declared here pick up the defaults above
  ensure => $::ensure,
  group  => 'role=db',
}
A simple stack
class my_web_stack {
  cloudstack_instance { 'foo4':
    ensure => present,
    group  => 'role=apache',
  }
  cloudstack_instance { 'foo5':
    ensure => present,
    group  => 'role=db',
  }
}
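A rough sketch of applying it (the node name and wiring are assumptions, not from the deck), from site.pp on a node that has the cloudstack_resources module and CloudStack API credentials configured:
node 'provisioner.example.com' {
  # illustrative node; declaring the class declares, and provisions, both instances
  include my_web_stack
}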
Questions
Contact
● Project
– http://cloudstack.apache.org
– #cloudstack on irc.freenode.net
● Me
– ke4qqq on irc.freenode.net
– ke4qqq@apache.org

Editor's Notes (field-by-field annotations for the knife stack definition)

  1. Machine names
  2. A human readable description
  3. Disk image
  4. Amount of CPU/RAM
  5. Firewall Rules
  6. Stuff for config mgmt.
  7. Only new stuff here is 'networks'; previously a default network was assigned.