teuto.net Netzdienste GmbH is an 18-employee Linux systems house and web-development company located in Bielefeld, Germany. They have been offering an OpenStack Ceph storage service in a closed beta since September 2013, running on 20 compute nodes and 5 Ceph storage nodes. Key factors in their decision to use one Ceph cluster to meet all of OpenStack's storage needs included no single point of failure, seamless scalability, and commercial support from Inktank.
2. teuto.net Netzdienste GmbH
● 18 employees
● Linux systems house and web development
● Ubuntu Advantage Partner
● OpenStack Ceph service
● Offices and data center in Bielefeld
3. Why OpenStack?
Infrastructure as a Service
● Cloud-init (automated instance provisioning)
● Network Virtualization
● Multiple Storage Options
● Multiple APIs for Automation
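Cloud-init consumes "user data" handed to the instance at boot. A minimal sketch of preparing such a payload (the package name and cloud-config contents are illustrative assumptions, not from the deck); the Nova API expects the user data base64-encoded:

```python
import base64

# Illustrative cloud-config document: update packages and start a web server.
cloud_config = """#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
"""

# Nova's compute API expects user data as a base64-encoded string.
user_data = base64.b64encode(cloud_config.encode("utf-8")).decode("ascii")
```

The resulting `user_data` string would be passed along with the instance-creation request.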
4. ● closed beta since September 2013
● updated to Havana in October
● Ubuntu Cloud Archive
● 20 Compute Nodes
● 5 Ceph Nodes
● Additional monitoring with Graphite
9. Key Facts for our Decision
● One Ceph cluster fits all OpenStack needs
● No "single point of failure"
● POSIX compatibility via RADOS Block Device
● Seamless scalability
● Commercial support from Inktank
● Open source (LGPL)
10. RADOS Block Device (RBD)
● Live migration
● Efficient Snapshots
● Different types of storage available (tiering)
● Cloning for fast restore or scaling
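The snapshot-and-clone workflow behind "fast restore or scaling" can be sketched with the `rbd` CLI; the pool, image, and snapshot names below are hypothetical, and the commands are only assembled here, not executed against a cluster:

```python
def clone_commands(pool, image, snap, clone):
    """Assemble the rbd CLI calls for a copy-on-write clone of an image."""
    return [
        # Take a point-in-time snapshot of the source image.
        f"rbd snap create {pool}/{image}@{snap}",
        # A snapshot must be protected before it can be cloned.
        f"rbd snap protect {pool}/{image}@{snap}",
        # Create a copy-on-write clone, usable immediately.
        f"rbd clone {pool}/{image}@{snap} {pool}/{clone}",
    ]

cmds = clone_commands("volumes", "base-image", "golden", "restore-1")
```

Because the clone is copy-on-write, it is available right away, which is what makes this attractive for fast restores and for spinning up many instances from one golden image.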
11. How to start
● Determine cluster size
– an odd number of nodes so that a quorum can be negotiated
● Start small with at least 5 nodes
● Either 8 or 12 disks per chassis
● One journal per disk
● 2 journal SSDs per chassis
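The sizing rules above reduce to simple arithmetic: one journal per data disk, spread evenly over the journal SSDs in the chassis (the disk and SSD counts come from the slide):

```python
def journals_per_ssd(disks_per_chassis, ssds_per_chassis):
    """One journal partition per data disk, split evenly across the SSDs."""
    # The layout only works out cleanly if the disk count divides evenly.
    assert disks_per_chassis % ssds_per_chassis == 0
    return disks_per_chassis // ssds_per_chassis

# With 12 disks and 2 journal SSDs per chassis, each SSD carries 6 journals.
twelve_disk = journals_per_ssd(12, 2)  # → 6
# With the smaller 8-disk chassis, each SSD carries 4 journals.
eight_disk = journals_per_ssd(8, 2)    # → 4
```

Note that each journal SSD becomes a failure domain for all the OSDs journaling on it, which is one reason to keep the per-SSD journal count moderate.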
14. Ceph specifics
● Data is distributed throughout the cluster
● Unfortunately, this destroys data locality
● Trade-off between block size and IOPS:
the bigger the blocks, the better the sequential performance
● Double write (journal plus data disk); SSD journals strongly advised
● Long-term fragmentation from small writes
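The two performance points above can be illustrated numerically; the IOPS and bandwidth figures are made-up example values, not measurements from the cluster:

```python
def seq_throughput_mb_s(iops, block_size_kb):
    """Sequential throughput is IOPS times block size: at a fixed IOPS
    budget, bigger blocks yield more sequential bandwidth."""
    return iops * block_size_kb / 1024

def effective_client_bw(disk_bw_mb_s):
    """With the journal on the same spindle, every write hits the disk
    twice (journal + data), roughly halving usable client bandwidth.
    This is why separate SSD journals are strongly advised."""
    return disk_bw_mb_s / 2

# At 1000 IOPS: 64 KB blocks give 62.5 MB/s, 1 MB blocks give 1000 MB/s.
small_blocks = seq_throughput_mb_s(1000, 64)    # → 62.5
large_blocks = seq_throughput_mb_s(1000, 1024)  # → 1000.0
```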
16. Ceph Monitoring in OpenStack
● Ensure quality with monitoring
● Easy spotting of congestion problems
● Event monitoring (e.g. disk failure)
● Capacity management
17. What we did
● Disk monitoring with Icinga
● Collect data via the Ceph admin socket JSON interface
● Put it into Graphite
● Enrich it with metadata:
– OpenStack tenant
– Ceph node
– OSD
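The pipeline above can be sketched as follows: a JSON document in the shape returned by the Ceph admin socket (`ceph daemon osd.N perf dump`) is flattened into Graphite plaintext-protocol lines (`path value timestamp`), with the metric path enriched by tenant, node, and OSD metadata. The sample values and the exact metric-path layout are illustrative assumptions:

```python
import json

# Excerpt shaped like `ceph daemon osd.N perf dump` output (values made up).
perf_json = ('{"osd": {"op_r": 1200, "op_w": 340},'
             ' "filestore": {"journal_latency": {"avgcount": 12, "sum": 0.8}}}')

def flatten(prefix, obj):
    """Flatten nested dicts into dotted Graphite metric paths."""
    for key, value in obj.items():
        path = f"{prefix}.{key}"
        if isinstance(value, dict):
            yield from flatten(path, value)
        else:
            yield path, value

def graphite_lines(perf, tenant, node, osd_id, timestamp):
    """Prefix every metric with OpenStack tenant, Ceph node and OSD id,
    then render Graphite's plaintext protocol: 'path value timestamp'."""
    prefix = f"ceph.{tenant}.{node}.osd{osd_id}"
    return [f"{path} {value} {timestamp}"
            for path, value in flatten(prefix, json.loads(perf))]

lines = graphite_lines(perf_json, "tenant-a", "ceph01", 3, 1700000000)
```

Each rendered line could then be written to Graphite's plaintext listener (port 2003 by default); with the metadata baked into the path, Graphite can break out latency and throughput per tenant, per node, or per OSD.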