The document discusses persisting data in Kubernetes clusters using OpenEBS. It describes the OpenEBS components that enable persistent storage: the Maya API server, the Node Disk Manager (NDM), and the Local PV Provisioner. NDM discovers and manages block devices, the provisioner creates local persistent volumes, and the Maya API server extends the Kubernetes API for storage management. OpenEBS provides container-attached storage for stateful applications in otherwise ephemeral Kubernetes environments.
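As a sketch of how this looks in practice, a workload can request OpenEBS-provisioned local storage through an ordinary PersistentVolumeClaim. The `openebs-hostpath` StorageClass ships with the OpenEBS Local PV provisioner; the claim name and size below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc            # illustrative name
spec:
  storageClassName: openebs-hostpath   # StorageClass provided by OpenEBS Local PV
  accessModes:
    - ReadWriteOnce          # local volumes are bound to a single node
  resources:
    requests:
      storage: 5Gi           # illustrative size
```

A Pod referencing `demo-pvc` gets a hostpath-backed volume provisioned on the node where it is scheduled.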
- Concur is a travel and expense management company with 6500+ employees and offices worldwide. They process over 70 million transactions and $50 billion in travel and expense spend annually.
- The presenter is a Principal Architect at Concur who has been working with Kubernetes since 2015. He discusses why Concur chose Kubernetes and CoreOS for container orchestration.
- Concur runs multiple Kubernetes clusters across different regions for high availability. A custom tool called kube2cnqr manages load balancing between clusters.
The document discusses CoreOS's expertise across the technology stack for container-based applications. This includes Linux, container engines, container image specifications, clustered databases like etcd, cloud independence, identity federation, and more. CoreOS is focused on open standards through initiatives like the Open Container Initiative and ensuring technologies like Kubernetes, rkt, and etcd can scale to power large production deployments.
Presented by Giorgio Regni, CTO Try Scality S3 Server Today! https://s3.scality.com/ http://www.scality.com/scality-s3-server/ https://hub.docker.com/r/scality/s3server/
https://go.dok.community/slack https://dok.community/
ABSTRACT OF THE TALK
Complex computational workloads in Python are a common sight these days, especially in the context of processing large and complex datasets. Battle-hardened modules such as Numpy, Pandas, and Scikit-Learn can perform low-level tasks, while tools like Dask make it easy to parallelize these workloads across distributed computational environments. Meanwhile, Argo Workflows offers a Kubernetes-native solution for provisioning cloud resources in Kubernetes and triggering workflows on a regular schedule. Being Kubernetes-native, Argo Workflows also meshes nicely with other Kubernetes tools. This talk combines these two worlds by showcasing a set-up for Argo-managed workflows that schedule and automatically scale out Dask-powered data pipelines in Python.
BIO
Former academic in the field of renewable energy simulation and energy systems analysis. Currently responsible for architecting and maintaining the cloud and data strategy at ACCURE Battery Intelligence.
KEY TAKE-AWAYS FROM THE TALK
Argo Workflows + Dask is a nice combination for data-processing pipelines. There are a few "gotchas" to be on the look-out for, but nevertheless this is still a generally applicable and powerful combination. https://github.com/sevberg
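The scheduling half of this combination can be sketched as an Argo CronWorkflow that runs a containerized Python pipeline on a fixed schedule. The image name, script, and schedule below are hypothetical placeholders, not from the talk:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: dask-pipeline          # illustrative name
spec:
  schedule: "0 2 * * *"        # run nightly at 02:00 (assumed schedule)
  workflowSpec:
    entrypoint: run-pipeline
    templates:
      - name: run-pipeline
        container:
          image: registry.example.com/dask-pipeline:latest  # hypothetical image
          command: ["python", "pipeline.py"]                # hypothetical entry script
```

Inside the container, the Python code would typically connect to (or spin up) a Dask cluster and hand the heavy lifting to Dask workers, which can themselves be scaled out on Kubernetes.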
From NOVA Cloud and Software Engineering Group meetup, Feb. 17, 2021 https://youtu.be/a5uPm1mPLKQ. Hardening a Kubernetes cluster happens at different levels. We have to examine the nodes where Kubernetes is running. We want to secure the Kubernetes objects and workloads and review the files we used to create them. And we need to look for vulnerabilities in the containers we are using. Gene will show you some open-source tools that can find issues and vulnerabilities at each layer. All of them can be used in a pipeline to build your Kubernetes cluster safely and keep it secure. Gene Gotimer is the meetup organizer and a DevSecOps Senior Engineer at Steampunk, focusing on agile processes, secure development practices, and automation. Gene feels strongly that repeatability, quality, and security are tightly intertwined; each depends on the other two, making agile and DevSecOps that much more crucial to software development.
This presentation was made as part of the Container Conference 2018 - www.containerconf.in "Containers have gained a lot of attention ever since they came into existence. And why not? With the speed and ease they provide for running user applications, they are definitely the preferred solution for many real-world use cases. OpenStack, on the other hand, is a cloud solution that has always evolved to support newer technologies. OpenStack has many projects around containers that try to cater to practical use cases. Some of the real-world use cases OpenStack fulfills are: OpenStack deployment can be very complex, and so can its upgrades; OpenStack-Helm, TripleO, and Kolla use Kubernetes and Docker to help users easily deploy and upgrade their cloud. Containers lack the security of VMs, and many users want to run their applications in a secure environment; OpenStack Zun enables Clear Containers and Kata Containers, which provide the security of VMs with the speed of containers. Other use cases include running Kubernetes clusters on OpenStack, CI/CD, and managing applications as microservices, which can be done by Magnum, Zuul, and Zun respectively. In this presentation, we will talk about the practical use cases where containers can help us and what OpenStack provides to fulfill those requirements."
Autoscaling of workloads in the Kubernetes environment: a slide deck about Pod and Node autoscaling and the machinery that makes it happen, with a few recommendations for implementing Pod and Node autoscaling.
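The most common form of Pod autoscaling is the HorizontalPodAutoscaler, which adjusts a Deployment's replica count based on observed metrics. The target name, replica bounds, and CPU threshold below are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:            # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Node autoscaling is handled separately (e.g. by the Cluster Autoscaler), which adds nodes when Pods cannot be scheduled and removes underutilized ones.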
This document discusses Scality's experiences building their first Node.js project. It summarizes that the project was building a TiVo-like cloud service for 25 million users, which required high parallelism and throughput of terabytes per second. It also discusses lessons learned around logging performance, optimizing the event loop and buffers, and useful Node.js tools.
eBay is one of the largest OpenStack based Clouds in the world. As eBay evolves into the world of Containers and Microservices, Kubernetes is quickly becoming a key platform. This talk is about how we applied our learnings from OpenStack to build a framework for managing life-cycle of Kubernetes at scale.
Get started with the Elastic Stack on Kubernetes using Elastic Cloud on Kubernetes (ECK), built on the operator pattern to handle version upgrades, configuration changes, high availability, security, and more.
This document provides an agenda and overview of Kafka on Kubernetes. It begins with an introduction to Kafka fundamentals and messaging systems. It then discusses key ideas behind Kafka's architecture like data parallelism and batching. The rest of the document explains various Kafka concepts in detail like topics, partitions, producers, consumers, and replication. It also introduces Kubernetes concepts relevant for running Kafka like StatefulSets, StorageClasses and the operator pattern. The goal is to help understand how to build event-driven systems using Kafka and deploy it on Kubernetes.
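A hedged sketch of why StatefulSets matter for Kafka: each broker needs a stable network identity and its own persistent volume, which `serviceName` and `volumeClaimTemplates` provide. The image, StorageClass name, and sizes below are assumptions for illustration, not a production configuration:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless   # headless Service gives each broker a stable DNS name
  replicas: 3                   # three brokers, kafka-0 through kafka-2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: apache/kafka:latest   # illustrative; real setups pin a version
          ports:
            - containerPort: 9092
          volumeMounts:
            - name: data
              mountPath: /var/lib/kafka
  volumeClaimTemplates:          # one PVC per broker, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd    # assumed StorageClass
        resources:
          requests:
            storage: 100Gi            # illustrative size
```

In practice a Kafka operator generates and manages manifests like this, along with broker configuration and rolling upgrades.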
The combination of Docker and Kubernetes is quickly becoming the de-facto standard for building Microservices. Whether you are a developer or an architect, you need to know how to bundle your application into Containers and Pods. Docker and Kubernetes give a lot of good features out of the box. To leverage these features effectively, you need to know how to use them, the commonly used Pod design patterns, and the best practices. In this webinar, we will explore such questions and their answers along with appropriate examples. Some of those questions would be:
1. When and how to build multi-container pods?
2. What are some of the well-adopted design patterns for pods?
3. What are some multi-pod design patterns?
4. How to use Lifecycle hooks, Init Containers and Health probes?
Github repo - https://github.com/ashishrpandey/pod-design-pattern-webinar
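A minimal sketch combining several of these ideas: a multi-container Pod with an init container that waits for a dependency, a readiness probe on the main container, and a log-shipping sidecar sharing a volume. All names, images, and endpoints are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  initContainers:
    - name: wait-for-db            # init container: must finish before app starts
      image: busybox:1.36
      command: ["sh", "-c", "until nc -z db 5432; do sleep 2; done"]
  containers:
    - name: app
      image: example.com/my-app:1.0   # hypothetical application image
      readinessProbe:                 # Pod only receives traffic once this passes
        httpGet:
          path: /healthz              # assumed health endpoint
          port: 8080
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper               # sidecar pattern: tails logs the app writes
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}                    # shared scratch volume, lives with the Pod
```

The shared `emptyDir` volume is what makes the sidecar pattern work: both containers see the same files while keeping separate images and lifecycles.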
AKS reduces the complexity of managing Kubernetes by offloading operations to Azure. It allows easy creation and management of Kubernetes clusters through simple CLI commands. AKS supports advanced networking features in Azure like VNET integration and ingress controllers. It also enables integration with other Azure services for storage, databases, and monitoring through open service brokers.
This document discusses using Kubernetes as an underlay platform for OpenStack. Some key points:
1. Kubernetes is becoming more widely used and understood by operators compared to OpenStack. Using Kubernetes as an underlay could improve simplicity, stability, and upgrade processes for OpenStack.
2. There are still many technical challenges to address, such as networking, storage, tooling to manage OpenStack on Kubernetes, and ensuring containers meet Kubernetes' immutable infrastructure requirements.
3. Using Kubernetes as an underlay risks further confusing the messaging around OpenStack by implying Kubernetes is more stable or a replacement target. Clear communication will be important to avoid undermining OpenStack.
Agenda
1. The changing landscape of IT Infrastructure
2. Containers - An introduction
3. Container management systems
4. Kubernetes
5. Containers and DevOps
6. Future of Infrastructure Management
About the talk
In this talk, you will get a review of the components and benefits of the container technologies Docker and Kubernetes. The talk focuses on making the solution platform-independent and gives an insight into Docker and Kubernetes for consistent and reliable deployment. We talk about how containers fit into and improve your DevOps ecosystem and how to get started with containerization. Learn a new deployment approach that uses your infrastructure resources effectively to minimize overall cost.
The container orchestration war is over and Kubernetes has become the mainstream choice; AWS, Azure, and GCP all offer corresponding managed services. Should you use a vendor's managed offering or run your own clusters? How do you run stateless, or even stateful, applications on Kubernetes? What is a good way to deploy applications to it? This talk shares experience from introducing and operating Kubernetes at several companies, helping newcomers and interested practitioners shorten the learning curve.
The document discusses using Kubernetes as an orchestrator for the A10 Lightning Controller. Some key points:
1) Kubernetes allows automatic recovery of pods on failure, easy rolling upgrades of code, and automated scaling of microservices.
2) Kubernetes allows the controller to be deployed on-premise and scaled across multiple VMs, with automated launching and scaling. Installation is also now independent of the underlying infrastructure.
3) The journey involved moving from a manual deployment to a Kubernetes deployment, which simplified overlay networking, environment-variable passing, and adding or replacing nodes.
VMUGIT Meeting - Lecce, April 5, 2018. Fabio Rapposelli, VMware Staff Engineer, Cloud Native Apps - An introduction to Kubernetes and Cloud Native workloads.