Kubernetes is an open-source platform for managing containerized applications across multiple hosts. It provides tools for deployment, scaling, and management of containers. Kubernetes handles tasks like scheduling containers on nodes, scaling resources, applying security policies, and monitoring applications. It ensures containers are running and automatically restarts them if they fail.
This document provides an overview of Kubernetes basics. It introduces Kubernetes as an open source container orchestration tool developed by Google to manage the lifecycle of containers. It describes common Kubernetes concepts like pods, deployments, services, and how to install Kubernetes on local, on-premise and cloud environments. It also covers important topics for production use such as health checks, resource restrictions, logging, monitoring and alerts.
Free GitOps Workshop + Intro to Kubernetes & GitOps
Follow along in this free workshop and experience GitOps!
AGENDA:
Welcome - Tamao Nakahara, Head of DX (Weaveworks)
Introduction to Kubernetes & GitOps - Mark Emeis, Principal Engineer (Weaveworks)
Weave Gitops Overview - Tamao Nakahara
Free Gitops Workshop - David Harris, Product Manager (Weaveworks)
If you're new to Kubernetes and GitOps, we'll give you a brief introduction to both and how GitOps is the natural evolution of Kubernetes.
Weave GitOps Core is a continuous delivery product to run apps in any Kubernetes. It is free and open source, and you can get started today!
https://www.weave.works/product/gitops-core
If you’re stuck, also come talk to us at our Slack channel! #weave-gitops http://bit.ly/WeaveGitOpsSlack (If you need to invite yourself to the Slack, visit https://slack.weave.works/)
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery called pods. Kubernetes can manage pods across a cluster of machines, providing scheduling, deployment, scaling, load balancing, volume mounting and networking. It is widely used by organizations such as Google and CERN, in large projects such as image processing and particle-interaction analysis. Kubernetes is portable, can span multiple cloud providers, and continues growing to support new workloads and use cases.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It coordinates activities across a cluster of machines by defining basic building blocks like pods (which contain containers), replication controllers (which ensure a specified number of pods are running), and services (which define logical groups of pods). Kubernetes provides tools for running applications locally on a single node as well as managing resources in the cluster, including creating, deleting, viewing, and updating resources from configuration files.
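As a sketch of those building blocks, a minimal pod manifest might look like the following (the pod name and image are illustrative assumptions, not taken from the slides):

```yaml
# pod.yaml — a hypothetical single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25   # example image
      ports:
        - containerPort: 80
```

Such a file is typically created with `kubectl apply -f pod.yaml`, inspected with `kubectl get pods`, and removed with `kubectl delete -f pod.yaml` — the create/view/update/delete workflow from configuration files described above.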
A basic introductory slide set on Kubernetes: what Kubernetes does, what Kubernetes does not do, which terms are used (Containers, Pods, Services, Replica Sets, Deployments, etc.) and how basic interaction with a Kubernetes cluster is done.
This document provides an overview of Kubernetes including:
- Kubernetes is an open source system for managing containerized applications and services across clusters of hosts. It provides tools to deploy, maintain, and scale applications.
- Kubernetes objects include pods, services, deployments, jobs, and others to define application components and how they relate.
- The Kubernetes architecture consists of a control plane running on the master including the API server, scheduler and controller manager. Nodes run the kubelet and kube-proxy to manage pods and services.
- Kubernetes can be deployed on AWS using tools like CloudFormation templates to automate cluster creation and management for high availability and scalability.
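One of the objects listed above, a deployment, can be sketched as a manifest declaring a desired replica count (all names here are hypothetical):

```yaml
# deployment.yaml — a hypothetical three-replica deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
```

The controller manager on the control plane continuously reconciles the cluster toward the three replicas declared here, restarting or rescheduling pods as needed.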
This document provides an overview of Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications. It describes Kubernetes' architecture including nodes, pods, replication controllers, services, and networking. It also discusses how to set up Kubernetes environments using Minikube or kubeadm and get started deploying pods and services.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It was originally developed by Google based on years of experience running production workloads at scale. Kubernetes groups containers into logical units called pods and handles tasks like scheduling, health checking, scaling and rollbacks. The main components include a master node that manages the cluster and worker nodes that run application containers scheduled by the master.
This document provides an introduction to Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications. It first reviews what Docker is and its features like isolation and compatibility across platforms. It then explains that container orchestration is needed to manage thousands of containers across a cluster, ensure efficient resource use, and automate container lifecycles. Kubernetes is recommended because it is actively developed by major companies, makes scheduling and managing workloads easy through features like rolling updates, and has many extensions available.
Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery.
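The label-and-selector grouping described above can be sketched with a service that selects pods by label (service and label names are illustrative assumptions):

```yaml
# service.yaml — a hypothetical service grouping pods by label
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend      # matches any pod labeled app=frontend
  ports:
    - port: 80         # port exposed by the service
      targetPort: 8080 # port the pod's container listens on
```

Any pod carrying the `app: frontend` label is automatically discovered and load-balanced behind this service, which is the "logical units for easy management and discovery" idea in practice.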
- Archeology: before and without Kubernetes
- Deployment: kube-up, DCOS, GKE
- Core Architecture: the apiserver, the kubelet and the scheduler
- Compute Model: the pod, the service and the controller
This presentation covers how the application deployment model evolved from bare-metal servers to the Kubernetes world.
In addition to the theoretical material, you will find URLs for free Katacoda workshops with hands-on exercises covering the details of each topic.
In this session, we will discuss the architecture of a Kubernetes cluster. We will go through all the master and worker components of a Kubernetes cluster, and cover the basic terminology such as Pods, Deployments, and Services. We will also cover networking inside Kubernetes. In the end, we will discuss the options available for setting up a Kubernetes cluster.
This document discusses microservices and containers, and how Kubernetes can be used for container orchestration. It begins with an overview of microservices and the benefits of breaking monolithic applications into independent microservices. It then discusses how containers can be used to package and deploy microservices, and introduces Docker as a container platform. Finally, it explains that as container usage grows, an orchestrator like Kubernetes is needed to manage multiple containers and microservices, and provides a high-level overview of Kubernetes' architecture and capabilities for scheduling, self-healing, scaling, and other management of containerized applications.
1. Minikube allows users to run a single-node Kubernetes cluster locally for development and testing. It provisions and manages a virtual machine running Kubernetes.
2. The steps to set up Minikube on Windows are to install Oracle VirtualBox, enable virtualization in BIOS, install Minikube and Kubectl, and start the Minikube cluster.
3. Some basic operations with Minikube include starting and stopping the cluster, getting the cluster status and IP, creating deployments and exposing services, and checking pod statuses. The Kubernetes dashboard is also available.
This document introduces Docker Compose, which allows defining and running multi-container Docker applications. It discusses that Docker Compose uses a YAML file to configure and run multi-service Docker apps. The 3 steps are to define services in a Dockerfile, define the app configuration in a Compose file, and run the containers with a single command. It also covers topics like networking, environment variables, and installing Docker Compose. Hands-on labs are provided to learn Compose through examples like WordPress.
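As a sketch of that three-step workflow, a minimal Compose file for the WordPress example mentioned above might look like this (image tags, the published port, and the password are placeholder assumptions):

```yaml
# docker-compose.yml — hypothetical two-service WordPress app
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example      # placeholder secret, not for production
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql          # persist database files
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"                       # host port 8080 → container port 80
    environment:
      WORDPRESS_DB_HOST: db             # service name doubles as hostname
      WORDPRESS_DB_PASSWORD: example
volumes:
  db_data:
```

A single `docker compose up` then builds the network, volume, and both containers — the "run with a single command" step from the document.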
Kubernetes (commonly referred to as "K8s") is an open-source system for automating deployment, scaling and management of containerized applications. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". We will see Kubernetes architecture, use cases, basics and a live demo.
This document provides an overview of containers, Kubernetes, and their key concepts. It discusses how Kubernetes manages containerized applications across clusters and abstracts away infrastructure details. The main components of Kubernetes include Pods (groups of tightly-coupled containers), ReplicationControllers (manages Pod replicas), Services (expose Pods to external traffic), and Namespaces (logical isolation of clusters). Kubernetes architecture separates the control plane running on the master from the nodes that run container workloads.
Kubernetes is an open-source container orchestration system that automates deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes services handle load balancing, networking, and execution of containers across a cluster of nodes. It addresses challenges in managing containers at scale through features like deployment and rolling update of containers, self-healing, resource allocation and monitoring.
Here is something to get you started with Kubernetes Container Orchestration.
Crevise Technologies has executed several projects to containerize applications for our clients. We have engineers with expertise in Kubernetes, Docker, OpenShift, Amazon ECS, etc.
Docker uses a client-server architecture with a Docker client communicating with the Docker daemon. The daemon manages Docker objects like images, containers, networks and volumes. Kubernetes is an open-source system that automates deployment, scaling, and management of containerized applications. It ensures containers run as expected and acquires necessary resources. Key Kubernetes components include pods, deployments, services, nodes, and the control plane which manages the cluster.
Container orchestration engine for automating deployment, scaling, and management of containerized applications.
What are Microservices?
What is container?
What is Containerization?
What is Docker?
This document provides an overview of Kubernetes concepts including:
- Kubernetes architecture with masters running control plane components like the API server, scheduler, and controller manager, and nodes running pods and node agents.
- Key Kubernetes objects like pods, services, deployments, statefulsets, jobs and cronjobs that define and manage workloads.
- Networking concepts like services for service discovery, and ingress for external access.
- Storage with volumes, persistentvolumes, persistentvolumeclaims and storageclasses.
- Configuration with configmaps and secrets.
- Authentication and authorization using roles, rolebindings and serviceaccounts.
It also discusses Kubernetes installation with Minikube, and common networking and deployment topics.
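A sketch tying two of the configuration and storage objects above together — a config map alongside a persistent volume claim (names, keys, and sizes are illustrative assumptions):

```yaml
# config-and-storage.yaml — hypothetical ConfigMap and PVC
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"        # plain key/value config consumed by pods
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]   # mountable read-write by one node
  resources:
    requests:
      storage: 1Gi                 # a storageclass satisfies this claim
```

A pod would reference `app-config` via `envFrom`/`configMapRef` and mount `app-data` as a volume, while a storageclass provisions the backing persistentvolume.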
The document provides an overview of Azure Kubernetes Service (AKS) including:
- AKS simplifies deployment, management, scaling and monitoring of containerized applications on Kubernetes.
- AKS uses a master-worker node architecture with master nodes managing the cluster state and worker nodes running application containers.
- Key AKS concepts include clusters, pods, deployments, replica sets, and services.
- The AKS architecture includes etcd, kube-apiserver, controller manager, kube-scheduler and cloud controller manager on the master node, and kubelet, container runtime and kube-proxy on worker nodes.
- Applications can be deployed to AKS through Kubernetes manifest files.
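For example, exposing an application on AKS is commonly done with a LoadBalancer-type service manifest along these lines (the names here are illustrative assumptions):

```yaml
# aks-service.yaml — hypothetical external exposure on AKS
apiVersion: v1
kind: Service
metadata:
  name: aks-web
spec:
  type: LoadBalancer   # AKS's cloud controller manager provisions an Azure load balancer
  selector:
    app: web
  ports:
    - port: 80
```

This is where the cloud controller manager listed in the architecture above comes into play: it translates the generic `LoadBalancer` request into Azure-specific infrastructure.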
Containers are changing the compute landscape, and for NFVi, support for containers is key. Kubernetes is a well-known container cluster management system, and this is the slide deck from a talk given at the OpenDaylight Summit 2016. The deck gives insight into microservice architecture, Kubernetes, and how it can be integrated with ODL. The session video can be found at https://www.youtube.com/watch?v=a4_pkp2qiX8&list=PL8F5jrwEpGAiRCzJIyboA8Di3_TAjTT-2
These slides were used during a technical session for the Cloud-Native El Salvador community. It covers the basic Kubernetes components, some installers, and the main Kubernetes resources. The demo used the capabilities provided by the Horizontal Pod Autoscaler.
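The Horizontal Pod Autoscaler used in that demo is configured with a manifest along these lines (the target deployment name, replica bounds, and CPU threshold are illustrative assumptions):

```yaml
# hpa.yaml — hypothetical CPU-based autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment     # the workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

The HPA controller periodically compares observed CPU utilization against the target and adjusts the deployment's replica count between the declared bounds.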
Overview of kubernetes and its use as a DevOps cluster management framework.
Problems with deployment via kube-up.sh, and improving Kubernetes on AWS via a custom CloudFormation template.
Docker allows developers to build, deploy, run, update and manage containers. Containers are more lightweight than virtual machines and allow multiple applications to run on a single operating system kernel. Kubernetes is an open source tool for managing containerized workloads and services across clusters of hosts. It provides capabilities for self-healing, auto-scaling and load balancing of containers. K3s is a lightweight Kubernetes distribution designed for resource-constrained edge and IoT environments.
This document provides an overview of Kubernetes, including its architecture, components, concepts, and configuration. It describes that Kubernetes is an open-source container orchestration system designed by Google to manage containerized applications across multiple hosts. The key components include the master nodes which run control plane components like the API server, scheduler, and controller manager, and worker nodes which run the kubelet and containers. It also explains concepts like pods, services, deployments, networking, storage, and role-based access control (RBAC).
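The role-based access control (RBAC) concept mentioned above is expressed with role and role-binding objects, for example (namespace, role, and service-account names are hypothetical):

```yaml
# rbac.yaml — hypothetical read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
  - apiGroups: [""]              # "" = the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role defines what is allowed; the RoleBinding grants those permissions to a subject, here a service account used by workloads in the `demo` namespace.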
This document provides an introduction and overview of Kubernetes. It discusses that Kubernetes is an open-source system for managing containerized applications across multiple hosts. It supports various cloud providers and container platforms. Kubernetes provides self-healing capabilities to automatically place, restart, and replicate applications. The document outlines key Kubernetes concepts like masters, minions, pods, services and labels. It provides examples of simple pod and replication controller configurations. It also gives a high-level overview of the Kubernetes architecture and components.
This document discusses Docker and Azure Kubernetes Service (AKS). It provides an overview of containers and how Docker is a leading containerization platform. It describes how AKS uses Kubernetes for container orchestration to facilitate deployment, scaling, and management of containers across a cluster of virtual machines. Real-world use cases show how Docker and AKS can enable microservices architectures and support DevOps practices for faster software delivery.
Kubernetes provides logical abstractions for deploying and managing containerized applications across a cluster. The main concepts include pods (groups of containers), controllers that ensure desired pod states are maintained, services for exposing pods, and deployments for updating replicated pods. Kubernetes allows defining pod specifications that include containers, volumes, probes, restart policies, and more. Controllers like replica sets ensure the desired number of pod replicas are running. Services provide discovery of pods through labels and load balancing. Deployments are used to declaratively define and rollout updates to replicated applications.
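A pod specification combining the pieces listed above — container, probe, and restart policy — could be sketched as follows (the image and probe endpoint are assumptions):

```yaml
# probed-pod.yaml — hypothetical pod with a liveness probe
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  restartPolicy: Always    # restart the container whenever it exits
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /          # kubelet polls this endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```

If the probe fails repeatedly, the kubelet kills the container and the restart policy brings it back — the "ensure desired pod states are maintained" behavior described above.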
Momentum around containers has grown steadily over the last two years. Containers virtualize an OS, and applications running in each container believe they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware; in the case of containers, it is the OS that does the virtualization and maintains the illusion.
In the recent past, many software companies have quickly adopted container technologies, including Docker containers, aware of both the threat and the advantage of the approach. Linux companies have jumped in as well, seeing this as an opportunity to grow the Linux market; Microsoft is adding features to support containers, and VMware has made efforts to integrate Docker support into its virtual machine technology.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery called pods. Kubernetes masters manage the cluster and make scheduling decisions while nodes run the pods and containers. It uses labels and selectors to identify and group related application objects together. Services provide a single endpoint for pods, while deployments help manage replicated applications. Kubernetes provides mechanisms for storage, configuration, networking, security and other functionality to help run distributed systems reliably at scale.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery called pods. Its main components include a master node that manages the cluster and worker nodes that run the applications. It uses labels to identify pods and services and selectors to group related pods. Common concepts include deployments for updating apps, services for network access, persistent volumes for storage, and roles/bindings for access control. The deployment process involves the API server, controllers, scheduler and kubelet to reconcile the desired state and place pods on nodes from images while providing discovery and load balancing.
A brief study on Kubernetes and its components, by Ramit Surana
Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users declared intentions. Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery.
- Archeology: before and without Kubernetes
- Deployment: kube-up, DCOS, GKE
- Core Architecture: the apiserver, the kubelet and the scheduler
- Compute Model: the pod, the service and the controller
This presentation covers how the application deployment model evolved from bare-metal servers to the Kubernetes world.
In addition to the theoretical material, you will find URLs to free Katacoda workshops with hands-on exercises covering the details of each topic.
In this session, we will discuss the architecture of a Kubernetes cluster. We will go through all the master and worker components of a Kubernetes cluster, and cover basic Kubernetes terminology such as Pods, Deployments, and Services. We will also cover networking inside Kubernetes. Finally, we will discuss the options available for setting up a Kubernetes cluster.
This document discusses microservices and containers, and how Kubernetes can be used for container orchestration. It begins with an overview of microservices and the benefits of breaking monolithic applications into independent microservices. It then discusses how containers can be used to package and deploy microservices, and introduces Docker as a container platform. Finally, it explains that as container usage grows, an orchestrator like Kubernetes is needed to manage multiple containers and microservices, and provides a high-level overview of Kubernetes' architecture and capabilities for scheduling, self-healing, scaling, and other management of containerized applications.
1. Minikube allows users to run a single-node Kubernetes cluster locally for development and testing. It provisions and manages a virtual machine running Kubernetes.
2. The steps to set up Minikube on Windows are to install Oracle VirtualBox, enable virtualization in BIOS, install Minikube and Kubectl, and start the Minikube cluster.
3. Some basic operations with Minikube include starting and stopping the cluster, getting the cluster status and IP, creating deployments and exposing services, and checking pod statuses. The Kubernetes dashboard is also available.
This document introduces Docker Compose, which allows defining and running multi-container Docker applications. It discusses that Docker Compose uses a YAML file to configure and run multi-service Docker apps. The 3 steps are to define services in a Dockerfile, define the app configuration in a Compose file, and run the containers with a single command. It also covers topics like networking, environment variables, and installing Docker Compose. Hands-on labs are provided to learn Compose through examples like WordPress.
Kubernetes (commonly referred to as "K8s") is an open-source system for automating deployment, scaling, and management of containerized applications. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". We will cover Kubernetes architecture, use cases, and basics, with a live demo.
This document provides an overview of containers, Kubernetes, and their key concepts. It discusses how Kubernetes manages containerized applications across clusters and abstracts away infrastructure details. The main components of Kubernetes include Pods (groups of tightly-coupled containers), ReplicationControllers (manages Pod replicas), Services (expose Pods to external traffic), and Namespaces (logical isolation of clusters). Kubernetes architecture separates the control plane running on the master from the nodes that run container workloads.
Kubernetes is an open-source container orchestration system that automates deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes services handle load balancing, networking, and execution of containers across a cluster of nodes. It addresses challenges in managing containers at scale through features like deployment and rolling update of containers, self-healing, resource allocation and monitoring.
Here is something to get you started with Kubernetes Container Orchestration.
Crevise Technologies has executed several projects to containerize applications for our clients. We have engineers who have expertise in Kubernetes, Docker, Openshift, Amazon ECS etc.
Cloud technology with practical knowledge, by AnshikaNigam8
Docker uses a client-server architecture with a Docker client communicating with the Docker daemon. The daemon manages Docker objects like images, containers, networks and volumes. Kubernetes is an open-source system that automates deployment, scaling, and management of containerized applications. It ensures containers run as expected and acquires necessary resources. Key Kubernetes components include pods, deployments, services, nodes, and the control plane which manages the cluster.
Container orchestration engine for automating deployment, scaling, and management of containerized applications.
What are Microservices?
What is container?
What is Containerization?
What is Docker?
This document provides an overview of Kubernetes concepts including:
- Kubernetes architecture with masters running control plane components like the API server, scheduler, and controller manager, and nodes running pods and node agents.
- Key Kubernetes objects like pods, services, deployments, statefulsets, jobs and cronjobs that define and manage workloads.
- Networking concepts like services for service discovery, and ingress for external access.
- Storage with volumes, persistentvolumes, persistentvolumeclaims and storageclasses.
- Configuration with configmaps and secrets.
- Authentication and authorization using roles, rolebindings and serviceaccounts.
It also discusses Kubernetes installation with minikube, and common networking and deployment
The document provides an overview of Azure Kubernetes Service (AKS) including:
- AKS simplifies deployment, management, scaling and monitoring of containerized applications on Kubernetes.
- AKS uses a master-worker node architecture with master nodes managing the cluster state and worker nodes running application containers.
- Key AKS concepts include clusters, pods, deployments, replica sets, and services.
- The AKS architecture includes etcd, kube-apiserver, controller manager, kube-scheduler and cloud controller manager on the master node, and kubelet, container runtime and kube-proxy on worker nodes.
- Applications can be deployed to AKS through Kubernetes manifest files.
Containers are changing the compute landscape, and for NFVi, support for containers is key. Kubernetes is a well-known container cluster management system, and this is the slide deck from a talk given at the OpenDaylight Summit 2016. The slides give an insight into microservice architecture, Kubernetes, and how it can be integrated with ODL. The session video can be found at https://www.youtube.com/watch?v=a4_pkp2qiX8&list=PL8F5jrwEpGAiRCzJIyboA8Di3_TAjTT-2
These slides were used during a technical session for the Cloud-Native El Salvador community. They cover the basic Kubernetes components, some installers, and the main Kubernetes resources. The demo used the capabilities provided by the Horizontal Pod Autoscaler.
Overview of Kubernetes and its use as a DevOps cluster-management framework.
Problems with deployment via kube-up.sh, and improving Kubernetes on AWS via a custom CloudFormation template.
Docker allows developers to build, deploy, run, update and manage containers. Containers are more lightweight than virtual machines and allow multiple applications to run on a single operating system kernel. Kubernetes is an open source tool for managing containerized workloads and services across clusters of hosts. It provides capabilities for self-healing, auto-scaling and load balancing of containers. K3s is a lightweight Kubernetes distribution designed for resource-constrained edge and IoT environments.
This document provides an overview of Kubernetes, including its architecture, components, concepts, and configuration. It describes that Kubernetes is an open-source container orchestration system designed by Google to manage containerized applications across multiple hosts. The key components include the master nodes which run control plane components like the API server, scheduler, and controller manager, and worker nodes which run the kubelet and containers. It also explains concepts like pods, services, deployments, networking, storage, and role-based access control (RBAC).
This document provides an introduction and overview of Kubernetes. It discusses that Kubernetes is an open-source system for managing containerized applications across multiple hosts. It supports various cloud providers and container platforms. Kubernetes provides self-healing capabilities to automatically place, restart, and replicate applications. The document outlines key Kubernetes concepts like masters, minions, pods, services and labels. It provides examples of simple pod and replication controller configurations. It also gives a high-level overview of the Kubernetes architecture and components.
Docker and Azure Kubernetes service.pptx, by ArzitPanda
This document discusses Docker and Azure Kubernetes Service (AKS). It provides an overview of containers and how Docker is a leading containerization platform. It describes how AKS uses Kubernetes for container orchestration to facilitate deployment, scaling, and management of containers across a cluster of virtual machines. Real-world use cases show how Docker and AKS can enable microservices architectures and support DevOps practices for faster software delivery.
Kubernetes provides logical abstractions for deploying and managing containerized applications across a cluster. The main concepts include pods (groups of containers), controllers that ensure desired pod states are maintained, services for exposing pods, and deployments for updating replicated pods. Kubernetes allows defining pod specifications that include containers, volumes, probes, restart policies, and more. Controllers like replica sets ensure the desired number of pod replicas are running. Services provide discovery of pods through labels and load balancing. Deployments are used to declaratively define and rollout updates to replicated applications.
Momentum around containers has been gradually increasing over the last two years. Containers virtualize an OS, and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware; in the case of containers, it is the OS that does the virtualization and maintains the illusion.
In the recent past, many software companies have quickly adopted container technologies, including Docker containers, aware of both the threat and the advantage of the approach. Linux companies, for example, have jumped in, seeing this as an opportunity to grow the Linux market. Microsoft is adding features to support containers, and VMware has made efforts to integrate Docker support into its virtual machine technology.
01. Kubernetes-PPT.pptx
2. What is Kubernetes?
• Kubernetes is an orchestration engine and open-source platform for managing containerized applications.
• Its responsibilities include container deployment, scaling and descaling of containers, and container load balancing.
• Kubernetes is not a replacement for Docker, but it can be considered a replacement for Docker Swarm. Kubernetes is significantly more complex than Swarm and requires more work to deploy.
• Born at Google, written in Go/Golang. Donated to the CNCF (Cloud Native Computing Foundation) in 2014.
• Kubernetes v1.0 was released on July 21, 2015.
• Current stable release: v1.18.0.
3. Kubernetes Features
The features of Kubernetes are as follows:
• Automated scheduling: Kubernetes provides an advanced scheduler to launch containers on cluster nodes based on their resource requirements and other constraints, without sacrificing availability.
• Self-healing capabilities: Kubernetes replaces and reschedules containers when nodes die. It also kills containers that don't respond to user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
• Automated rollouts & rollbacks: Kubernetes rolls out changes to the application or its configuration while monitoring application health, ensuring it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes lets you roll back the change.
• Horizontal scaling & load balancing: Kubernetes can scale an application up and down as requirements change, with a simple command, via a UI, or automatically based on CPU usage.
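The rollout, rollback, and self-healing features above are typically driven by a Deployment object. As a minimal sketch (the name `web`, the label `app: web`, and the `nginx:1.25` image are illustrative choices, not from the deck):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # Kubernetes keeps 3 pods running (self-healing)
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # never takes down all instances at once
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # placeholder image
        resources:
          requests:
            cpu: 100m       # informs the scheduler's placement decision
```

Applied with `kubectl apply -f deployment.yaml`; if a rollout misbehaves, `kubectl rollout undo deployment/web` reverts to the previous revision.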
4. Kubernetes Features
5. Service Discovery & Load Balancing
With Kubernetes, there is no need to worry about networking and communication: Kubernetes automatically assigns IP addresses to containers, and a single DNS name to a set of containers, and can load-balance traffic across them inside the cluster. Because containers get their own IPs, you can put a set of containers behind a single DNS name for load balancing.
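A Service is the object that gives a set of pods that single DNS name and a load-balanced virtual IP. A hedged sketch (the `app: web` label and the port numbers are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # reachable in-cluster via the DNS name "web"
spec:
  selector:
    app: web           # traffic is load-balanced across pods carrying this label
  ports:
  - port: 80           # port the service listens on
    targetPort: 8080   # port the containers actually serve on
```

Pods elsewhere in the cluster can then reach the whole group at `http://web` without knowing individual pod IPs.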
6. Storage Orchestration
With Kubernetes, you can mount the storage system of your choice. You can opt for local storage, choose a public cloud provider such as GCP or AWS, or use a shared network storage system such as NFS or iSCSI.
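In practice, storage is usually requested through a PersistentVolumeClaim and mounted into a pod; the claim is satisfied by whichever backend (local disk, cloud disk, NFS, iSCSI, ...) the cluster provides. A sketch, with placeholder names, image, and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi            # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.25         # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data        # storage appears here inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim   # bound to whatever backend satisfies the claim
```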
5. Container orchestration
• Container orchestration automates the deployment, management, scaling, and networking of containers across the cluster. It is focused on managing the life cycle of containers.
• Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
• Container orchestration is used to automate the following tasks at scale:
- Configuring and scheduling of containers
- Provisioning and deployment of containers
- Redundancy and availability of containers
- Scaling up or removing containers to spread application load evenly across the host infrastructure
- Moving containers from one host to another if there is a shortage of resources on a host, or if a host dies
- Allocating resources between containers
- Exposing services running in a container to the outside world
- Load balancing and service discovery between containers
- Health monitoring of containers and hosts
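The scaling task in the list above is commonly automated with a HorizontalPodAutoscaler, which adds or removes pod replicas to hold a target resource utilization. A sketch, assuming a Deployment named `web` exists (the name and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add/remove pods to hold ~70% average CPU
```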
6. Swarm vs Kubernetes
Installation & cluster configuration
- Kubernetes: installation is complicated, but once set up, the cluster is very strong.
- Docker Swarm: installation is very simple, but the cluster is not very strong.
GUI
- Kubernetes: the Kubernetes Dashboard.
- Docker Swarm: there is no GUI.
Scalability
- Kubernetes: highly scalable and scales fast.
- Docker Swarm: highly scalable, and scales 5x faster than Kubernetes.
Auto-scaling
- Kubernetes: can do auto-scaling.
- Docker Swarm: cannot do auto-scaling.
Rolling updates & rollbacks
- Kubernetes: can deploy rolling updates and does automatic rollbacks.
- Docker Swarm: can deploy rolling updates, but not automatic rollbacks.
Data volumes
- Kubernetes: can share storage volumes only with other containers in the same Pod.
- Docker Swarm: can share storage volumes with any other container.
Logging & monitoring
- Kubernetes: in-built tools for logging and monitoring.
- Docker Swarm: 3rd-party tools like ELK should be used for logging and monitoring.
Both Kubernetes and Docker Swarm are important tools used to deploy containers inside a cluster, but there are subtle differences between the two.
7. Kubernetes
• Kubernetes, also known as K8s, is an open-source container management tool.
• It provides a container runtime, container orchestration, container-centric infrastructure orchestration, self-healing mechanisms, service discovery, load balancing, and container (de)scaling.
• Initially developed by Google for managing containerized applications in a clustered environment, it was later donated to the CNCF.
• Written in Golang.
• It is a platform designed to completely manage the life cycle of containerized applications and services, using methods that provide predictability, scalability, and high availability.
8. Kubernetes
Certified Kubernetes distributions (https://www.cncf.io/certification/software-conformance/):
• Cloud managed: EKS by AWS, AKS by Microsoft, and GKE by Google
• Self managed: OpenShift by Red Hat, and Docker Enterprise
• Local dev/test: MicroK8s by Canonical, Minikube
• Vanilla Kubernetes: the core Kubernetes project (bare metal), kubeadm
• Special builds: K3s by Rancher, a lightweight K8s distribution for edge devices
Online emulator: https://labs.play-with-k8s.com/
9. Kubernetes Cluster
A Kubernetes cluster is a set of physical or virtual machines, plus the other infrastructure resources needed to run your containerized applications. Each machine in a Kubernetes cluster is called a node.
There are two types of node in each Kubernetes cluster:
• Master node(s): hosts the Kubernetes control plane components and manages the cluster.
• Worker node(s): runs your containerized applications.
(Diagram: one master node and two worker nodes.)
12. Kubernetes Architecture
Kubernetes Master
• The master is responsible for managing the complete cluster.
• You can access the master node via the CLI, GUI, or API.
• The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.
• For fault tolerance, there can be more than one master node in the cluster.
• It is the access point from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers.
• It has four components: etcd, the Scheduler, the Controller Manager, and the API Server.
13. Kubernetes Architecture
Kubernetes Master
ETCD
• etcd is a distributed, reliable key-value store used by Kubernetes to store all the data used to manage the cluster.
• When you have multiple nodes and multiple masters in your cluster, etcd stores all that information across the nodes in the cluster in a distributed manner.
• etcd is responsible for implementing locks within the cluster, ensuring there are no conflicts between the masters.
Scheduler
• The scheduler is responsible for distributing work, or containers, across multiple nodes.
• It looks for newly created containers and assigns them to nodes.
14. Kubernetes Architecture
Kubernetes Master
API server
• The masters communicate with the rest of the cluster through the kube-apiserver, the main access point to the control plane.
• It validates and executes users' REST commands.
• The kube-apiserver also makes sure that the configuration stored in etcd matches the configuration of the containers deployed in the cluster.
Controller manager
• The controllers are the brain behind orchestration.
• They are responsible for noticing and responding when nodes, containers, or endpoints go down; in such cases, the controllers make decisions to bring up new containers.
• The kube-controller-manager runs control loops that manage the state of the cluster, checking whether the required deployments, replicas, and nodes are running in the cluster.
15. Kubernetes Architecture
Kubernetes Master
Kubectl
• kubectl is the command-line utility with which we can interact with a k8s cluster.
• It uses the APIs provided by the API server to interact with the cluster.
• Also known as the kube command-line tool, kubectl or kubecontrol.
• Used to deploy and manage applications on a Kubernetes cluster.
• kubectl run nginx - deploys an application on the cluster.
• kubectl cluster-info - views information about the cluster.
• kubectl get nodes - lists all the nodes that are part of the cluster.
16. Kubernetes Architecture
Kubernetes Worker
Kubelet
• Worker nodes have the kubelet agent that is responsible for interacting with the master to provide health information about the worker node.
• It carries out the actions requested by the master on the worker nodes.
Kube-proxy
• The kube-proxy is responsible for ensuring network traffic is routed properly to internal and external services as required, based on the rules defined by network policies in kube-controller-manager and other custom controllers.
17. Kubernetes
What is K3s?
• K3s is a fully compliant Kubernetes distribution with the following enhancements:
  Packaged as a single binary
  <100MB memory footprint
  Supports ARM and x86 architectures
  Lightweight storage backend based on sqlite3 as the default storage mechanism, replacing the heavier ETCD server
  Docker is replaced in favour of the containerd runtime
  Built-in Ingress controller (Traefik)
19. Kubernetes
K3s Setup using VirtualBox
• Use 3 VMs (1 master and 2 workers). All VMs should have a bridge network adapter enabled.
• Create a host-only networking adapter (DHCP disabled) and connect all VMs to it. This is to have static IPs for all VMs in the cluster. Make sure static IPs are configured in each VM in the same subnet range as the host-only network.
On Master
• bash -c "curl -sfL https://get.k3s.io | sh -"
• TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
• IP = IP of the master node where the API server is running
On Worker nodes
• bash -c "curl -sfL https://get.k3s.io | K3S_URL=https://$IP:6443 K3S_TOKEN=$TOKEN sh -"
https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
21. Kubernetes
Pods
Pods are often ephemeral
• Basic scheduling unit in Kubernetes.
• Kubernetes doesn't run containers directly; instead it wraps one or more containers into a higher-level structure called a pod.
• It is also the smallest deployable unit that can be created, scheduled, and managed on a Kubernetes cluster. Each pod is assigned a unique IP address within the cluster.
• Pods can hold multiple containers, but you should limit yourself where possible. Because pods are scaled up and down as a unit, all containers in a pod must scale together, regardless of their individual needs, which can lead to wasted resources.
Ex: nginx, mysql, wordpress…
22. Kubernetes
Pods
• Any containers in the same pod share the same storage volumes and network resources and communicate using localhost.
• K8s uses YAML to describe the desired state of the containers in a pod. This is also called a Pod Spec. These objects are passed to the kubelet through the API server.
• Pods are used as the unit of replication in Kubernetes. If your application becomes too popular and a single pod instance can't carry the load, Kubernetes can be configured to deploy new replicas of your pod to the cluster as necessary.
Using the example from the figure, you could run curl 10.1.0.1:3000 to communicate with one container and curl 10.1.0.1:5000 to communicate with the other container from other pods. However, if you wanted to talk between containers - for example, calling the top container from the bottom one - you could use http://localhost:3000.
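The sharing described above can be sketched as a minimal multi-container Pod Spec. The pod name, images and ports below are illustrative, not taken from the slides:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                # hypothetical name
spec:
  containers:
  - name: app                  # reachable from other pods at <pod-ip>:3000
    image: nginx
    ports:
    - containerPort: 3000
  - name: sidecar              # same network namespace: app reaches it via localhost:5000
    image: python:3.12
    command: ["python", "-m", "http.server", "5000"]
    ports:
    - containerPort: 5000
```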
23. Kubernetes
Scaling Pods
• All containers within the pod get scaled together.
• You cannot scale individual containers within a pod. The pod is the unit of scale in K8s.
• The recommended way is to have only one container per pod. Multi-container pods are very rare.
• In K8s, an initContainer is sometimes used as a second container inside a pod.
initContainers are exactly like regular containers, except that they always run to completion. Each initContainer must complete successfully before the next one starts. If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds.
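A sketch of a pod with an init container that must finish before the app starts; the names and the wait-for-DNS command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init          # hypothetical name
spec:
  initContainers:
  - name: wait-for-db          # must exit successfully before the app container starts
    image: busybox
    command: ["sh", "-c", "until nslookup mysql; do sleep 2; done"]
  containers:
  - name: app
    image: nginx
```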
24. Kubernetes
Imperative vs Declarative commands
• KubernetesAPI defines a lot of objects/resources, such as namespaces, pods, deployments,
services, secrets, config maps etc.
• There are two basic ways to deploy objects in Kubernetes: Imperatively and Declaratively
Imperatively
• Involves using any of the verb-based commands like kubectl run, kubectl create, kubectl expose, kubectl delete, kubectl scale and kubectl edit
• Suitable for testing and interactive experimentation
Declaratively
• Objects are written in YAML files and deployed using kubectl create or kubectl apply
• Best suited for production environments
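For example, the imperative command kubectl run nginx --image nginx could be expressed declaratively as a YAML file deployed with kubectl apply -f (a minimal sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```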
25. Kubernetes
Manifest / Spec file
• K8s object configuration files - written in YAML or JSON
• They describe the desired state of your application in terms of Kubernetes API objects. A file can include one or more API object descriptions (manifests).
# manifest file template
apiVersion - version of the Kubernetes API used to create the object
kind - kind of object being created
metadata - data that helps uniquely identify the object, including a name and optional namespace
spec - configuration that defines the desired state for the object
apiVersion: v1
kind: Pod
metadata:
  name: …
spec:
  containers:
  - name: …
---
apiVersion: v1
kind: Pod
metadata:
  name: …
spec:
  containers:
  - name: …
(Multiple resource definitions in one file, separated by ---)
26. Kubernetes
Manifest files: Man Pages
List all K8s API supported Objects and Versions
kubectl api-resources
kubectl api-versions
Man pages for objects
kubectl explain <object>.<option>
kubectl explain pod
kubectl explain pod.apiVersion
kubectl explain pod.spec
28. Kubernetes
Once the cluster is setup…
kubectl cluster-info
kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state
29. Kubernetes
Creating Pods
kubectl run <pod-name> --image <image-name>
kubectl run nginx --image nginx --dry-run=client
kubectl run nginx --image nginx --dry-run=client -o yaml
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
--dry-run=client doesn't run the command but shows what changes the command would make to the cluster.
-o yaml shows the command output in YAML - a shortcut to create a declarative YAML file from imperative commands.
30. Kubernetes
Creating Pods: Imperative way
kubectl run test --image nginx --port 80   # also exposes port 80 of the container
kubectl get pods -o wide
kubectl describe pod test   # displays extended information about the pod
34. Kubernetes
Replication Controller
• A single pod may not be sufficient to handle the user traffic. Also, if this only pod goes down because of a failure, K8s will not bring it up again automatically.
• In order to prevent this, we would like to have more than one instance or pod running at the same time inside the cluster.
• Kubernetes supports different controllers (ReplicationController & ReplicaSet) to handle multiple instances of a pod. Ex: 3 replicas of an nginx webserver.
• The Replication Controller ensures high availability by replacing unhealthy/dead pods with new ones, so that the required number of replicas is always running inside the cluster.
• So, does that mean you can't use a replication controller if you plan to have a single pod? No! Even if you have a single pod, the replication controller can help by automatically bringing up a new pod when the existing one fails.
• Another reason we need a replication controller is to create multiple pods to share the load across them.
• The ReplicationController is deprecated and replaced by the ReplicaSet.
35. Kubernetes
Behind the scenes…
$ kubectl run nginx --image=nginx --replicas=3
When a command is given through kubectl…
(Diagram: K8s Master running ETCD, API Server, Scheduler and Controller; K8s slave 01 and K8s slave 02 each running a Kubelet)
36. Kubernetes
Behind the scenes…
The API Server updates the deployment details in ETCD.
37. Kubernetes
Behind the scenes…
The Controller manager, through the API Server, identifies its workload and creates a ReplicaSet.
38. Kubernetes
Behind the scenes…
The ReplicaSet creates the required number of pods and updates ETCD. Note the status of the pods: they are still in PENDING state.
39. Kubernetes
Behind the scenes…
The Scheduler identifies its workload through the API Server and decides the nodes onto which the pods are to be scheduled. At this stage, pods are assigned to a node.
40. Kubernetes
Behind the scenes…
The Kubelet identifies its workload through the API Server and understands that it needs to deploy some pods on its node.
41. Kubernetes
Behind the scenes…
The Kubelet instructs the docker daemon to create the pods. At the same time it updates the status as 'Pods CREATING' in ETCD through the API Server.
42. Kubernetes
Behind the scenes…
Once the pods are created and running, the Kubelet updates the pod status as RUNNING in ETCD through the API Server.
43. Kubernetes
Labels and Selectors
Labels
• Labels are key/value pairs that are attached to objects, such as pods.
• Labels allow you to logically group certain objects by giving various names to them.
• You can label pods, services, deployments and even nodes.
kubectl get pods -l environment=production
kubectl get pods -l environment=production,tier=frontend
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
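A pod carrying the labels used in the kubectl examples above might be defined as follows (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod           # hypothetical name
  labels:
    environment: production    # matched by -l environment=production
    tier: frontend             # matched by -l tier=frontend
spec:
  containers:
  - name: app
    image: nginx
```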
45. Kubernetes
Labels and Selectors
Selectors
• Selectors allow you to filter objects based on labels.
• The API currently supports two types of selectors: equality-based and set-based.
• A label selector can be made of multiple requirements which are comma-separated.
Equality-based Selectors
• Equality- or inequality-based requirements allow filtering by label keys and values.
• Three kinds of operators are admitted: =, ==, !=
Used by Replication Controllers and Services
46. Kubernetes
Labels and Selectors
Selectors
Set-based Selectors
• Set-based label requirements allow filtering keys according to a set of values.
• Three kinds of operators are supported: in, notin and exists (only the key identifier).
kubectl get pods -l 'environment in (production, qa)'
Used by ReplicaSets, Deployments, DaemonSets
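In a manifest, a set-based selector appears as a matchExpressions block. This is a fragment of a ReplicaSet/Deployment spec (the keys and values mirror the kubectl example above):

```yaml
selector:
  matchExpressions:
  - key: environment
    operator: In              # equivalent of 'environment in (production, qa)'
    values: [production, qa]
  - key: tier
    operator: Exists          # only checks that the key is present
```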
47. Kubernetes
ReplicaSet
• ReplicaSets are a higher-level API that gives the ability to easily run multiple instances of a given pod.
• A ReplicaSet ensures that the exact number of pods (replicas) is always running in the cluster by replacing any failed pods with new ones.
• The replica count is controlled by the replicas field in the resource definition file.
• A ReplicaSet uses set-based selectors whereas a ReplicationController uses equality-based selectors.
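A minimal ReplicaSet manifest along these lines (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs               # hypothetical name
spec:
  replicas: 3                  # the replica count controlled by the replicas field
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx             # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx
```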
51. Kubernetes
ReplicaSet
• kubectl edit replicaset <replicaset-name> - edit a replicaset (e.g. image, replicas)
• kubectl delete replicaset <replicaset-name> - delete a replicaset
• kubectl delete -f replica-set.yml
• kubectl get all - get pods, replicasets, deployments, services all in one shot
• kubectl replace -f replicaset-definition.yml - replaces the pods with the updated definition file
• kubectl scale --replicas=6 -f replicaset-definition.yml - scale using the definition file
• kubectl scale --replicas=6 replicaset <replicaset-name> - scale using the name of the replicaset
53. Kubernetes
Deployment
• A Deployment provides declarative updates for Pods and ReplicaSets.
• You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.
• It seems similar to ReplicaSets but with advanced functions.
• A Deployment is the recommended way to deploy a pod or ReplicaSet.
• By default, Kubernetes performs deployments with a rolling update strategy.
• Below are some of the key features of deployments:
  Easily deploy a ReplicaSet
  Rolling update of pods
  Rollback to previous deployment versions
  Scale deployments
  Pause and resume deployments
54. Kubernetes
Deployment Strategy
• Whenever we create a new deployment, K8s triggers a rollout.
• A rollout is the process of gradually deploying or upgrading your application containers.
• For every rollout/upgrade, a version history is created, which helps in rolling back to a working version in case of an update failure.
• In Kubernetes there are a few different ways to release updates to an application:
• Recreate: terminate the old version and release the new one. The application experiences downtime.
• RollingUpdate: release a new version in a rolling-update fashion, one after the other. It's the default strategy in K8s. No application downtime is required.
• Blue/green: release a new version alongside the old version, then switch traffic.
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
spec:
  replicas: 10
  strategy:
    type: Recreate
55. Kubernetes
Rolling Update Strategy
• By default,deploymentensures that only 25% of your pods are unavailable during an update and does not
updatemore that 25%of thepods at a given time
• It does not kill old pods until/unless enough new pods comeup
• It does not create new pods until a sufficientnumberof old pods are killed
• Thereare two settings you can tweak to controltheprocess: maxUnavailableand maxSurge.Both havethe
defaultvalues set - 25%
• ThemaxUnavailablesetting specifies the maximum numberof pods that can beunavailableduring therollout
process.You can set it to an actualnumber(integer)or a percentageof desired pods
Let’ssay maxUnavailable is setto40%.When theupdatestarts,theoldReplicaSetis scaleddownto60%.
As soon as new podsarestartedand ready
,theoldReplicaSetis scaleddownagainand thenewReplicaSetis scaledup.This
happensinsuch a waythatthetotalnumber ofavailablepods(oldand new
,since wearescaling upand down) is alwaysatleast
60%.
• ThemaxSurge setting specifies the maximum numberof pods that can becreatedoverthe desired numberof
pods
If we usethesame percentage as before(40%), thenew ReplicaSetis scaledup right awaywhen therolloutstarts.Thenew
ReplicaSetwillbescaledupinsuch a waythatitdoesnot exceed140% ofdesiredpods.As oldpods getkilled,thenew ReplicaSet
scales upagain,making sure itnevergoes over the140% ofdesiredpods
56. Kubernetes
Deployments
• kubectl create deployment nginx --image nginx --dry-run=client -o yaml
• kubectl create -f deployment.yml --record (--record is optional; it just records the events in the deployment)
• kubectl get deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
61. Kubernetes
Namespaces
Namespaces are Kubernetes objects which partition a single Kubernetes cluster into multiple virtual clusters.
• Kubernetes clusters can manage large numbers of unrelated workloads concurrently, and organizations often choose to deploy projects created by separate teams to shared clusters.
• With multiple deployments in a single cluster, there is a high chance of deleting deployments belonging to different projects.
• So namespaces allow you to group objects together so you can filter and control them as a unit/group.
• Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
• Each Kubernetes namespace provides the scope for the Kubernetes names it contains, which means that using the combination of an object name and a namespace, each object gets a unique identity across the cluster.
62. Kubernetes
Namespaces
By default, a Kubernetes cluster is created with the following three namespaces:
• default: the default namespace for users. By default, all resources created in a Kubernetes cluster are created in the default namespace.
• kube-system: the namespace for objects created by the Kubernetes system/control plane. Any changes to objects in this namespace could cause irreparable damage to the cluster itself.
• kube-public: the namespace for resources that are publicly readable by all users. This namespace is generally reserved for cluster usage like ConfigMaps and Secrets.
63. Kubernetes
Namespaces
kubectl get namespaces
kubectl get all -n kube-system (lists available objects under a specific namespace)
kubectl get all --all-namespaces (lists available objects under all available namespaces)
64. Kubernetes
Namespaces
Create a namespace
kubectl create ns dev          # Namespace for Developer team
kubectl create ns qa           # Namespace for QA team
kubectl create ns production   # Namespace for Production team
Deploy objects in a namespace
kubectl run nginx --image=nginx -n dev
kubectl get pod/nginx -n dev
kubectl apply --namespace=qa -f pod.yaml
Delete a namespace
kubectl delete ns production
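The same can be done declaratively: a Namespace is itself an API object, and other objects can name their namespace in metadata. A sketch (the pod is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev               # deploys the pod into the dev namespace
spec:
  containers:
  - name: nginx
    image: nginx
```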
66. Kubernetes
Services
• Services logically connect pods across the cluster to enable networking between them.
• The lifetime of an individual pod cannot be relied upon; everything from their IP addresses to their very existence is prone to change.
• Kubernetes doesn't treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it's Kubernetes' job to replace it so that the application doesn't experience any downtime.
• Services make sure that even after a pod (back-end) dies because of a failure, the newly created pods will be reached by its dependent pods (front-end) via services. In this case, front-end applications always find the back-end applications via a simple service (using the service name or IP address) irrespective of their location in the cluster.
• Services point to pods directly using labels. Services do not point to deployments or ReplicaSets. So, all pods with the same label get attached to the same service.
• 3 types: ClusterIP, NodePort and LoadBalancer
67. Kubernetes
Services
Pods' lifecycles are erratic; they come and go by Kubernetes' will.
Not healthy? Killed.
Not in the right place? Cloned, and killed.
So how can you send a request to your application if you can't know for sure where it lives?
The answer lies in services. Services are tied to the pods using pod labels and provide a stable endpoint for the users to reach the application.
When requesting your application, you don't care about its location or about which pod answers the request.
68. Kubernetes
Services
ClusterIP
• ClusterIP is the default Kubernetes service type.
• It gives you a service inside your cluster that other apps inside your cluster can access.
• It restricts access to the application to within the cluster itself, with no external access.
• Useful when a front-end app wants to communicate with a back-end.
• Each ClusterIP service gets a unique IP address inside the cluster.
• Similar to --link in Docker.
Services point to pods directly using labels!!!
https://kubernetes.io/docs/concepts/services-networking/service/
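A minimal ClusterIP service manifest along these lines (the name, label and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend                # hypothetical name; ClusterIP is the default type
spec:
  selector:
    app: backend               # attaches to all pods carrying this label
  ports:
  - port: 80                   # service port inside the cluster
    targetPort: 8080           # container port the traffic is forwarded to
```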
69. Kubernetes
When services are not available
• Imagine 2 pods on 2 separate nodes, node-1 & node-2, each with its own local IP address.
• pod-nginx can ping and connect to pod-python using its internal IP 1.1.1.3.
70. Kubernetes
When services are not available
• Now let's imagine the pod-python dies and a new one is created.
• Now pod-nginx cannot reach pod-python on 1.1.1.3 because its IP has changed to 1.1.1.5.
How do we remove this dependency?
71. Kubernetes
Enter services…
• Services logically connect pods together.
• Unlike pods, a service is not scheduled on a specific node. It spans the cluster.
• pod-nginx can always safely connect to pod-python using the service IP 1.1.10.1 or the DNS name of the service (service-python).
• Even if the python pod gets deleted and recreated, the nginx pod can still reach the python pod using the service, but not with the IP of the python pod directly.
76. Kubernetes
Services
NodePort
• NodePort opens a specific port on all the nodes in the cluster and forwards any traffic that is received on this port to internal services.
• Useful when front-end pods are to be exposed outside the cluster for users to access.
• NodePort is built on top of the ClusterIP service by exposing the ClusterIP service outside of the cluster.
• The NodePort must be within the port range 30000-32767.
• If you don't specify this port, a random port will be assigned. It is recommended to let k8s auto-assign this port.
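A complete NodePort service manifest along these lines (the name and label are illustrative; the ports mirror the spec fragment shown on a later slide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend               # hypothetical name
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 80                   # ClusterIP port inside the cluster
    targetPort: 5000           # container port
    nodePort: 30001            # must be in 30000-32767; omit to let k8s pick one
```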
86. Kubernetes
NodePort Limitations
• With a NodePort service, users can access the application using the URL http://<node-ip>:<node-port>
• In a production environment, we do not want users to have to type in the IP address every time to access the application.
• So we configure a DNS server to point to the IP of the nodes. Users can now access the application using the URL http://xyz.com:30001
• Now, we don't want the user to have to remember the port number either.
• However, a NodePort service can only allocate high-numbered ports, greater than 30000.
• So we deploy a proxy server between the DNS server and the cluster that proxies requests on port 80 to port 30001 on the nodes.
• We then point the DNS to the proxy server's IP, and users can now access the application by simply visiting http://xyz.com
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 5000
    nodePort: 30001
    protocol: TCP
NodePort 30001 is being used only for demo purposes. You can configure this port number in the service manifest file or let K8s auto-assign it for you.
87. Kubernetes
Services
Load Balancer
• A LoadBalancer service is the standard way to expose a Kubernetes service to the internet.
• On GKE (Google Kubernetes Engine), this will spin up a Network Load Balancer that gives you a single IP address that forwards all external traffic to your service.
• All traffic on the port you specify will be forwarded to the service.
• There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it, like HTTP, TCP, UDP or WebSockets.
• A few limitations with LoadBalancer:
  Every service exposed gets its own IP address.
  It gets very expensive to have an external IP for each service (application).
https://rancher.com/blog/2018/2018-06-08-load-balancing-user-apps-with-rancher/
88. Kubernetes
Services
Load Balancer
• On Google Cloud, AWS, or Azure, a service type of LoadBalancer in the service manifest file will immediately provision an Elastic/Cloud Load Balancer that assigns an external IP (public IP) to your application.
• But for on-prem or bare-metal k8s clusters, this functionality is not available.
• Using service type LoadBalancer on bare metal will not assign any external IP, and the service resource will remain in Pending state forever.
https://collabnix.com/3-node-kubernetes-cluster-on-bare-metal-system-in-5-minutes/
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
90. Kubernetes
(Diagram: on Google Kubernetes Engine (GKE), a DNS server points www.flask-app.com to the external IP 10.12.16.22 assigned by the GCP Load Balancer)
A few limitations with LoadBalancer:
• Every service exposed gets its own public IP address.
• It gets very expensive to have a public IP for each service.
91. Kubernetes
GCP LoadBalancer Cons
(Diagram: a Kubernetes cluster with connected-city, connected-factory and connected-tools services and pods, each exposed through its own GCP Load Balancer with its own external IP - 10.12.16.22, 10.12.16.23, 10.12.16.24 - behind www.connected-city.com, www.connected-factory.com and www.connected-tools.com)
92. Kubernetes
LoadBalancer
• The application can be reached using the external IP assigned by the LoadBalancer.
• The LoadBalancer forwards the traffic to the available nodes in the cluster on the nodePort assigned to the service.
94. Kubernetes
Cloud LoadBalancer: Cons
• Every service exposed gets its own IP address.
• It gets very expensive to have an external IP for each service (application).
• We see two LoadBalancers, each having its own IP. If we send a request to LoadBalancer 22.33.44.55 it gets redirected to our internal service-nginx. If we send the request to 77.66.55.44 it gets redirected to our internal service-python.
• This works great! But IP addresses are rare, and LoadBalancer pricing depends on the cloud provider. Now imagine we don't have just two but many more internal services for which we would like to create LoadBalancers - costs would scale up.
• Might there be another solution which allows us to use only one LoadBalancer (with one IP) but still reach both of our internal services directly?
95. Kubernetes
LoadBalancer vs Ingress
• Public IPs aren't cheap.
• An ALB (Application Load Balancer) can only handle a limited number of IPs.
• SSL termination is handled at the load balancer.
• Ingress acts as an internal LoadBalancer.
• It routes traffic based on the URL path.
• All applications will need only one public IP.
96. Kubernetes
https://metallb.universe.tf/
Services
MetalLB Load Balancer
• MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters.
• It allows you to create Kubernetes services of type "LoadBalancer" in bare-metal/on-prem clusters that don't run on cloud providers like AWS, GCP, Azure and DigitalOcean.
97. Kubernetes
Ingress Resources (rules)
• With cloud LoadBalancers, we need to pay for each service that is exposed using LoadBalancer as the service type. As services grow in number, the complexity of managing SSL, scaling, auth etc. also increases.
• Ingress allows us to manage all of the above within the Kubernetes cluster with a definition file that lives along with the rest of your application deployment files.
• An Ingress controller can perform load balancing, auth, SSL and URL/path-based routing configurations by living inside the cluster as a Deployment or a DaemonSet.
• Ingress helps users access the application using a single externally accessible URL that you can configure to route to different services within your cluster based on the URL path, at the same time terminating SSL/TLS.
Ex: www.smartfactory.com → service1, www.smartcity.com → service2
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
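An Ingress resource along these lines routes the two hostnames from the slide to service1 and service2. The Ingress name and the service port numbers are assumptions, not from the slides:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress           # hypothetical name
spec:
  rules:
  - host: www.smartfactory.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80       # assumed service port
  - host: www.smartcity.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80       # assumed service port
```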
98. Kubernetes
Why SSL Termination at the LoadBalancer?
• SSL termination/offloading represents the end or termination point of an SSL connection.
• SSL termination at the LoadBalancer decrypts and verifies data on the load balancer instead of the application server. Unencrypted traffic is sent between the load balancer and the backend servers.
• It is desirable because decryption is resource- and CPU-intensive.
• Putting the decryption burden on the load balancer enables the server to spend processing power on application tasks, which helps improve performance.
• It also simplifies the management of SSL certificates.
99. Kubernetes
Ingress Controller
• Ingress resources cannot do anything on their own. We need an Ingress controller in order for the Ingress resources to work.
• The Ingress controller implements the rules defined by Ingress resources.
• Ingress controllers don't come with the standard Kubernetes binary; they have to be deployed separately.
• Kubernetes currently supports and maintains the GCE and nginx ingress controllers.
• Other popular controllers include Traefik, HAProxy Ingress, Istio, Ambassador etc.
• Ingress controllers are to be exposed outside the cluster using NodePort or with a cloud-native LoadBalancer.
• Ingress is most useful if you want to expose multiple services under the same IP address.
• An Ingress controller can perform load balancing, auth, SSL and URL/path-based routing configurations by living inside the cluster as a Deployment or a DaemonSet.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
101. Kubernetes
Nginx Ingress Controller
• ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
• Officially maintained by the Kubernetes community.
• Routes requests to services based on the request host or path, centralizing a number of services into a single entrypoint.
Ex: www.mysite.com or www.mysite.com/stats
Deploy the Nginx Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#bare-metal
Ex: www.smartfactory.com → service1, www.smartcity.com → service2
112. Kubernetes
Volumes
• By default, container data is stored inside its own file system.
• Containers are ephemeral in nature. When they are destroyed, the data inside them gets deleted.
• Also, when running multiple containers in a pod, it is often necessary to share files between those containers.
• In order to persist data beyond the lifecycle of a pod, Kubernetes provides volumes.
• A volume can be thought of as a directory which is accessible to the containers in a pod.
• The medium backing a volume and its contents are determined by the volume type.
Types of Kubernetes Volumes
• There are different types of volumes you can use in a Kubernetes pod:
  Node-local storage (emptyDir and hostPath)
  Cloud volumes (e.g., awsElasticBlockStore, gcePersistentDisk, and azureDiskVolume)
  File-sharing volumes, such as Network File System (NFS)
  Distributed file systems (e.g., CephFS and GlusterFS)
  Special volume types such as PersistentVolumeClaim, secret, configMap and gitRepo
113. Kubernetes
emptyDir
• An emptyDir volume is first created when a Pod is assigned to a node.
• It is initially empty and has the same lifetime as the pod.
• emptyDir volumes are stored on whatever medium is backing the node - that might be disk, SSD, network storage or RAM.
• Containers in the Pod can all read and write the same files in the emptyDir volume.
• This volume can be mounted at the same or different paths in each container.
• When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
• Mainly used to store cache or temporary data to be processed.
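A sketch of a two-container pod sharing an emptyDir volume, mounted at different paths in each container (names, images and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod              # hypothetical name
spec:
  containers:
  - name: container1
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache        # both containers read/write the same files
  - name: container2
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cache
      mountPath: /tmp/cache    # mounted at a different path in this container
  volumes:
  - name: cache
    emptyDir: {}               # deleted forever when the pod leaves the node
```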
115. Kubernetes
hostPath
• This type of volume mounts a file or directory from the host node's filesystem into your pod
• A hostPath directory refers to a directory created on the node where the pod is running
• Use it with caution: when pods are scheduled on multiple nodes, each node gets its own hostPath storage volume. These may not be in sync with each other, and different pods might be using different data
• Let's say the pod with a hostPath configuration is deployed on worker node 2. Then "host" refers to worker node 2, so any hostPath location mentioned in the manifest file refers to worker node 2 only
• When a node becomes unstable, the pods might fail to access the hostPath directory and eventually get terminated
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
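A minimal hostPath sketch (pod name and paths are illustrative; the node-side path must make sense on whichever node the pod lands on):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: host-data
      mountPath: /data            # path inside the container
  volumes:
  - name: host-data
    hostPath:
      path: /var/local/data       # path on the node's filesystem
      type: DirectoryOrCreate     # create the directory if it does not exist
```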
117. Kubernetes
Persistent Volume and Persistent Volume Claim
• Managing storage is a distinct problem inside a cluster. You cannot rely on emptyDir or hostPath for persistent data
• Also, providing a cloud volume like EBS or AzureDisk often tends to be complex because of the configuration options to be followed for each service provider
• To overcome this, the PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, K8s offers two API resources: PersistentVolume and PersistentVolumeClaim
118. Kubernetes
Persistent Volume and Persistent Volume Claim
Persistent Volume (PV)
• A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes (pre-defined provisioners and parameters to create a Persistent Volume)
• Admin creates a pool of PVs for the users to choose from
• It is a cluster-wide resource used to store/persist data beyond the lifetime of a pod
• A PV is not backed by locally-attached storage on a worker node but by a networked storage system such as a cloud provider's storage, NFS, or a distributed filesystem like Ceph or GlusterFS
• Persistent Volumes provide a file system that can be mounted to the cluster, without being associated with any particular node
119. Kubernetes
https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
Persistent Volume and Persistent Volume Claim
Persistent Volume Claim (PVC)
• In order to use a PV, a user needs to first claim it using a PVC
• A PVC requests a PV with the desired specification (size, speed, etc.) from Kubernetes and then binds it to a resource (pod, deployment…) as a volume mount
• The user doesn't need to know the underlying provisioning. The claims must be created in the same namespace where the pod is created
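A minimal PV/PVC sketch (names and sizes are illustrative, and the hostPath backing is for demonstration only; a real PV would typically be backed by networked storage such as NFS or a cloud disk):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                       # demo only; use NFS/cloud storage in production
    path: /mnt/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # the claim requests a PV of at least this size
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: demo-pvc         # bind the claim as a pod volume
```

The pod references only the claim; which PV actually satisfies it is decided by Kubernetes.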
120. Kubernetes
Persistent Volume and Persistent Volume Claim
122. Kubernetes
Using PVCs in GKE
https://medium.com/google-cloud/introduction-to-docker-and-kubernetes-on-gcp-with-hands-on-configuration-part-3-kubernetes-with-eb41f5fc18ae
123. Kubernetes
Logs
kubectl logs my-pod                                # dump pod logs (stdout)
kubectl logs -l name=myLabel                       # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container                # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container       # dump pod container logs, with label name=myLabel (stdout, multi-container case)
kubectl logs -f my-pod                             # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container             # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers   # stream all pods' logs with label name=myLabel (stdout)
kubectl logs my-pod -f --tail=1                    # stream last line of pod logs
kubectl logs deploy/<deployment-name>              # dump deployment logs
124. Kubernetes
Interaction with pods
kubectl run -i --tty busybox --image=busybox -- sh                    # run pod as interactive shell
kubectl run nginx -it --image=nginx -- bash                           # run pod nginx
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml   # write pod spec into a file called pod.yaml without creating it
kubectl attach my-pod -i                                              # attach to running container
kubectl exec my-pod -- ls /                                           # run command in existing pod (1 container case)
kubectl exec my-pod -c my-container -- ls /                           # run command in existing pod (multi-container case)
126. Kubernetes
Scheduling
• Kubernetes users normally don't need to choose a node to which their Pods should be scheduled
• Instead, the selection of the appropriate node(s) is automatically handled by the Kubernetes scheduler
• Automatic node selection prevents users from selecting unhealthy nodes or nodes with a shortage of resources
• However, sometimes manual scheduling is needed: to ensure that certain pods are only scheduled on nodes with specialized hardware like SSD storage, to co-locate services that communicate frequently (e.g., within an availability zone), or to dedicate a set of nodes to a particular set of users
• Kubernetes offers several ways to manually schedule pods. In all cases, the recommended approach is to use label selectors to make the selection
• Manual scheduling options include:
1. nodeName
2. nodeSelector
3. Node affinity
4. Taints and Tolerations
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
127. Kubernetes
Scheduling
nodeName
• nodeName is a field of PodSpec
• nodeName is the simplest form of node selection constraint, but due to its limitations it is typically not used
• When the scheduler finds no nodeName property, it automatically adds it and assigns the pod to an available node
• Manually assign a pod to a node by setting the nodeName property to the desired node name
• We can also schedule pods on the Master by this method
• Some of the limitations of using nodeName to select nodes are:
If the named node does not exist, the pod will not be run, and in some cases may be automatically deleted
If the named node does not have the resources to accommodate the pod, the pod will fail and its reason will indicate why, for example OutOfmemory or OutOfcpu
Node names in cloud environments are not always predictable or stable
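A minimal nodeName sketch (the node name k8s-slave01 is illustrative, borrowed from the taint examples later in this deck; substitute a name from `kubectl get nodes`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodename-demo
spec:
  nodeName: k8s-slave01   # bypasses the scheduler; pod runs only on this node
  containers:
  - name: nginx
    image: nginx
```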
129. Kubernetes
Scheduling
nodeSelector
• nodeSelector is a field of PodSpec
• It is the simplest recommended form of node selection constraint
• It uses labels (key-value pairs) to select matching nodes onto which pods can be scheduled
• The disadvantage of nodeSelector is that it uses hard requirements, i.e., if matching nodes are not available, pods remain in the Pending state!
Check default node labels
kubectl describe node <node-name>
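A minimal nodeSelector sketch (the disktype=ssd label is illustrative; it would first be applied with `kubectl label nodes <node-name> disktype=ssd`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    disktype: ssd       # hard requirement: pod stays Pending until a node carries this label
  containers:
  - name: nginx
    image: nginx
```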
132. Kubernetes
Scheduling
Node affinity
• Node affinity is specified as the field nodeAffinity in PodSpec
• Node affinity is conceptually similar to nodeSelector - it allows you to manually schedule pods based on labels on the node. But it has a few key enhancements:
• The nodeAffinity implementation is more expressive. The language offers more matching rules besides the exact matches combined with a logical AND operation in nodeSelector
• Rules can be soft preferences rather than hard requirements, so if the scheduler can't find a node with matching labels, the pod can still be scheduled on other nodes
There are currently two types of node affinity rules:
1. requiredDuringSchedulingIgnoredDuringExecution: Hard requirement like nodeSelector. No matching node label, no pod scheduling!
2. preferredDuringSchedulingIgnoredDuringExecution: Soft requirement. No matching node label, pod gets scheduled on other nodes!
The IgnoredDuringExecution part indicates that if labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod will still continue to run on the node.
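A minimal sketch of the soft (preferred) rule type (the disktype label and the weight are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                 # 1-100; nodes matching higher-weight rules are favored
        preference:
          matchExpressions:
          - key: disktype
            operator: In          # richer operators than nodeSelector: In, NotIn, Exists, ...
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```

Changing preferredDuringSchedulingIgnoredDuringExecution to requiredDuringSchedulingIgnoredDuringExecution (with nodeSelectorTerms instead of weight/preference) would turn this into a hard requirement.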
134. Kubernetes
Scheduling
Taints and Tolerations
• Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement)
• Taints are the opposite - they allow a node to repel a set of pods
• Taints are applied to nodes (locks)
• Tolerations are applied to pods (keys)
• In short, a pod should tolerate a node's taint in order to run on it. It's like the pod holding the correct key to unlock the node and enter it
• Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes
135. Kubernetes
Scheduling
Taints and Tolerations
• By default, the Master node is tainted. So you cannot deploy any pods on the Master
• To check the taints applied on any node, use kubectl describe node <node-name>
Apply taint to nodes
kubectl taint nodes <node-name> key=value:<taint-effect>
• A taint's key and value can be any arbitrary string
• The taint effect should be one of the supported taint effects:
1. NoSchedule: no pod will be able to schedule onto the node unless it has a matching toleration
2. PreferNoSchedule: soft version of NoSchedule. The system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required
3. NoExecute: the node controller will immediately evict all Pods without a matching toleration from the node, and new pods will not be scheduled onto the node
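For NoExecute taints, a toleration can additionally carry a tolerationSeconds field, which keeps an already-running pod bound for a grace period before eviction. A small fragment (key/value/timeout values are illustrative):

```yaml
tolerations:
- key: "env"
  operator: "Equal"
  value: "stag"
  effect: "NoExecute"
  tolerationSeconds: 300   # pod is evicted 5 minutes after the taint is added
```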
136. Kubernetes
Scheduling
Taints and Tolerations
Apply taint to nodes
kubectl taint nodes k8s-slave01 env=stag:NoSchedule
• In the above case, node k8s-slave01 is tainted with key-value env=stag and taint effect NoSchedule. Only pods that tolerate this taint will be scheduled onto this node
Check taints on nodes
kubectl describe node k8s-slave01 | grep -i taint
137. Kubernetes
Scheduling
Taints and Tolerations
Apply tolerations to pods
kubectl apply -f taint_toleration.yml
kubectl get pods -o wide
taint_toleration.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "stag"
        effect: "NoSchedule"
• Here pods are scheduled onto both the slave nodes.
• Only slave01 is tainted here, and matching tolerations are added to the pods. So pods are scheduled onto slave01 as well.
• If we remove the tolerations from the pods and deploy them, they will get scheduled onto slave02 only, as slave01 is tainted and a matching toleration is no longer available on the pods!