This document outlines an agenda for a workshop on Kubernetes networking with eBPF and Cilium. The workshop covers various topics including principles of eBPF and Cilium, Kubernetes networking, cluster mesh, security, observability, service mesh, and Tetragon. It provides overviews and examples for each topic. The workshop is presented by Raphaël Pinson who works on Cilium at Isovalent.
This document summarizes a presentation about Cilium and eBPF. Cilium provides cloud native networking and security using eBPF. eBPF allows programs to run securely in the Linux kernel for networking, security, and observability. Cilium offers networking features like Kubernetes services, cluster mesh for multi-cluster connectivity, and platform integration. It also provides security using identity-based policies and API authorization. Observability features include flow visibility and service maps. Cilium can also be used as a service mesh without sidecar proxies, or combined with Tetragon for runtime prevention capabilities.
eBPF (extended Berkeley Packet Filter) is a powerful and versatile technology that can be used to extend observability in Linux systems. In this talk, we will explore how eBPF can be used to bridge the gap between dev and ops by providing a deeper understanding of the kernel and OS internals as well as the applications running on top. We will discuss how eBPF can be used to extend observability downwards by enabling access to low-level system information and how it can be used to extend observability upwards by providing application-level tracing capabilities.
This document provides an introduction to eBPF and XDP. It discusses the history of BPF and how it evolved into eBPF. Key aspects of eBPF covered include the instruction set, JIT compilation, verifier, helper functions, and maps. XDP is introduced as a way to program the data plane using eBPF programs attached early in the receive path. Example use cases and performance benchmarks for XDP are also mentioned.
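To make the register-machine model concrete, here is a toy interpreter sketched in Python. It is not the real eBPF instruction set: the opcodes, the `verify()` check, and the sample filter are all invented for illustration, loosely echoing eBPF's registers, its `r0` return convention, and its pre-execution verification pass.

```python
# Toy register machine loosely modeled on eBPF's design: a few registers,
# a linear instruction list, and a check pass that runs before execution,
# standing in (very loosely) for the in-kernel verifier.

def verify(program, max_insns=4096):
    """Reject programs that are too long or that jump out of bounds."""
    if len(program) > max_insns:
        raise ValueError("program too large")
    for pc, (op, *args) in enumerate(program):
        if op == "jeq" and not (0 <= pc + 1 + args[2] < len(program)):
            raise ValueError(f"jump out of bounds at insn {pc}")

def run(program, r1=0):
    """Execute a verified program; r0 is the return value, as in eBPF."""
    regs = {"r0": 0, "r1": r1}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "mov":            # mov dst, imm
            regs[args[0]] = args[1]
        elif op == "add":          # add dst, imm
            regs[args[0]] += args[1]
        elif op == "jeq":          # jeq reg, imm, offset (relative jump)
            if regs[args[0]] == args[1]:
                pc += args[2]
        elif op == "exit":
            break
        pc += 1
    return regs["r0"]

# The shape of a tiny filter: return 1 if the input equals 42, else 0.
prog = [
    ("jeq", "r1", 42, 2),   # if r1 == 42, skip the next two insns
    ("mov", "r0", 0),
    ("exit",),
    ("mov", "r0", 1),
    ("exit",),
]
verify(prog)
```

Real eBPF programs go through the same stages conceptually: a bytecode program is verified for safety, then executed (or JIT-compiled) in the kernel, with the return value driving a decision such as XDP_PASS or XDP_DROP.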
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units called pods for easy management and discovery. Its main components include a master node that manages the cluster and worker nodes that run the applications. It uses labels to identify pods and services, and selectors to group related pods. Common concepts include deployments for updating apps, services for network access, persistent volumes for storage, and roles/bindings for access control. The deployment process involves the API server, controllers, scheduler, and kubelet to reconcile the desired state and place pods on nodes from images while providing discovery and load balancing.
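The label/selector grouping described above can be sketched in a few lines of Python; the pod names and labels below are invented for this example.

```python
# Sketch of Kubernetes-style equality-based label selection: a selector
# (as used by Services, Deployments, etc.) matches a Pod when every
# selector key/value pair is present in the Pod's labels.

def matches(selector: dict, labels: dict) -> bool:
    """All selector pairs must be present and equal in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Invented pod inventory for illustration.
pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

def select(selector: dict) -> list:
    """Return the names of the pods grouped by this selector."""
    return [p["name"] for p in pods if matches(selector, p["labels"])]
```

This is the mechanism behind "services for network access": a Service's selector continuously resolves to whichever pods currently carry the matching labels.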
Surge 2014: From Clouds to Roots: root cause performance analysis at Netflix. Brendan Gregg. At Netflix, high scale and fast deployment rule. The possibilities for failure are endless, and the environment excels at handling this, regularly tested and exercised by the simian army. But, when this environment automatically works around systemic issues that aren’t root-caused, they can grow over time. This talk describes the challenge of not just handling failures of scale on the Netflix cloud, but also new approaches and tools for quickly diagnosing their root cause in an ever changing environment.
The document discusses how Cilium can accelerate Envoy and Istio by using eBPF/XDP to provide transparent acceleration of network traffic between Kubernetes pods and sidecars without any changes required to applications or Envoy. Cilium also provides features like service mesh datapath, network security policies, load balancing, and visibility/tracing capabilities. BPF/XDP in Cilium allows for transparent TCP/IP acceleration during the data phase of communications between pods and sidecars.
The document discusses Cilium and Istio with Gloo Mesh. It provides an overview of Gloo Mesh, an enterprise service mesh for multi-cluster, cross-cluster and hybrid environments based on upstream Istio. Gloo Mesh focuses on ease of use, powerful best practices built in, security, and extensibility. It allows for consistent API for multi-cluster north-south and east-west policy, team tenancy with service mesh as a service, and driving everything through GitOps.
The document provides an overview of Kubernetes concepts and architecture. It begins with an introduction to containers and microservices architecture. It then discusses what Kubernetes is and why organizations should use it. The remainder of the document outlines Kubernetes components, nodes, development processes, networking, and security measures. It provides descriptions and diagrams explaining key aspects of Kubernetes such as architecture, components like Kubelet and Kubectl, node types, and networking models.
Container runtimes cause Linux to return to its original purpose: to serve applications interacting directly with the kernel. At the same time, the Linux kernel is traditionally difficult to change and its development process is full of myths. A new efficient in-kernel programming language called eBPF is changing this and allows everyone to extend existing kernel components or glue them together in new forms without requiring changes to the kernel itself.
Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. At the foundation of Cilium is a new Linux kernel technology called BPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because BPF runs inside the Linux kernel itself, Cilium security policies can be applied and updated without any changes to the application code or container configuration.
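The identity-based approach behind Cilium's policies can be illustrated with a small Python sketch: label sets are assigned numeric security identities once, and policy is then checked on identity pairs rather than on IP addresses. The labels and the allow rule below are invented, and this is a conceptual sketch, not Cilium's actual implementation.

```python
# Conceptual sketch of identity-based policy: workloads with the same
# labels share a numeric identity, and the datapath consults a set of
# allowed (source identity, destination identity) pairs.

identities = {}  # frozenset of labels -> numeric identity

def identity_for(labels: set) -> int:
    """Allocate (or reuse) a numeric identity for a set of labels."""
    key = frozenset(labels)
    if key not in identities:
        identities[key] = len(identities) + 1
    return identities[key]

# Invented workloads and a single invented allow rule.
frontend = identity_for({"app=frontend"})
backend = identity_for({"app=backend"})
batch = identity_for({"app=batch"})

allowed_pairs = {(frontend, backend)}  # frontend may talk to backend

def allowed(src: int, dst: int) -> bool:
    """Policy check: is this identity pair permitted?"""
    return (src, dst) in allowed_pairs
```

The point of the design is visible even in the sketch: scaling a deployment adds pod IPs but no new identities, so the policy table does not grow with replica count.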
This document provides an introduction to eBPF (Extended Berkeley Packet Filter), which allows running sandboxed programs in the Linux kernel without needing to compile a kernel module. It describes how eBPF avoids unnecessary copying of packets between kernel and user space for improved performance. Examples are given of using eBPF for networking tasks like SDN configuration, DDoS mitigation, intrusion detection, and load balancing. The document concludes by noting that eBPF provides alternatives to iptables that are better suited to microservices architectures.
This presentation features a walk through the Linux kernel networking stack for users and developers. It will cover both existing essential networking features and recent developments, and will show how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses through various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as networking namespaces, segmentation offloading, TCP small queues, and low latency polling, and will discuss how to configure them.
Kubernetes has two simple but powerful network concepts: every Pod is connected to the same network, and Services let you talk to a Pod by name. Bryan will take you through how these concepts are implemented - Pod Networks via the Container Network Interface (CNI), Service Discovery via kube-dns and Service virtual IPs, then on to how Services are exposed to the rest of the world.
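A minimal Python sketch of the two concepts, assuming invented service names and addresses: name resolution to a stable virtual IP (the role kube-dns plays) and fan-out from the VIP to Pod IPs (the role kube-proxy plays).

```python
# Conceptual sketch: Service discovery by name, then load balancing
# from the Service's virtual IP to its backing Pod IPs (round robin).
# All names and addresses are invented for illustration.

services = {"web": "10.96.0.10"}                       # name -> virtual IP
endpoints = {"10.96.0.10": ["10.1.0.4", "10.1.0.5"]}   # VIP -> pod IPs
_counters = {}                                          # per-VIP round-robin state

def resolve(name: str) -> str:
    """What kube-dns does conceptually: Service name -> cluster VIP."""
    return services[name]

def pick_backend(vip: str) -> str:
    """What kube-proxy does conceptually: VIP -> one Pod IP."""
    n = _counters.get(vip, 0)
    _counters[vip] = n + 1
    pods = endpoints[vip]
    return pods[n % len(pods)]
```

The VIP stays stable while pods come and go; only the endpoints list behind it changes, which is why clients can always "talk to a Pod by name."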
Talk held at DevOps Gathering 2019 in Bochum on 2019-03-13. Abstract: This talk will address one of the most common challenges of organizations adopting Kubernetes on a medium to large scale: how to keep cloud costs under control without babysitting each and every deployment and cluster configuration? How to operate 80+ Kubernetes clusters in a cost-efficient way for 200+ autonomous development teams? This talk provides insights on how Zalando approaches this problem with central cost optimizations (e.g. Spot), cost monitoring/alerting, active measures to reduce resource slack, and automated cluster housekeeping. We will focus on how to ingrain cost efficiency in tooling and developer workflows while balancing rigid cost control with developer convenience and without impacting availability or performance. We will show our use case running Kubernetes on AWS, but all shown tools are open source and can be applied to most other infrastructure environments.
This presentation is for Go developers and operators of Go applications who are interested in reducing costs and latency, or debugging problems such as memory leaks, infinite loops, performance regressions, etc. of such applications. We'll start with a brief description of the unique aspects of the Go runtime, and then take a look at the builtin profilers as well as Go's execution tracer. Additionally we'll look at the interoperability with popular observability tools such as Linux perf and bpftrace. After this presentation you should have a good idea of the various tools you can use, and which ones might be the most useful to you in a production environment.
CNI, the Container Network Interface, is a standard API between container runtimes and container network implementations. These slides are from the Cloud Native Computing Foundation's Webinar, and explain what CNI is, how you use it, and what lies ahead on the roadmap.
Kubernetes currently has two load balancing modes: userspace and IPTables. Both have limitations in scalability and performance. We introduced IPVS as a third kube-proxy mode, which scales the Kubernetes load balancer to support 50,000 services. Beyond that, the control plane needs to be optimized in order to deploy 50,000 services. We will introduce alternative solutions and our prototypes with detailed performance data.
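The scalability difference can be sketched in Python: IPTables-style matching is a linear scan over the rule list, while IPVS keeps a hash table keyed by virtual IP, so lookup cost does not grow with the number of services. The addresses and the 50,000-entry table below are invented for illustration.

```python
# Sketch of the lookup-cost difference between a linear rule list
# (the IPTables model) and a hash table (the IPVS model).

def iptables_lookup(rules: list, vip: str):
    """O(n): walk the rule list until a rule matches the VIP."""
    for rule_vip, backend in rules:
        if rule_vip == vip:
            return backend
    return None

def ipvs_lookup(table: dict, vip: str):
    """O(1) expected: direct hash lookup on the VIP."""
    return table.get(vip)

# 50,000 invented services: the linear scan must, in the worst case,
# visit every rule; the hash lookup cost stays flat.
rules = [(f"10.96.{i // 256}.{i % 256}", f"pod-{i}") for i in range(50_000)]
table = dict(rules)
last_vip = rules[-1][0]
```

Real kube-proxy rules also match on port and protocol and perform DNAT, but the asymptotic difference sketched here is the core of the 50,000-services argument.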
Come explore the World of Cilium with us! In this workshop, you'll have the opportunity to discover Cilium and Tetragon, and the kernel technology that makes them possible, eBPF. Through a collection of hands-on labs (available at https://labs-map.isovalent.com/) and the presenter's support, you'll be able to explore many topics covering Cloud Native Networking, Security, and Observability. In this gamified approach, you'll also be able to earn badges for completing labs. Whether you're a Platform Engineer, SRE, Network Engineer, SecOps Professional, Cloud Architect, and more, you'll certainly find subjects to explore in this session!
Session at ContainerDay Security 2023 on the 8th of March in Hamburg. Cilium is the next generation, eBPF powered open-source Cloud Native Networking solution, providing security, observability, scalability, and superior performance. Cilium is an incubating project under CNCF and the leading CNI for Kubernetes. In this session we will introduce the fundamentals of Cilium Network Policies and the basics of application-aware and Identity-based Security. We will discuss the default-allow and default-deny approaches and visualize the corresponding ingress and egress connections. Using the Network Policy Editor, we will demonstrate what a Cilium Network Policy looks like and what it means on a given Kubernetes cluster. Additionally, we will walk through different examples and demonstrate how application traffic can be observed with Hubble, and show how you can use the Network Policy Editor to apply new Cilium Network Policies to your workloads. Finally, we'll demonstrate how Tetragon provides eBPF-based transparent security observability combined with real-time runtime enforcement.
SDN programming and operations require continuous monitoring of network and application state as well as consistent configuration and update of (forwarding) policies across heterogeneous devices. This results in significant challenges. Multiple open protocols such as OpenFlow, OF-CONFIG, OnePK, etc. are being adopted by different vendors, causing an integration problem for developers. Internet of Things applications are pushing the size and volume of data handled by SDN systems, demanding more efficient and scalable protocols for information distribution and coordination of SDN devices. This presentation will describe these and other SDN challenges and the ways in which various open protocols, such as DDS, XMPP, and AMQP, are being used to address them.
This document summarizes an SDN and cloud computing presentation given by Affan Basalamah and Dr.-Ing. Eueung Mulyana from Institut Teknologi Bandung. It discusses SDN and cloud computing research activities at ITB, including implementing OpenFlow networks, developing SDN courses, and student projects involving OpenFlow, OpenStack, and IPsec VPNs. It also describes forming an SDN research group at ITB to facilitate collaboration between academia, network operators, and vendors on SDN topics.
This document discusses Docker container networking and publishing applications securely with Docker Enterprise. It provides an overview of key Kubernetes networking concepts like pods, services, ingress and network policies. It then details how Docker Enterprise integrates with Calico for container networking and policy-driven security. The integration provides connectivity between pods and services out of the box. It also allows enforcing network policies and zero-trust security through Calico's policy engine. The document concludes with demos of publishing sample applications using Docker Swarm services and Kubernetes ingress resources.
We present a new open source project which provides IPv6 networking for Linux Containers by generating programs for each individual container on the fly and then running them as JITed BPF code in the kernel. By generating and compiling the code, the program is reduced to the minimally required feature set and then heavily optimised by the compiler as parameters become plain variables. The upcoming addition of the eXpress Data Path (XDP) to the kernel will make this approach even more efficient as the programs will get invoked directly from the network driver.
Affan Basalamah outlines a plan to implement SDN technology at Institut Teknologi Bandung (ITB) without disrupting the production network. He discusses upgrading ITB's core, datacenter, edge, access and wireless networks to support both production and experimental SDN networks. This will allow SDN research and development activities to be conducted using the campus network infrastructure. Basalamah also describes potential SDN/NFV labs, testbeds and collaboration opportunities between universities in Indonesia.
Talk presented at Kubernetes Community Day, New York, May 2024. Technical summary of Multi-Cluster Kubernetes Networking architectures with focus on 4 key topics: 1) Key patterns for multi-cluster architectures 2) Architectural comparison of several OSS/CNCF projects to address these patterns 3) Evolution trends for the APIs of these projects 4) Some design recommendations & guidelines for adopting/deploying these solutions.
Intro to Cilium: Microservices Security with Kubernetes Integration. Open source. Website: cilium.io. GitHub: github.com/cilium/cilium. Join our Slack: cilium.herokuapp.com. Follow us on Twitter: @ciliumproject @_techcet_
Tungsten Fabric SDN Controller overview, Microservices Architecture, and Multi-Cloud feature overview
DevOps engineers face many challenges when running Kubernetes clusters. Operational requirements demand tools for automation, provisioning, centralized logging and monitoring, and security. Developers demand tools for CI/CD, software development, data science, and managing modern deployment strategies like canary or blue/green deployments. Commercial tools and services can help with all of these, but often come with enterprise pricing. Open source to the rescue! Fortunately, in each of these areas, open source tools provide capabilities that match or exceed the capabilities of their commercial equivalents. Furthermore, Kubernetes greatly decreases the operational expense of self-hosting these tools, when compared to using a SaaS or running on VMs or bare metal. Often the most challenging task is selecting the right tool chain among the thousands of tools available on GitHub.
Kubernetes (K8s) is a powerful, flexible and portable open source framework for distributed containerized applications delivery and management. An important part of the services provided by most Kubernetes clusters is the containers’ networking stack. In most cases and for many applications it “just works”, but this seeming simplicity is backed by a complex stack of technologies that provide many capabilities beyond the basics. This presentation accompanies the meetup and webinar where Oleg Chunikhin, CTO at Kublr, shows how the Kubernetes networking stack works, and describes the main components, interfaces, and extensibility options. What is covered: - general notions of Kubernetes networking - Pods and Network Policies - implementation of Kubernetes networking - CNI, CNI plugins, and Linux network namespaces - some Kubernetes CNI providers: Calico, Weave, Flannel, and Canal - K8S networking extensibility for advanced and “exotic” use-cases with the Multus CNI plugin as an example
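The CNI contract mentioned above, in which the runtime calls a plugin with a JSON network configuration and the plugin answers with a JSON result, can be sketched with a toy Python plugin. The configuration fields shown are modeled on the CNI specification, but the `toy-ipam` allocation logic is invented for this sketch.

```python
# Toy sketch of a CNI-style exchange: the runtime passes a JSON network
# configuration, and the plugin returns a JSON result with assigned IPs.
# Real plugins are binaries invoked with the config on stdin plus
# CNI_COMMAND etc. in the environment; this sketch keeps only the
# JSON-in/JSON-out shape.

import json

_next_ip = [2]  # toy IPAM state: next host octet to hand out

def cni_add(config_json: str) -> str:
    """Handle a CNI ADD: parse the config, hand out the next address."""
    config = json.loads(config_json)
    subnet = config["ipam"]["subnet"]              # e.g. "10.22.0.0/16"
    base = subnet.split("/")[0].rsplit(".", 1)[0]  # "10.22.0"
    ip = f"{base}.{_next_ip[0]}"
    _next_ip[0] += 1
    return json.dumps({
        "cniVersion": config["cniVersion"],
        "ips": [{"address": ip + "/16"}],
    })

# Invented network configuration in the spirit of a CNI conflist entry.
config = json.dumps({
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "type": "toy-ipam",
    "ipam": {"subnet": "10.22.0.0/16"},
})
result = json.loads(cni_add(config))
```

Because the contract is just "config in, result out", the runtime never needs to know whether the plugin behind it is Calico, Weave, Flannel, or anything else.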
Presentation from Container Camp London 2015 which compares the network performance of containers on both AWS and Azure. Included SDN solutions in these tests are Flannel, Weave and Project Calico.
The document discusses Docker network performance testing in public clouds. It compares the performance of different Docker networking solutions (Flannel, Weave, Project Calico) to native networking performance on AWS and Azure VMs. The results show that while some Docker networks have little performance overhead, others like Weave can reduce bandwidth significantly compared to native networking. The document recommends further testing Docker network performance with real applications.
Infrastructure-related skills are essential for developers in cross-functional teams who build microservices for the cloud. Becoming proficient in infrastructure development is not just about understanding the hardware and software components on top of which applications run in the cloud. It's also about being able to use the tools that provide virtual access to this infrastructure and enable us to provision, configure, monitor it, and deploy applications to it. In this talk Gesa shares how building a Kubernetes cluster of Raspberry Pis and serving applications from it can help in acquiring fundamental infrastructure skills.
Cloud Native Computing Foundation and KubeCon 2024 - Paris. Cloud Native Artificial Intelligence (CNAI)
As the adoption of Kubernetes continues to grow, so does the need for securing containerized applications and their data. One effective security model that has gained popularity is Zero Trust Networking, which assumes that all resources, devices and users are untrusted, and access to resources is granted only after proper authentication and authorization. However, implementing Zero Trust Networking in Kubernetes can be challenging, given the dynamic nature of containerized workloads and the complexity of network policies. In this presentation, we will explore how to implement Zero Trust Networking in Kubernetes using Cilium, Hubble & Grafana. We will start by setting up Cilium on a Kubernetes cluster, which provides network security by enforcing identity-based access control policies using eBPF. Next, we will export Network Policy Verdict metrics using Hubble, which allows us to visualize network policies and track security events in real-time. Finally, we will use a Grafana dashboard to visualize these metrics and demonstrate how to secure a Kubernetes namespace without affecting existing traffic in the namespace. By the end of this presentation, attendees will have a good understanding of the importance of Zero Trust Networking in Kubernetes and how to implement it using Cilium, Hubble & Grafana. They will also learn how to secure a Kubernetes namespace and monitor network policies using a Grafana dashboard.
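The verdict metrics described above can be illustrated with a small Python sketch that aggregates a stream of flow records into per-namespace counters, the shape of data a Grafana panel would chart. The flow records below are invented for this sketch.

```python
# Sketch of policy-verdict metrics: count FORWARDED vs DROPPED flows
# per namespace, the way a dashboard fed by Hubble flow data would.

from collections import Counter

# Invented flow records for illustration.
flows = [
    {"namespace": "shop", "verdict": "FORWARDED"},
    {"namespace": "shop", "verdict": "DROPPED"},
    {"namespace": "shop", "verdict": "FORWARDED"},
    {"namespace": "infra", "verdict": "FORWARDED"},
]

def verdict_metrics(flows):
    """Aggregate flows into (namespace, verdict) counters."""
    return dict(Counter((f["namespace"], f["verdict"]) for f in flows))

metrics = verdict_metrics(flows)
```

Watching the DROPPED counter per namespace is what makes the workflow in the talk safe: a policy can be introduced and its effect on existing traffic observed before enforcement surprises anyone.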
In this presentation, we will discuss AirWave 10, a new software build that streamlines the code and adds performance and clustering improvements. Check out the webinar recording where this presentation was used: http://community.arubanetworks.com/t5/Network-Management/Technical-Webinar-Introduction-to-AirWave-10/td-p/454762 Register for the upcoming webinars: https://community.arubanetworks.com/t5/Training-Certification-Career/EMEA-Airheads-Webinars-Jul-Dec-2017/td-p/271908
In this presentation I talk about our motivation to converting our microservices to run on Kubernetes. I discuss many of the technical challenges we encountered along the way, including networking issues, Java issues, monitoring and alerting, and managing all of our resources!
This document provides an overview of IRATI, an open source implementation of RINA for Linux/OS. It discusses the goals of being tightly integrated with the OS, supporting existing applications, and experimentation. The high-level design uses a Linux kernel with user-space daemons. Implementation status provides details on various IPCP components and policies. Experimental activities describe designing RINA networks and interoperating with legacy technologies. Open source initiatives discuss the IRATI GitHub organization and planned contributions from projects like PRISTINE and IRINA.
eBPF is used in several cloud native security tools. In this talk we’ll dive into demos and code to explore how eBPF can be used for the next generation of security enforcement tooling. This talk will cover: - Why NetworkPolicy enforcement with eBPF has been in place for years, while preventive security for applications has taken longer. - How Phantom attacks can compromise the use of basic system call hooks. - How other eBPF attachment points, such as BPF LSM, can be used for preventive security.
From KubeCon to ContainerDays, eBPF is trendy in the Cloud Native world. But what is it, why is this technology revolutionary, and what can it concretely bring to you? Through concrete examples applied to observability, networking, and security, this session explains the principles of eBPF and its concrete advantages for connecting and securing Cloud Native applications. The talk covers what eBPF is, why it is revolutionary in several fields, gives examples of tools using eBPF and what they gain from it, and opens up to the future of the technology, showing how to start your eBPF journey with tools that let you benefit from its superpowers with ease.
The document discusses technical debt and strategies for managing it over time. It advocates for loose coupling between components using techniques like immutability, microservices, and standards. This distributes technical debt across teams and helps systems evolve more gradually over time like a tortoise, rather than taking on large debt quickly like a hare. The document recommends focusing on direction over speed and emphasizes the importance of stability, feedback, and continual learning to effectively manage technical debt.
Raphaël Pinson presented on implementing GitOps with the DevOps Stack. The DevOps Stack provides an opinionated Kubernetes stack that is deployed and managed using GitOps. It handles provisioning Kubernetes, integrating single sign-on, and managing observability tools through Argo CD. Argo CD syncs the cluster state with the desired manifests in Git, ensuring congruence. It also provides an interface for managing applications and templates. The DevOps Stack offers a standardized way to deploy common services and manage infrastructure as code.
The document summarizes key points from a presentation about open source, standards, and technical debt. It discusses how technical debt can go unnoticed but must eventually be paid back, and how following standards helps avoid issues related to not invented here syndrome. It also covers topics like loose coupling through immutability, team topologies as related to code ownership and debt dilution, and how public cloud can help delegate technical debt but introduce new dependencies. Throughout, it emphasizes that the important thing is not speed but direction when it comes to reducing technical debt over time.
The document discusses DevOps Stack, an open source project that provides tools and examples for deploying infrastructure as code using technologies like Puppet, Terraform, and Kubernetes. It provides an overview of the project and links to its website, GitHub, and similar projects. The document encourages joining the Camptocamp team behind the DevOps Stack.
YAML has become the de-facto standard to express resources in many fields linked to DevOps practices. What are YAML’s strengths and weaknesses, and what are the other options going forward?
Containers and Kubernetes have revolutionized the way applications are deployed at scale. This new approach, along with the use of CI/CD for deployment automation, brings new challenges, in particular when it comes to security, as containers are static artifacts that require rebuilding and redeployment in order to perform updates. This talk will demonstrate how to set up an automated CI/CD pipeline to deploy applications on Kubernetes using OpenShift and GitLab, so that updates of public base images trigger rebuilds and deployments of derivative containers. It will also show how static image analysis can be plugged into the pipeline to increase application security.
This document discusses K9s, a rich Kubernetes client that provides a VIM-like interface for interacting with Kubernetes clusters. K9s does not require in-cluster installation but is instead a standalone Golang binary. It allows viewing and filtering Kubernetes resources, logs, port forwarding, and more through an intuitive interface with key bindings. Plugins can add additional functionality and views can be customized through skins defined in YAML.