Cilium is open source software for providing and transparently securing network connectivity and load balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services, and at Layer 7 to protect and secure the use of modern application protocols such as HTTP, gRPC, and Kafka. The foundation of Cilium is the Linux kernel technology BPF, which supports the dynamic insertion of BPF bytecode into the kernel at various integration points. This presentation reveals the secrets of Kubernetes networking and gives you a deep dive into Cilium and why it is awesome!
The Cilium project is a popular networking solution for Kubernetes, based on eBPF. This talk uses eBPF code and demos to explore the basics of how Cilium makes network connections and manipulates packets so that they can avoid traversing the kernel's built-in networking stack. You'll see how eBPF enables high-performance networking as well as deep network observability and security.
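As a hedged illustration of the packet manipulation described above, here is a minimal sketch of the kind of tc-BPF program Cilium attaches to network devices: it redirects packets straight to another interface instead of letting them continue through the kernel's full routing and netfilter path. This is not Cilium's actual datapath, and the target ifindex is a made-up example value.

```c
/* Minimal sketch (not Cilium's real code): a tc-BPF program that
 * redirects every packet straight to another interface, skipping the
 * stack's routing and netfilter traversal for the forwarded path. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int redirect_to_pod(struct __sk_buff *skb)
{
    const int pod_ifindex = 4; /* hypothetical target device index */

    /* bpf_redirect() queues the packet for transmission on the target
     * interface and returns TC_ACT_REDIRECT for the stack to act on. */
    return bpf_redirect(pod_ifindex, 0);
}

char _license[] SEC("license") = "GPL";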
The document discusses Kubernetes networking. It describes how Kubernetes networking allows pods to have routable IPs and communicate without NAT, unlike Docker networking which uses NAT. It covers how services provide stable virtual IPs to access pods, and how kube-proxy implements services by configuring iptables on nodes. It also discusses the DNS integration using SkyDNS and Ingress for layer 7 routing of HTTP traffic. Finally, it briefly mentions network plugins and how Kubernetes is designed to be open and customizable.
Session at ContainerDay Security 2023 on the 8th of March in Hamburg. Cilium is the next-generation, eBPF-powered open-source Cloud Native Networking solution, providing security, observability, scalability, and superior performance. Cilium is an incubating project under the CNCF and the leading CNI for Kubernetes. In this session we will introduce the fundamentals of Cilium Network Policies and the basics of application-aware and identity-based security. We will discuss the default-allow and default-deny approaches and visualize the corresponding ingress and egress connections. Using the Network Policy Editor we will demonstrate what a Cilium Network Policy looks like and what it means on a given Kubernetes cluster. Additionally, we will walk through different examples, demonstrate how application traffic can be observed with Hubble, and show how you can use the Network Policy Editor to apply new Cilium Network Policies to your workloads. Finally, we'll demonstrate how Tetragon provides eBPF-based transparent security observability combined with real-time runtime enforcement.
This document summarizes a presentation about Cilium and eBPF. Cilium provides cloud native networking and security using eBPF. eBPF allows programs to run securely in the Linux kernel for networking, security, and observability. Cilium offers networking features like Kubernetes services, cluster mesh for multi-cluster connectivity, and platform integration. It also provides security using identity-based policies and API authorization. Observability features include flow visibility and service maps. Cilium can be used as a service mesh or with Tetragon for prevention capabilities without proxies.
The document discusses how BPF and XDP are revolutionizing network security and performance for microservices. BPF allows profiling, tracing, and running programs at the network driver level. It also enables highly performant networking functions like DDoS mitigation using XDP. Cilium uses BPF to provide layer 3-7 network security for microservices with policies based on endpoints, identities, and HTTP protocols. It integrates with Kubernetes to define network policies and secure microservice communication and APIs using eBPF programs for filtering and proxying.
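To make the XDP-based DDoS mitigation mentioned above concrete, here is a small sketch (not Cilium's actual code) in which a BPF hash map holds blocked source IPv4 addresses and matching packets are dropped at the driver, before the stack allocates any socket buffer. The map name and size are illustrative.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Blocked source IPv4 addresses; filled in from user space. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 65536);
    __type(key, __u32);  /* source address, network byte order */
    __type(value, __u8);
} blocklist SEC(".maps");

SEC("xdp")
int drop_blocked(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    __u32 saddr = ip->saddr; /* copy to the stack for the map lookup */
    if (bpf_map_lookup_elem(&blocklist, &saddr))
        return XDP_DROP; /* dropped before any skb is allocated */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```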
Kubernetes has two simple but powerful network concepts: every Pod is connected to the same network, and Services let you talk to a Pod by name. Bryan will take you through how these concepts are implemented - Pod Networks via the Container Network Interface (CNI), Service Discovery via kube-dns and Service virtual IPs, then on to how Services are exposed to the rest of the world.
This document outlines an agenda for a workshop on Kubernetes networking with eBPF and Cilium. The workshop covers various topics including principles of eBPF and Cilium, Kubernetes networking, cluster mesh, security, observability, service mesh, and Tetragon. It provides overviews and examples for each topic. The workshop is presented by Raphaël Pinson, who works on Cilium at Isovalent.
The document discusses Cilium and Istio with Gloo Mesh. It provides an overview of Gloo Mesh, an enterprise service mesh for multi-cluster, cross-cluster, and hybrid environments based on upstream Istio. Gloo Mesh focuses on ease of use, powerful built-in best practices, security, and extensibility. It provides a consistent API for multi-cluster north-south and east-west policy, offers team tenancy with service mesh as a service, and lets you drive everything through GitOps.
The document discusses Cilium and cgroup eBPF applications. It provides an overview of Cilium and a history of cgroup eBPF in Linux kernels dating back to 2016, and explains how Cilium uses cgroup eBPF to implement service load balancing and network policy enforcement in Kubernetes. Specifically, it describes how the Cilium agent programs cgroup eBPF maps and programs based on Kubernetes services and endpoints, and how cgroup eBPF programs handle socket calls like connect and getpeername to implement load balancing and network address translation.
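A minimal sketch of the socket-level mechanism described above: a program attached at cgroup/connect4 rewrites a service VIP to a backend address at connect() time, so no per-packet NAT is needed afterwards (a companion getpeername hook reverses the translation so applications still see the VIP). The VIP, backend, and hard-coded match below are invented for illustration; Cilium drives the real version from eBPF maps its agent keeps in sync with Kubernetes services and endpoints.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("cgroup/connect4")
int lb_connect4(struct bpf_sock_addr *ctx)
{
    /* Hypothetical service VIP 10.96.0.10:80 -> backend 10.0.1.5:8080.
     * A real implementation selects a backend from a map; a constant
     * stands in here. Addresses/ports are in network byte order. */
    if (ctx->user_ip4 == bpf_htonl(0x0A60000A) &&
        ctx->user_port == bpf_htons(80)) {
        ctx->user_ip4 = bpf_htonl(0x0A000105);
        ctx->user_port = bpf_htons(8080);
    }
    return 1; /* allow the (possibly rewritten) connect() to proceed */
}

char _license[] SEC("license") = "GPL";
```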
Kubernetes currently has two load balancing modes: userspace and iptables. Both have limitations in scalability and performance. We introduced IPVS as a third kube-proxy mode, which scales the Kubernetes load balancer to support 50,000 services. Beyond that, the control plane needs to be optimized in order to deploy 50,000 services. We will introduce alternative solutions and our prototypes, with detailed performance data.
This document provides an introduction to eBPF and XDP. It discusses the history of BPF and how it evolved into eBPF. Key aspects of eBPF covered include the instruction set, JIT compilation, verifier, helper functions, and maps. XDP is introduced as a way to program the data plane using eBPF programs attached early in the receive path. Example use cases and performance benchmarks for XDP are also mentioned.
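The pieces mentioned above (maps, helper functions, and an XDP attach point early in the receive path) fit together roughly as in this minimal sketch, which counts received packets in a per-CPU array map; the names are illustrative.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* A map: state shared between the program and user space. */
struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    __u32 key = 0;

    /* A helper call: the verified way to reach kernel services. */
    __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);
    if (val)
        (*val)++; /* per-CPU slot, so no atomic op is required */

    return XDP_PASS; /* hand the packet on to the normal stack */
}

char _license[] SEC("license") = "GPL";
```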
This talk demonstrates that programmability and performance do not require user space networking; they can be achieved in the kernel by generating BPF programs and leveraging the existing kernel subsystems. We will demo an early prototype which provides fast IPv6 & IPv4 connectivity to containers, container-label-based security policy with O(1) average cost, and debugging and monitoring based on the per-CPU perf ring buffer. We encourage a lively discussion on the approach taken and next steps.
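A sketch of what an O(1) label-based policy check can look like, under the assumption that label sets are resolved to numeric security identities and allowed identity pairs are kept in a hash map; the map layout, the struct, and the use of skb->mark to carry the source identity are hypothetical simplifications, not the prototype's actual code.

```c
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical key: a pair of numeric security identities derived
 * from container label sets by the control plane. */
struct policy_key {
    __u32 src_identity;
    __u32 dst_identity;
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 16384);
    __type(key, struct policy_key);
    __type(value, __u8); /* presence means "allow" */
} policy_map SEC(".maps");

SEC("tc")
int enforce_policy(struct __sk_buff *skb)
{
    /* Simplification: assume an earlier program stashed the source
     * identity in skb->mark; 42 is a made-up identity for the local
     * endpoint this program guards. */
    struct policy_key key = {
        .src_identity = skb->mark,
        .dst_identity = 42,
    };

    /* One hash lookup per packet: O(1) regardless of rule count. */
    if (bpf_map_lookup_elem(&policy_map, &key))
        return TC_ACT_OK;
    return TC_ACT_SHOT; /* default deny */
}

char _license[] SEC("license") = "GPL";
```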
In this slide, we discussed the IPVS, including the introduction, demonstration, implementation, and integration in Kubernetes. IPVS was based on the netfilter and we discussed how it works with iptables and also compares the detail implementation in Kubernetes to show why IPVS has a better performance in IPTABLES.
GitOps Days Community Special Watch the video here: https://youtu.be/0v5bjysXTL8 New to GitOps or been a long-time Flux user? We'll walk you through the benefits of GitOps and then demo it in action with a sneak peek into the next-gen Flux and GitOps Toolkit! * Automation! * Visibility! * Reconciliation! * Powerful use of Prometheus and Grafana! * GitOps for Helm! For Flux users, Flux v1 is decoupled into Flux v2 and GitOps Toolkit. We'll demo how this decoupling gives you more control over how you do GitOps, with fewer steps! Join Leigh Capili and Tamao Nakahara as they show you GitOps in action with Flux and GitOps Toolkit. Note to our Flux community that Flux v2 and the GitOps Toolkit are in development and Flux v1 is in maintenance mode. These talks and upcoming guides will give you the most up-to-date info and steps to migrate once we reach feature parity and start the migration process. We are dedicated to the smoothest experience possible for our Flux community, so please join us if you'd like early access and to give us feedback for the migration process. We are really excited by the improvements and want to take this opportunity to show you what the GitOps Toolkit is all about, walk you through the guides and get your feedback! For more info, see https://toolkit.fluxcd.io/. Here's our latest blog post on Flux v2 and GitOps Toolkit updates: https://www.weave.works/blog/the-road-to-flux-v2-october-update
CNI, the Container Network Interface, is a standard API between container runtimes and container network implementations. These slides are from the Cloud Native Computing Foundation's Webinar, and explain what CNI is, how you use it, and what lies ahead on the roadmap.
This talk provides a 101 introduction to Kubernetes from a user point of view. Aimed at service providers, it was presented at the GPN Annual Meeting 2019. https://conferences.k-state.edu/gpn/
Follow along in this free workshop and experience GitOps! AGENDA: Welcome - Tamao Nakahara, Head of DX (Weaveworks) Introduction to Kubernetes & GitOps - Mark Emeis, Principal Engineer (Weaveworks) Weave Gitops Overview - Tamao Nakahara Free Gitops Workshop - David Harris, Product Manager (Weaveworks) If you're new to Kubernetes and GitOps, we'll give you a brief introduction to both and how GitOps is the natural evolution of Kubernetes. Weave GitOps Core is a continuous delivery product to run apps in any Kubernetes. It is free and open source, and you can get started today! https://www.weave.works/product/gitops-core If you’re stuck, also come talk to us at our Slack channel! #weave-gitops http://bit.ly/WeaveGitOpsSlack (If you need to invite yourself to the Slack, visit https://slack.weave.works/)
Extended BPF (eBPF) provides a mechanism for running custom programs inside the Linux kernel that can be used for filtering network packets, monitoring system activity, and more. eBPF programs are written in a restricted subset of C and compiled to bytecode that is verified by the kernel for safety before being run. The BCC toolkit makes it easier to write and load eBPF programs. The IO Visor project aims to further develop eBPF and provide tools and use cases for networking, security, and system tracing applications.
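For a feel of the restricted C subset involved, here is a small kprobe program of the kind the BCC toolkit can compile and load for you (written here in plain libbpf style rather than BCC's macro dialect); the traced kernel function is an example choice.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} call_count SEC(".maps");

/* Fires each time the kernel function tcp_v4_connect() is entered. */
SEC("kprobe/tcp_v4_connect")
int count_connects(void *ctx)
{
    __u32 key = 0;

    __u64 *val = bpf_map_lookup_elem(&call_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1); /* shared slot: add atomically */
    return 0;
}

char _license[] SEC("license") = "GPL";
```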
BPF (Berkeley Packet Filter) has evolved from a limited virtual machine for efficient packet filtering into a new type of software called extended BPF. Extended BPF allows custom, efficient, and production-safe performance analysis tools and observability programs to run in the Linux kernel. It enables new event-based applications running as BPF programs attached to various kernel events like kprobes, uprobes, tracepoints, sockets, and more. Major companies like Facebook, Google, and Netflix are using BPF programs for tasks like intrusion detection, container security, firewalling, and observability, with over 150,000 AWS instances running BPF programs. BPF provides a new program model and security features compared to loadable kernel modules.
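As one hedged example of the event-based model described above, this sketch attaches to a syscall tracepoint and logs the name of each process calling execve; it is a generic illustration, not any particular company's tooling.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int log_execve(void *ctx)
{
    char comm[16];

    /* Name of the task issuing the syscall. */
    bpf_get_current_comm(comm, sizeof(comm));

    /* Readable via /sys/kernel/debug/tracing/trace_pipe. */
    bpf_printk("execve by %s", comm);
    return 0;
}

char _license[] SEC("license") = "GPL";
```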
At the core of fast network packet processing lies the ability to filter packets, or in other words, to apply a set of rules to packets, usually consisting of a pattern to match (L2 to L4 source and destination addresses and ports, protocols, etc.) and corresponding actions (redirect to a given queue, drop the packet, etc.). Over the years, several filtering frameworks have been added to Linux. While at the lower level ethtool can be used to configure N-tuple rules on the receive side for the hardware, the upper layers of the stack got equipped with rules for firewalling (Netfilter), traffic shaping (TC), or packet switching (Open vSwitch, for example). In this presentation, Quentin Monnet reviews the needs for those filtering frameworks and the particularities of each one. The second part focuses on the changes brought by eBPF and XDP in this landscape: as BPF programs allow for very flexible processing and can be attached very low in the stack (at the driver level, or even run on the NIC itself), they offer filtering capabilities without precedent in terms of performance and versatility in the kernel. Lastly, the third part explores potential leads for creating bridges between the different rule formats and making it easier for users to build their filtering eBPF programs.
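To illustrate the kind of match-and-action rule discussed above expressed as an eBPF program, here is a sketch of an XDP filter that parses the L2-L4 headers and drops TCP packets destined to one port; the port is an arbitrary example, and IP options are assumed absent for brevity.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_tcp_8080(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Parse Ethernet (the match pattern starts at L2). */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    /* Parse IPv4; assumes no IP options, for brevity. */
    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_TCP)
        return XDP_PASS;

    /* Parse TCP and apply the action. */
    struct tcphdr *tcp = (void *)(ip + 1);
    if ((void *)(tcp + 1) > data_end)
        return XDP_PASS;

    if (tcp->dest == bpf_htons(8080))
        return XDP_DROP; /* rule matched: drop at the driver */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```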
Ed Warnicke's talk at the Open Networking Summit. All Open Source Networking projects depend on having access to a universal dataplane that is: able to support many deployment models (Bare Metal, Embedded, Cloud, Containers, NFVi, VNFs); high performance; feature rich; and open, with broad community support and participation. FD.io provides all of this and more. Come learn more about FD.io and how you can begin using it.