With the rise of modern containers come new problems to solve, especially in networking. Numerous container SDN solutions have recently entered the market, each best suited to a particular environment. Combined with the multiple container runtimes and orchestrators available today, there is a need for a common layer that lets them interoperate with network solutions. Because different environments demand different networking solutions, vendors with many viewpoints look to a specification to guide interoperability.

The Container Network Interface (CNI) is a specification started by CoreOS, with input from the wider open source community, aimed at making network plugins interoperable across container execution engines. It aims to be as common and vendor-neutral as possible, supporting a wide variety of networking options, from MACVLAN to modern SDNs such as Weave and flannel.

CNI is growing in popularity. It got its start as the network plugin layer for rkt, a container runtime from CoreOS. Today rkt ships with multiple CNI plugins that let users take advantage of virtual switching, MACVLAN, and IPVLAN, as well as multiple IP management strategies, including DHCP. CNI is seeing even wider adoption now that Kubernetes supports it. Kubernetes accelerates development cycles while simplifying operations, and with CNI support it takes the next step toward common ground for networking.

This talk will cover the CNI interface, including an example of how to build a simple plugin. It will also show Kubernetes users how CNI can solve their networking challenges and how they can get involved. KubeCon schedule link: http://sched.co/4VAo
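This is not the talk's own example, but a minimal sketch of the executable contract a CNI plugin follows: the runtime invokes the plugin binary with the operation in the CNI_COMMAND environment variable and the network configuration as JSON on stdin, and expects a JSON result on stdout. The IP address and version strings below are placeholders.

```python
#!/usr/bin/env python3
"""Minimal CNI plugin sketch (illustrative only). A real plugin would also
create and configure interfaces, e.g. a veth pair moved into the network
namespace given in CNI_NETNS."""
import json
import os
import sys

def main():
    command = os.environ.get("CNI_COMMAND")  # ADD, DEL, or VERSION
    if command == "VERSION":
        print(json.dumps({"cniVersion": "0.3.1",
                          "supportedVersions": ["0.1.0", "0.2.0", "0.3.1"]}))
        return 0
    conf = json.load(sys.stdin)              # network config from the runtime
    if command == "ADD":
        # Placeholder static result; a real plugin would allocate the address
        # via its IPAM plugin and attach an interface in CNI_NETNS.
        result = {
            "cniVersion": conf.get("cniVersion", "0.3.1"),
            "ips": [{"version": "4", "address": "10.10.0.5/24"}],
        }
        print(json.dumps(result))
    elif command == "DEL":
        pass                                  # tear down whatever ADD created
    return 0

if __name__ == "__main__":
    sys.exit(main())
```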
This document provides an overview and agenda for a Docker networking deep-dive presentation. It covers key concepts in Docker networking, including libnetwork, the Container Networking Model (CNM), multi-host networking, service discovery, load balancing, and new features in Docker 1.12 such as the routing mesh and secured control/data planes. The presentation demonstrates Docker networking use cases such as the default bridge network, user-defined bridge networks, and overlay networks, and also covers networking drivers, Docker 1.12 swarm mode networking, and how concepts like the routing mesh and load balancing work.
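As a small illustration of the user-defined bridge networks the agenda demonstrates, here is a sketch using the Docker SDK for Python (the `docker` package). The network and container names are made up for the example, and it assumes a running Docker daemon.

```python
import docker

client = docker.from_env()

# Create a user-defined bridge network; containers attached to it can
# resolve each other by name via Docker's embedded DNS.
net = client.networks.create("demo-bridge", driver="bridge")

# Run two containers on the network and check name-based connectivity.
client.containers.run("alpine", ["sleep", "300"], name="web",
                      network="demo-bridge", detach=True)
out = client.containers.run("alpine", ["ping", "-c", "1", "web"],
                            network="demo-bridge", remove=True)
print(out.decode())
```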
Slides from week 1 of the "Red Hat OpenStack 17 Author-Led Lecture + Study Group" session, held on April 6, 2023.
eBPF is gaining popularity in the Cloud Native community, and it is often the best solution for challenges that require deep observability of the system. eBPF is currently being embraced by major players. Mydbops co-founder Kabilesh P.R (MySQL and MongoDB consultant) demonstrates debugging Linux issues with eBPF: a brief introduction to BPF and eBPF, BPF internals, and the tools in action for faster resolution.
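To give a taste of the kind of tool the talk demonstrates, here is a minimal sketch (not the speaker's code) using bcc, the BPF Compiler Collection: it counts syscalls per process, the sort of quick instrument one might use to spot a misbehaving database client. Requires root and the bcc Python bindings.

```python
#!/usr/bin/env python3
# Minimal bcc sketch: count syscalls per process via a raw_syscalls tracepoint.
from bcc import BPF
from time import sleep

prog = r"""
BPF_HASH(counts, u32, u64);                 // pid -> syscall count

TRACEPOINT_PROBE(raw_syscalls, sys_enter) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *val;
    val = counts.lookup_or_try_init(&pid, &zero);
    if (val) (*val)++;
    return 0;
}
"""

b = BPF(text=prog)
print("Counting syscalls for 5 seconds...")
sleep(5)
top = sorted(b["counts"].items(), key=lambda kv: kv[1].value, reverse=True)
for pid, count in top[:10]:
    print(f"pid {pid.value}: {count.value} syscalls")
```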
Course notes for the Certified Kubernetes Administrator (CKA) exam, with unique notes for each section. Designed to be engaging and to serve as a future reference for Kubernetes concepts.
This document provides an overview of IT automation using Ansible. It discusses using Ansible to automate tasks across multiple servers, such as installing packages and copying files, without needing to log in to each server individually. It also covers Ansible concepts like playbooks, variables, modules, and Vault for securely storing passwords. Playbooks let you define automation jobs as code that can be run on many servers simultaneously in a consistent, repeatable way.
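For a sense of how a playbook codifies a job across many servers, here is a sketch that embeds a small playbook and runs it through the `ansible-runner` Python library. The host group, hostnames, and package are illustrative, and driving Ansible from Python is just one option; the same playbook runs directly with `ansible-playbook`.

```python
#!/usr/bin/env python3
# Sketch: define a small playbook in code and run it with ansible-runner.
import pathlib
import tempfile
import ansible_runner

PLAYBOOK = """\
- name: Install and configure nginx on all web servers
  hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Copy the site config to every host
      ansible.builtin.copy:
        src: ./nginx.conf
        dest: /etc/nginx/nginx.conf
"""

# ansible-runner expects playbooks under <private_data_dir>/project/.
workdir = pathlib.Path(tempfile.mkdtemp())
project = workdir / "project"
project.mkdir(parents=True)
(project / "site.yml").write_text(PLAYBOOK)

# Inventory passed inline as an INI string; in practice, list real hosts.
result = ansible_runner.run(
    private_data_dir=str(workdir),
    playbook="site.yml",
    inventory="[webservers]\nweb1.example.com\nweb2.example.com\n",
)
print("Status:", result.status)             # e.g. "successful" or "failed"
```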
The document compares eBPF, XDP, and DPDK for packet inspection. It describes the speaker's experience using these tools to build a virtual machine that can handle 10 Gbps of traffic and drop packets to mitigate DDoS attacks, and details how eBPF and XDP achieved higher packet drop rates than iptables or a custom kernel module. While DPDK could drop traffic at line rate, it required specialized hardware and expertise. Ultimately, XDP provided the best balance of performance, driver support, and programmability, using eBPF to drop millions of packets per second.
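This is not the speaker's actual code, but a minimal bcc sketch of the XDP approach described: an eBPF program attached at the driver hook that drops UDP packets before the kernel stack ever sees them. The interface name is illustrative; it requires root and an XDP-capable driver.

```python
#!/usr/bin/env python3
# Minimal XDP sketch with bcc: drop all UDP packets at the driver hook.
import time
from bcc import BPF

prog = r"""
#define KBUILD_MODNAME "xdp_drop_udp"
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>

int xdp_drop_udp(struct xdp_md *ctx) {
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                    // too short, let the stack decide
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol == IPPROTO_UDP)
        return XDP_DROP;                    // dropped before the kernel stack
    return XDP_PASS;
}
"""

device = "eth0"                             # illustrative interface name
b = BPF(text=prog)
fn = b.load_func("xdp_drop_udp", BPF.XDP)
b.attach_xdp(device, fn, 0)
try:
    print(f"Dropping UDP on {device}; Ctrl-C to stop.")
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    b.remove_xdp(device, 0)                 # detach the program on exit
```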
The Linux kernel is undergoing the most fundamental architectural evolution in its history: it is becoming a microkernel. Why is the Linux kernel evolving into a microkernel? It is potentially the biggest fundamental change ever to happen to the Linux kernel. This talk covers how companies like Facebook and Google use BPF to patch 0-day exploits, how BPF will change the way features are added to the kernel forever, and how BPF is introducing a new type of application deployment method for the Linux kernel.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units called pods for easy management and discovery. Kubernetes can manage pods across a cluster of machines, providing scheduling, deployment, scaling, load balancing, volume mounting, and networking. It is widely used by organizations such as Google and CERN, and in large-scale projects such as image processing and particle-interaction analysis. Kubernetes is portable, can span multiple cloud providers, and continues to grow to support new workloads and use cases.
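As a small taste of this automation surface, here is a sketch using the official Kubernetes Python client to list the pods the scheduler has placed across the cluster. It assumes a working kubeconfig.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config
# (use config.load_incluster_config() when running inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# List every pod the cluster is managing, with its node placement.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```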
What plugins, tools, and behaviors can help you get the most out of Jenkins without all of the pain? We'll find out as we go over a set of power tools, habits, and best practices that will help with any Jenkins setup.
The Cilium project is a popular eBPF-based networking solution for Kubernetes. This talk uses eBPF code and demos to explore the basics of how Cilium makes network connections and manipulates packets so that they can avoid traversing the kernel's built-in networking stack. You'll see how eBPF enables high-performance networking as well as deep network observability and security.
SOSCON 2019.10.17

What are the methods for packet processing on Linux, and how fast is each of them? In this presentation, we will look at how packets can be handled on Linux (user space, socket filter, netfilter, tc) and compare their performance, analyzing where in the network stack each method hooks in (its hook point). We will also discuss packet processing using XDP, an in-kernel fast path recently added to the Linux kernel. eXpress Data Path (XDP) is a high-performance programmable network data path within the Linux kernel. XDP sits at the lowest software-accessible point in the network stack: the moment the driver receives the packet. By using the eBPF infrastructure at this hook point, the network stack can be extended without modifying the kernel.

Daniel T. Lee (Hoyeon Lee) @danieltimlee
Daniel T. Lee works as a software engineer at Kosslab and contributes to the Linux kernel BPF project. He is interested in cloud, Linux networking, and tracing technologies, and likes to analyze kernel internals using BPF.
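As a hedged illustration of one of the hooks compared above, here is a bcc sketch of a socket-filter eBPF program that counts IP packets per protocol on one interface. Unlike XDP, a socket filter sees packets only after the driver and stack have received them. The interface name is illustrative and it requires root.

```python
#!/usr/bin/env python3
# bcc sketch: a socket-filter eBPF program counting packets per IP protocol.
from bcc import BPF
from time import sleep

prog = r"""
#include <bcc/proto.h>

BPF_HASH(proto_count, u64, u64);            // ip protocol -> packet count

int count_packets(struct __sk_buff *skb) {
    u8 *cursor = 0;
    struct ethernet_t *eth = cursor_advance(cursor, sizeof(*eth));
    if (eth->type != 0x0800)                // not IPv4
        return 0;
    struct ip_t *ip = cursor_advance(cursor, sizeof(*ip));
    u64 proto = ip->nextp, zero = 0;
    u64 *val = proto_count.lookup_or_try_init(&proto, &zero);
    if (val) (*val)++;
    return 0;                               // 0 = pass nothing to the raw socket
}
"""

b = BPF(text=prog)
fn = b.load_func("count_packets", BPF.SOCKET_FILTER)
BPF.attach_raw_socket(fn, "eth0")           # illustrative interface name
sleep(10)
for proto, count in b["proto_count"].items():
    print(f"IP protocol {proto.value}: {count.value} packets")
```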
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units called Pods for easy management and discovery. ReplicaSets ensure that a specified number of Pod replicas are running at any given time. Key components include Pods, Services for enabling network access to applications, and Deployments for updating Pods and managing releases.
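A sketch of driving the Deployment, ReplicaSet, and Pod chain with the official Kubernetes Python client; the names and image are illustrative, and it assumes a working kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# A Deployment asks for 3 replicas; its controller creates a ReplicaSet,
# which in turn keeps 3 matching Pods running at all times.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```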
Video: https://www.youtube.com/watch?v=JRFNIKUROPE . Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF).

Abstract: A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF) that have been included in Linux: an in-kernel virtual machine that can execute programs defined in user space. It is finding uses in security auditing and enforcement, networking enhancements (including eXpress Data Path), and performance observability and troubleshooting. Many new open source performance-analysis tools that use BPF have been written in the past 12 months. Tracing superpowers have finally arrived for Linux!

For tracing, BPF adds programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including tools for measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.

This talk summarizes BPF capabilities and use cases so far, and then focuses on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
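In the spirit of the bcc tools named above, here is a minimal sketch of the in-kernel histogram technique the talk highlights: timestamp a kernel function at entry, compute the latency at return, and summarize it in-kernel as a log2 histogram. The kprobe target vfs_read() is chosen for illustration; requires root and bcc.

```python
#!/usr/bin/env python3
# bcc sketch: in-kernel log2 histogram of vfs_read() latency.
from bcc import BPF
from time import sleep

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(start, u32, u64);        // tid -> entry timestamp
BPF_HISTOGRAM(dist);              // log2 latency buckets, summarized in-kernel

int trace_entry(struct pt_regs *ctx) {
    u32 tid = bpf_get_current_pid_tgid();
    u64 ts = bpf_ktime_get_ns();
    start.update(&tid, &ts);
    return 0;
}

int trace_return(struct pt_regs *ctx) {
    u32 tid = bpf_get_current_pid_tgid();
    u64 *tsp = start.lookup(&tid);
    if (tsp) {
        dist.increment(bpf_log2l(bpf_ktime_get_ns() - *tsp));
        start.delete(&tid);
    }
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="vfs_read", fn_name="trace_entry")
b.attach_kretprobe(event="vfs_read", fn_name="trace_return")
print("Tracing vfs_read() latency for 10s...")
sleep(10)
b["dist"].print_log2_hist("ns")   # print the histogram collected in-kernel
```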
This is the first session deck from the OpenStack Korea community at the Community Open Camp with Microsoft, held on April 9, 2016. The second deck is available at: http://www.slideshare.net/YooEdward/why-openstack-is-operating-system-60685165