KNI (Kernel Network Interface) is one of the approaches used in DPDK to connect its user-space libraries with the Linux kernel network stack.
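To illustrate the idea, here is a minimal, hedged sketch of how an application might use DPDK's librte_kni to expose a kernel-visible interface and push received packets into the kernel stack. It assumes EAL is already initialized and that an mbuf pool exists; the interface name, burst size, and port id are arbitrary examples, not part of the original abstract.

```c
/* Minimal sketch (not a complete application): create a KNI interface with
 * DPDK's librte_kni so packets received in user space can be injected into
 * the kernel network stack. Error handling is abbreviated. */
#include <string.h>
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

#define PKT_BURST 32

static struct rte_kni *
create_kni(struct rte_mempool *mbuf_pool, uint16_t port_id)
{
    struct rte_kni_conf conf;

    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id); /* kernel netdev name */
    conf.group_id  = port_id;
    conf.mbuf_size = RTE_MBUF_DEFAULT_BUF_SIZE;

    rte_kni_init(1);                       /* one KNI context for this example */
    return rte_kni_alloc(mbuf_pool, &conf, NULL);
}

/* Copy a burst of packets from the DPDK port into the kernel stack. */
static void
forward_to_kernel(struct rte_kni *kni, uint16_t port_id)
{
    struct rte_mbuf *pkts[PKT_BURST];
    uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST);
    unsigned int sent = rte_kni_tx_burst(kni, pkts, nb_rx);

    while (sent < nb_rx)                   /* free anything the kernel side did not take */
        rte_pktmbuf_free(pkts[sent++]);

    rte_kni_handle_request(kni);           /* service ifup/ifdown etc. from the kernel */
}
```

The kernel side then appears as a regular netdev (vEth0 in this sketch) that standard tools such as ip or tcpdump can operate on.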
This document provides an agenda and overview for a hands-on lab on using DPDK in containers. It introduces Linux containers and explains how they use fewer system resources than VMs. It then discusses how containers still rely on the kernel network stack, which is not ideal for SDN/NFV use cases, and how DPDK can be used inside containers to address this. The hands-on lab section guides users through building DPDK and Open vSwitch, configuring them to work with containers, and running packet generation and forwarding with testpmd and pktgen Docker containers connected via Open vSwitch.
The arrival of flash storage introduced a radical change in the performance profiles of direct-attached devices. At the time, it was obvious that the Linux I/O stack needed to be redesigned to support devices capable of millions of IOPS with extremely low latency. In this talk we revisit the changes to the Linux block layer over the last decade or so that made it what it is today: a performant, scalable, robust and NUMA-aware subsystem. In addition, we cover the new NVMe over Fabrics support in Linux. Sagi Grimberg is Principal Architect and co-founder at LightBits Labs.
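The centerpiece of that redesign is the multiqueue block layer (blk-mq), where a driver registers per-CPU hardware queues instead of funneling everything through one lock-protected request queue. The following is only an illustrative sketch of that registration path; the in-kernel API has shifted across versions, and the queue depth, flags, and immediate completion are placeholder choices.

```c
/* Illustrative sketch only: how a block driver registers with the
 * multiqueue (blk-mq) layer. Names follow include/linux/blk-mq.h from the
 * 4.x/5.x era; exact signatures vary between kernel versions. */
#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/module.h>
#include <linux/numa.h>

static blk_status_t demo_queue_rq(struct blk_mq_hw_ctx *hctx,
                                  const struct blk_mq_queue_data *bd)
{
    struct request *rq = bd->rq;

    blk_mq_start_request(rq);
    /* ... a real driver would submit rq to hardware here ... */
    blk_mq_end_request(rq, BLK_STS_OK);     /* complete immediately for the demo */
    return BLK_STS_OK;
}

static const struct blk_mq_ops demo_mq_ops = {
    .queue_rq = demo_queue_rq,
};

static struct blk_mq_tag_set demo_tag_set;

static int __init demo_init(void)
{
    demo_tag_set.ops          = &demo_mq_ops;
    demo_tag_set.nr_hw_queues = num_online_cpus();  /* one submission queue per CPU */
    demo_tag_set.queue_depth  = 128;
    demo_tag_set.numa_node    = NUMA_NO_NODE;
    demo_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;

    return blk_mq_alloc_tag_set(&demo_tag_set);
}

static void __exit demo_exit(void)
{
    blk_mq_free_tag_set(&demo_tag_set);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

The per-CPU queue layout is what lets the block layer scale across NUMA nodes and map naturally onto the multiple hardware queues that NVMe (and NVMe over Fabrics) devices expose.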
A generic, step-by-step guide to debugging and troubleshooting performance and functional issues commonly found in packet processing with DPDK.
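One of the usual first steps in that kind of troubleshooting is reading the per-port counters to separate functional failures from drops. A small sketch, assuming EAL and the port are already initialized and using port 0 purely as an example:

```c
/* Sketch of a common first troubleshooting step: dump per-port DPDK
 * counters to see whether packets are being dropped and why. */
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void dump_port_drops(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0)
        return;

    printf("port %" PRIu16 ": rx=%" PRIu64 " tx=%" PRIu64 "\n",
           port_id, stats.ipackets, stats.opackets);

    /* imissed: packets the NIC dropped because RX queues were full,
     *          often a sign the polling cores cannot keep up.
     * rx_nombuf: RX failures because the mbuf pool ran empty. */
    printf("  imissed=%" PRIu64 " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           stats.imissed, stats.ierrors, stats.rx_nombuf);
}
```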
Seven years ago at LCA, Van Jacobson introduced the concept of net channels, but since then user-mode networking has not hit the mainstream. There are several user-mode networking environments: Intel DPDK, BSD netmap, and Solarflare OpenOnload. Each of these provides higher performance than standard Linux kernel networking, but each also creates new problems. This talk will explore the issues created by user-space networking, including performance, internal architecture, security, and licensing.
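The common thread among these frameworks is that the application polls the NIC directly from user space instead of taking interrupts and syscalls through the kernel. A minimal DPDK-flavoured sketch of that receive loop (port and queue ids are arbitrary, and the processing step is elided):

```c
/* Minimal sketch of the user-space poll-mode receive loop that frameworks
 * like DPDK rely on: the application busy-polls the hardware RX ring,
 * bypassing the kernel network stack. Assumes EAL and port 0 are set up. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void rx_loop(uint16_t port_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Poll the RX queue directly; no interrupts, no system calls. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        for (uint16_t i = 0; i < nb_rx; i++) {
            /* ... application-level packet processing would go here ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}
```

That busy-polling core is exactly where the trade-offs the talk covers come from: high throughput and low latency, at the cost of dedicated CPUs, a stack reimplemented outside the kernel, and new security and licensing questions.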
Video: https://www.facebook.com/atscaleevents/videos/1693888610884236/ . Talk by Brendan Gregg at Facebook's Performance @Scale: "Linux performance analysis has been the domain of ancient tools and metrics, but that's now changing in the Linux 4.x series. A new tracer is available in the mainline kernel, built from dynamic tracing (kprobes, uprobes) and enhanced BPF (Berkeley Packet Filter), aka, eBPF. It allows us to measure latency distributions for file system I/O and run queue latency, print details of storage device I/O and TCP retransmits, investigate blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. This talk will summarize this new technology and some long-standing issues that it can solve, and how we intend to use it at Netflix."
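As an illustration of the kprobes-plus-eBPF combination the talk describes, here is a hedged sketch of a tiny BPF program that fires on TCP retransmits. The choice of tcp_retransmit_skb as the hook and the libbpf-style build flow (clang -target bpf, loaded with libbpf or a similar loader) are assumptions beyond the abstract, not something the talk prescribes.

```c
/* Hedged sketch of a kprobe-based eBPF program: log a line to the kernel
 * trace pipe (/sys/kernel/debug/tracing/trace_pipe) every time the kernel
 * retransmits a TCP segment. Real tools use maps and perf/ring buffers
 * instead of bpf_printk, which is only convenient for experimenting. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/tcp_retransmit_skb")
int trace_tcp_retransmit(void *ctx)
{
    bpf_printk("tcp retransmit\n");
    return 0;
}
```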
The document discusses the Linux networking architecture and covers several key topics. It first describes the basic structure and layers of the Linux networking stack, including the network device interface, network-layer protocols such as IP, the transport layer, and sockets. It then discusses how network packets are managed in Linux through socket buffers and their associated functions. The document also provides an overview of the data link layer and protocols such as Ethernet and PPP, and how they are implemented in Linux.
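The socket-buffer handling it describes centres on the sk_buff structure, which travels with a packet through all of those layers. A short sketch of the classic allocation pattern, with the headroom size chosen arbitrarily for illustration:

```c
/* Sketch of the sk_buff allocation pattern used throughout the kernel
 * network stack: allocate a buffer, reserve headroom so lower layers can
 * prepend their headers, then append payload with skb_put. */
#include <linux/skbuff.h>
#include <linux/gfp.h>
#include <linux/string.h>

#define HDR_ROOM 128   /* headroom for Ethernet/IP/transport headers */

static struct sk_buff *build_demo_skb(const void *payload, unsigned int len)
{
    struct sk_buff *skb = alloc_skb(HDR_ROOM + len, GFP_KERNEL);

    if (!skb)
        return NULL;

    skb_reserve(skb, HDR_ROOM);               /* leave space for headers */
    memcpy(skb_put(skb, len), payload, len);  /* append the payload data */

    return skb;   /* the caller hands this to a protocol layer,
                   * or releases it with kfree_skb() */
}
```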