Tracing MariaDB server with bpftrace - MariaDB Server Fest 2021
Bpftrace is a relatively new eBPF-based open source tracer for modern Linux versions (kernels 5.x.y) that is useful for analyzing production performance problems and troubleshooting software. The talk presents basic usage of the tool, along with bpftrace one-liners and advanced scripts useful for MariaDB DBAs, and discusses the problems of dynamic tracing of MariaDB Server with bpftrace, some possible solutions, and alternative tracing tools.
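As a flavor of the one-liners the talk covers, a sketch like the following counts read(2) calls made by the server process (the process name mariadbd is an assumption; older builds run as mysqld):

```
bpftrace -e 'tracepoint:syscalls:sys_enter_read /comm == "mariadbd"/ { @reads = count(); }'
```

The map @reads is printed automatically when the one-liner is interrupted with Ctrl-C.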
This document discusses the Linux tracing tool systemtap. It provides an overview of systemtap and what it can be used for, including tracing system calls, kernel functions, and application functions. It also discusses how systemtap works, how it uses debugging symbols, and how RPMs handle separate debug information files. Several examples are given of using systemtap probes to trace requests for Nginx, cURL, Redis, MySQL, and TCP retransmissions. The document concludes by mentioning the use of DTrace beyond C, for software such as MySQL, Python, and Java.
Falco is an open source runtime security monitor for containers that detects anomalous activity using rules. It builds on Sysdig by instrumenting the kernel and collecting system calls and events. Falco rules define suspicious behaviors and integrate signals from the kernel, containers, and Kubernetes. Falco detects threats by matching patterns in real time and alerts on suspicious activity, helping operators enforce policies and spot abnormal behavior.
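To make the rule model concrete, a Falco rule is a small YAML entry pairing a condition over system-call events with an alert output; this sketch (the rule name and chosen fields are illustrative, not from the talk) flags a shell starting inside any container:

```yaml
- rule: Shell spawned in container
  desc: Detect an interactive shell started inside any container
  condition: container.id != host and proc.name in (bash, sh)
  output: "Shell in container (user=%user.name container=%container.id)"
  priority: WARNING
```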
Perf is a collection of Linux kernel tools for performance monitoring and profiling. It provides sampling and profiling of the system to analyze performance bottlenecks. Perf supports hardware events from the CPU performance counters, software events from the kernel, and tracepoint events from the kernel and loaded modules. It offers tools like perf record to sample events and store them, perf report to analyze stored samples, and perf trace to trace system events in real-time.
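The record/report/trace workflow mentioned above typically looks like this (the 99 Hz sampling rate and 10-second window are conventional choices, not requirements):

```
perf record -F 99 -a -g -- sleep 10   # sample all CPUs, with call graphs, for 10 seconds
perf report                           # analyze the stored samples (perf.data)
perf trace -p <PID>                   # trace system events of one process live
```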
This document provides an overview of basic RAC concepts, configuration, and management using Oracle Clusterware and srvctl commands. It describes how RAC instances share database files, redo logs, and undo segments stored in shared storage. It also explains how Clusterware manages resources and monitors cluster health using voting disks and the Oracle Cluster Registry (OCR).
This document discusses measuring and optimizing data processing performance on CPUs and GPUs. It provides examples using Cupy, NumPy, Apache Ignite, and ROCm to perform operations and measure throughput on different hardware. Specific examples include sorting random data on CPU and GPU to compare performance, running Apache Ignite benchmarks on different CPUs, and using ROCm and PyOpenCL to leverage an AMD GPU for general-purpose computing. The document emphasizes that tightly coupling fast memory and computation is important for workloads with large data processing requirements.
Kernel Recipes 2017: Performance Analysis with BPF
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
This document discusses Linux tracing tools and the evolution from DTrace on BSD to eBPF on Linux. It begins with an overview of DTrace and its capabilities on BSD, then discusses the limitations of early Linux tracing tools. It introduces eBPF and the BCC compiler collection, which make it easier to write and use eBPF programs. Examples are given showing how BCC can be used to trace system calls, file opens, and command executions. The document argues that BCC and eBPF help address the problems of early Linux tracing by making the tools more approachable and powerful for production use.
True stories on the analysis of network activity using Python
The document discusses network packet analysis using Python. It provides an overview of network analysis tools like Wireshark and tcpdump, and how to use them to analyze network traffic captured in a pcap file. It also discusses how to create and send network packets using Scapy for tasks like port scanning, and how to filter network traffic using IPv4/IPv6 packet filters like iptables. The document provides examples of summarizing pcap data and crafting network packets for various protocols.
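As a minimal sketch of the kind of packet work described, this decodes the fixed 20-byte IPv4 header using only the standard library (no Scapy; the sample header is hand-built for the demo):

```python
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header (RFC 791) from raw bytes."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "ihl": version_ihl & 0x0F,           # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                   # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: IPv4, IHL 5, TTL 64, TCP, 10.0.0.1 -> 10.0.0.2
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_ipv4_header(sample))
```

The same `struct.unpack` pattern extends to TCP/UDP headers, which is essentially what pcap summarizers do under the hood.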
University of Virginia
cs4414: Operating Systems
http://rust-class.org
What happened with Apple's SSL implementation
How to make sure this doesn't happen to you!
Sharing data
ARCs in Rust
Scheduling
For embedded notes, see:
Kernel Recipes 2019 - Hunting and fixing bugs all over the Linux kernel
At a rate of almost 9 changes per hour (24/7), the Linux kernel is definitely a scary beast. Bugs are introduced on a daily basis and, through the use of multiple code analyzers, *some* of them are detected and fixed before they hit mainline. Over the course of the last few years, Gustavo has been fixing such bugs and many different issues in every corner of the Linux kernel. Recently, he was in charge of leading the efforts to globally enable -Wimplicit-fallthrough, which is on by default as of Linux v5.3. This presentation is a report on all the stuff Gustavo has found and fixed in the kernel with the support of the Core Infrastructure Initiative.
Gustavo A.R. Silva
BPF (Berkeley Packet Filter) allows for safe dynamic program injection into the Linux kernel. It provides an in-kernel virtual machine and instruction set for running custom programs. The BPF infrastructure includes a verifier that checks programs for safety, helper functions to access kernel APIs, and maps for inter-process communication. BPF has become a core kernel subsystem and is used for applications like XDP, tracing, networking, and more.
Practical Experience of Profiling and Optimizing Ruby Application Performance
Alexey Tulya, Senior Software Developer at Sam Solutions
"Practical Experience of Profiling and Optimizing Ruby Application Performance"
In his talk, Alexey gives a brief overview of the various Ruby implementations and tries to pin down the reasons why Ruby is slow. He examines garbage collection and method dispatch in Ruby and why both are expensive there, shows what can be done to raise performance, surveys utilities for locating problem spots as well as profilers, and explains how to interpret their results.
The talk is mainly aimed at a practical approach to finding problems. The material is intended for Linux users, so all practical advice will be for Linux.
1. The document discusses device drivers in Linux, including file types, driver registration, hotplug, MMC size and partitions, and request queues and elevators.
2. It explains the process of an application opening a device file, how drivers are registered with the kernel, and how devices are handled when hotplugged.
3. Details are provided on how MMC sizes and partitions are represented, and how request queues and elevators are used to process I/O requests to block devices in an efficient manner.
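The elevator idea in point 3 can be sketched as keeping pending requests sorted by sector and servicing them in one ascending sweep from the current head position (a toy model for illustration, not the kernel's actual I/O schedulers):

```python
import bisect

class Elevator:
    """Toy one-way (SCAN-style) elevator over pending block requests.

    Requests are kept sorted by sector; dispatch services them in ascending
    order from the current head position and then wraps around, avoiding the
    head zig-zagging across the disk as plain FIFO servicing would.
    """
    def __init__(self):
        self.pending = []                  # sorted list of sector numbers

    def add_request(self, sector: int) -> None:
        bisect.insort(self.pending, sector)

    def dispatch(self, head: int) -> list:
        i = bisect.bisect_left(self.pending, head)
        order = self.pending[i:] + self.pending[:i]   # sweep up, then wrap
        self.pending = []
        return order

e = Elevator()
for s in [90, 10, 50, 70, 30]:
    e.add_request(s)
print(e.dispatch(head=40))   # [50, 70, 90, 10, 30]
```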
This document discusses advanced Linux firewall configuration using Netfilter and Iptables. It begins with an introduction of the speaker and an overview of the topics to be covered, including packet processing, connection tracking, iptables rules and tables, iptables modules, and managing firewall rules for cloud environments. The document then delves into technical details like the sk_buff packet representation in Linux, the Netfilter packet flow, basic iptables usage, and differences between stateful and stateless firewalls.
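The stateful/stateless contrast can be made concrete with the conntrack match: a stateful ruleset only has to admit NEW connections on chosen ports, letting connection tracking wave through the rest (a sketch; the DROP policy and port 22 are illustrative choices):

```
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
```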
1. The document describes various Moshell commands used for managing RBS nodes.
2. The acc 0 manualrestart command is used to restart the RBS node, while the pol 5 5 command polls the node every 5 seconds to check when the MO service is ready after restart.
3. Other commands described are for checking CV configuration (cvcu, cvls), managing CVs (cvset, cvmk, cvrm), and accessing measurement data (st mme, ue print).
This document provides information on various debugging and profiling tools that can be used for Ruby including:
- lsof to list open files for a process
- strace to trace system calls and signals
- tcpdump to dump network traffic
- google perftools profiler for CPU profiling
- pprof to analyze profiling data
It also discusses how some of these tools have helped identify specific performance issues with Ruby like excessive calls to sigprocmask and memcpy calls slowing down EventMachine with threads.
This document provides an overview of Rubinius, an implementation of the Ruby programming language that uses Just-In-Time (JIT) compilation. It discusses key aspects of Rubinius including its virtual machine, garbage collector, bytecode compiler, core library, primitives system, and JIT compiler, and its use of RubySpec for testing. It also briefly describes the compacting generational garbage collector, inline caching, call counting, and debugging and profiling capabilities. The document encourages contributing to Rubinius and shares an example commit adding POSIX safety checks to the Process module.
The document discusses building a Ruby debugger by collecting data on Ruby objects in memory and analyzing that data. It describes two versions of the debugger: Version 1 collects basic data but requires patching Ruby and has limited analysis, while Version 2 called Memprof collects more detailed data in JSON format without patching Ruby and allows deeper analysis using MongoDB. The second version provides a way to visualize and analyze Ruby memory usage and detect potential memory leaks.
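The kind of analysis the second version enables can be sketched in a few lines: group dumped object records by allocation site and rank the counts, since a site that keeps accumulating live objects is a leak suspect (the JSON field names here are hypothetical, not Memprof's exact schema):

```python
import json
from collections import Counter

# Hypothetical Memprof-style dump: one JSON object per live Ruby object.
dump_lines = [
    '{"type": "string", "file": "app.rb",   "line": 12}',
    '{"type": "string", "file": "app.rb",   "line": 12}',
    '{"type": "hash",   "file": "cache.rb", "line": 7}',
    '{"type": "string", "file": "app.rb",   "line": 30}',
]

def leak_suspects(lines):
    """Count live objects per allocation site; large counts hint at leaks."""
    sites = Counter()
    for line in lines:
        rec = json.loads(line)
        sites[(rec["file"], rec["line"], rec["type"])] += 1
    return sites.most_common()

for site, n in leak_suspects(dump_lines):
    print(n, site)
```

Loading the same records into MongoDB, as the talk describes, allows the identical group-and-count query to run over millions of objects.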
This document discusses open source hardware and provides information on file formats and documentation needed for open hardware projects, including mechanical diagrams, schematics, component lists, layout diagrams, firmware, and how to build a Debian root filesystem for embedded systems. It also summarizes some open source operating systems optimized for specific open hardware platforms like the Raspberry Pi and Cubieboard.
OpenStack is a cloud operating system that controls large pools of computing, storage, and networking resources throughout a datacenter. It is made up of several components like Horizon (dashboard), Keystone (identity), Nova (compute), Neutron (networking), Swift (object storage), and Cinder (block storage) that are all managed through a central dashboard. DevStack is a shell script used to deploy a complete OpenStack development environment locally and is useful for OpenStack development by beginners. The presentation demonstrated installing and using DevStack to deploy an OpenStack development environment.
This document outlines the steps for a first contribution to the OpenStack project, including getting necessary accounts, setting up a development environment, selecting a bug to work on, fixing the bug, committing changes, and submitting the patch for review. Key steps are registering Launchpad and OpenStack Foundation accounts, cloning the devstack repository to set up a development environment, finding a suitable bug to work on, addressing it by creating a topic branch and testing changes, committing the fix, and submitting it for upstream review. Contact information is provided for questions.
The document discusses CoprHD, an open source software-defined storage controller that automates storage provisioning across heterogeneous storage infrastructure. It summarizes CoprHD's key capabilities in automating storage lifecycle management and integrating with cloud stacks like OpenStack. The document also provides an overview of CoprHD architecture and describes how CoprHD can operate as a Cinder driver within OpenStack. It outlines CoprHD's interoperability with OpenStack through different integration methods and concludes with information on the CoprHD community.
Openstack-Ansible is a Rackspace initiative that provides an automated way to deploy OpenStack using Ansible playbooks and roles. It pulls services from Git repositories and uses LXC containers and Ansible to deploy OpenStack on single to thousands of nodes in a scalable way. The document discusses why OpenStack deployment is difficult, outlines the OSAD architecture, configuration, and usage, and how OpenStack services are deployed and scaled out to additional compute nodes using Openstack-Ansible.
The document provides an overview of the OpenStack contribution workflow, including setting up accounts, contributing code through Gerrit code reviews, tracking bugs on Launchpad, and submitting code changes through the Git review process. Key steps are registering an IRC and Launchpad account, signing the CLA, adding SSH keys, reviewing code on Gerrit, tracking bugs on Launchpad, submitting code changes through Git, and addressing feedback in the review process. Project mailing lists, IRC channels, and the check queue status site are also referenced as resources for contributors.
Tempest is an OpenStack test suite that runs against all the OpenStack service endpoints. It makes sure that all the OpenStack components work together properly and that no APIs are changed. Tempest is a "gate" for all commits to OpenStack repositories and will prevent merges if tests fail.
GUTS is a workload migration engine that automatically migrates existing workloads and virtual machines from previous generation virtualization platforms to OpenStack. It supports migrating VMs, volumes, networks, users, and other resources between OpenStack environments or from platforms like VMware to OpenStack. GUTS has API, scheduler, and migration services to orchestrate the migrations. It can convert disk formats and manage hypervisor-specific tools during the migration process. Future plans include supporting more hypervisors and resource types.
The document discusses Ceph, an open-source software-defined storage platform commonly used with OpenStack. It provides an overview of Ceph attributes, architecture, components like monitors, OSDs and placement groups, and how it can provide unified storage. New features in the recent Ceph Jewel release are also covered, such as RBD mirroring and RADOS gateway improvements. The presentation aims to establish Ceph as the preferred storage solution ("buddy") for OpenStack deployments.
Tempest is the OpenStack integration test suite. It uses unittest and nosetest frameworks to run API calls against OpenStack services like Nova, Glance, Keystone, etc. and validate the responses. Tempest tests include smoke, positive, negative, stress and white box tests. It has a modular structure with common, services, and tests directories. Tempest plays an important role in OpenStack continuous integration by running on proposed code changes to check for regressions.
This document summarizes two OpenStack container projects - Magnum and Zun. Magnum provides an API to manage container infrastructure by leveraging Heat, Nova, and Neutron to provision container orchestration engines like Kubernetes and Docker Swarm. Zun provides a container service with APIs for launching and managing containers across different technologies in an integrated manner with OpenStack services like Keystone, Nova, Neutron, Glance, and Cinder. The document compares the two projects and suggests using Magnum when wanting OpenStack to provide infrastructure for self-managed containers, and using Zun when wanting OpenStack to provision and manage containers directly.
This document summarizes a presentation about Open Platform for Network Functions Virtualization (OPNFV). It discusses NFV challenges for telecom operators and introduces OPNFV as an open source platform that aims to develop and test an integrated virtual network functions infrastructure. Key aspects of OPNFV covered include its reference architecture, goals of contributing to relevant open source projects and establishing an NFV ecosystem, and examples of feature development and community labs/testing activities.
Reverse engineering Swisscom's Centro Grande Modem
The document discusses reverse engineering the firmware of Swisscom's Centro Grande modems. It identifies several vulnerabilities found, including a command overflow issue that allows complete control of the device by exceeding the input buffer, and multiple buffer overflow issues that can be exploited to execute code remotely by crafting specially formatted XML files. Details are provided on the exploitation techniques and timeline of coordination with Swisscom to address the vulnerabilities.
- The document discusses various Linux system log files such as /var/log/messages, /var/log/secure, and /var/log/cron and provides examples of log entries.
- It also covers log rotation tools like logrotate and logwatch that are used to manage log files.
- Networking topics like IP addressing, subnet masking, routing, ARP, and tcpdump for packet sniffing are explained along with examples.
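The core of what logrotate does for files like /var/log/messages can be sketched in a few lines (a toy model; real logrotate also handles compression, signaling daemons, and per-file configuration):

```python
import os
import tempfile

def rotate(path: str, keep: int = 3) -> None:
    """Minimal logrotate-style rotation: log -> log.1 -> log.2 -> ..."""
    for i in range(keep - 1, 0, -1):       # shift older generations up
        older = f"{path}.{i}"
        if os.path.exists(older):
            os.replace(older, f"{path}.{i + 1}")
    if os.path.exists(path):
        os.replace(path, f"{path}.1")
    open(path, "w").close()                # the writer gets a fresh, empty log

# Demo in a temporary directory
with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "messages")
    for entry in ["first day", "second day"]:
        with open(log, "a") as f:
            f.write(entry + "\n")
        rotate(log)
    print(sorted(os.listdir(d)))   # ['messages', 'messages.1', 'messages.2']
```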
The document summarizes Maycon Vitali's presentation on hacking embedded devices. It includes an agenda covering extracting firmware from devices using tools like BusPirate and flashrom, decompressing firmware to view file systems and binaries, emulating binaries using QEMU, reverse engineering code to find vulnerabilities, and details four vulnerabilities discovered in Ubiquiti networking devices designated as CVEs. The presentation aims to demonstrate common weaknesses in embedded device security and how tools can be used to analyze and hack these ubiquitous connected systems.
Using Libtracecmd to Analyze Your Latency and Performance Troubles
Trying to figure out why your application is responding late can be difficult, especially if it is because of interference from the operating system. This talk will briefly go over how to write a C program that can analyze what in the Linux system is interfering with your application. It will use trace-cmd to enable kernel trace events as well as tracing lock functions, and it will then go over a quick tutorial on how to use libtracecmd to read the created trace.dat file to uncover the cause of interference to your application.
Velocity 2017 Performance analysis superpowers with Linux eBPF
Talk for Velocity 2017 by Brendan Gregg: Performance analysis superpowers with Linux eBPF.
"Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will investigate this new technology, which sooner or later will be available to everyone who uses Linux. The talk will dive deep on these new tracing, observability, and debugging capabilities. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
This document discusses the evolution of systems performance analysis tools from closed source to open source environments.
In the early 2000s with Solaris 9, performance analysis was limited due to closed source tools that provided only high-level metrics. Opening the Solaris kernel code with OpenSolaris in 2005 allowed deeper insight through understanding undocumented metrics and dynamic tracing tools like DTrace. This filled observability gaps across the entire software stack.
Modern performance analysis leverages both traditional Unix tools and new dynamic tracing tools. With many high-resolution metrics available, the focus is on visualization and collecting metrics across cloud environments. Overall open source improved systems analysis by providing full source code access.
USENIX ATC 2017 Performance Superpowers with Enhanced BPF
Talk for USENIX ATC 2017 by Brendan Gregg
"The Berkeley Packet Filter (BPF) in Linux has been enhanced in very recent versions to do much more than just filter packets, and has become a hot area of operating systems innovation, with much more yet to be discovered. BPF is a sandboxed virtual machine that runs user-level defined programs in kernel context, and is part of many kernels. The Linux enhancements allow it to run custom programs on other events, including kernel- and user-level dynamic tracing (kprobes and uprobes), static tracing (tracepoints), and hardware events. This is finding uses for the generation of new performance analysis tools, network acceleration technologies, and security intrusion detection systems.
This talk will explain the BPF enhancements, then discuss the new performance observability tools that are in use and being created, especially from the BPF compiler collection (bcc) open source project. These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and much more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations.
Because these BPF enhancements are only in very recent Linux (such as Linux 4.9), most companies are not yet running new enough kernels to be exploring BPF yet. This will change in the next year or two, as companies including Netflix upgrade their kernels. This talk will give you a head start on this growing technology, and also discuss areas of future work and unsolved problems."
pstack, truss, etc. to understand deeper issues in the Oracle database
The document discusses various process monitoring and debugging tools for Oracle databases like truss, pstack and pfiles. It provides examples of using truss to trace system calls of processes like PMON and DBWR. It demonstrates how truss can be used to see shared memory segment creation during database startup and process attachment. It also summarizes the process creation steps seen during connection creation in Oracle.
Talk for AWS re:Invent 2014. Video: https://www.youtube.com/watch?v=7Cyd22kOqWc . Netflix tunes Amazon EC2 instances for maximum performance. In this session, you learn how Netflix configures the fastest possible EC2 instances, while reducing latency outliers. This session explores the various Xen modes (e.g., HVM, PV, etc.) and how they are optimized for different workloads. Hear how Netflix chooses Linux kernel versions based on desired performance characteristics and receive a firsthand look at how they set kernel tunables, including hugepages. You also hear about Netflix’s use of SR-IOV to enable enhanced networking and their approach to observability, which can exonerate EC2 issues and direct attention back to application performance.
This document discusses the crash reporting mechanism in Tizen. It describes the crash client, which handles crash signals and generates crash reports. It covers Samsung's crash-work-sdk and Intel's corewatcher crash clients. It also discusses the crash server that receives reports and the CrashDB web interface. Finally, it mentions crash reason location algorithms.
When your whole system is unresponsive, how do you investigate the failure?
We'll see how to get a memory dump for offline analysis with the kdump facility,
then how to analyze it with the crash utility,
and finally how to use crash on a running system to modify kernel memory (at your own risk!).
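Once kdump has written a vmcore, the session might look like this (paths are illustrative; the debug vmlinux must match the crashed kernel exactly):

```
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/<timestamp>/vmcore
crash> bt      # backtrace of the panicking task
crash> log     # kernel ring buffer at the time of the crash
crash> ps      # processes present in the dump
```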
Oracle Architecture document discusses:
1. The cost of an Oracle Enterprise Edition license is $47,500 per processor.
2. It provides an overview of key Oracle components like the instance, database, listener and cost based optimizer.
3. It demonstrates how to start an Oracle instance, check active processes, mount and open a database, and query it locally and remotely after starting the listener.
CloudForecast is a system monitoring and visualization tool that uses Perl and RRDTool to collect data from servers and generate graphs. It collects metrics like CPU usage, network traffic, and Gearman worker status. Data is stored in RRD files and a SQLite database. A radar component collects data and a web interface is used to view graphs generated from the collected data.
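The RRD side of this can be sketched as consolidation: raw samples are averaged down into fixed-size archive points, as an RRA with CF=AVERAGE does (a simplification of RRDTool's actual time-normalized algorithm):

```python
def consolidate(samples, step):
    """Average each window of `step` raw samples into one archived point,
    the way an RRDTool round-robin archive with CF=AVERAGE consolidates."""
    return [sum(samples[i:i + step]) / step
            for i in range(0, len(samples) - step + 1, step)]

# Six raw CPU-usage samples archived at a 3-sample resolution
cpu = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
print(consolidate(cpu, 3))   # [20.0, 50.0]
```

Because the archive is round-robin, old consolidated points are overwritten in place, which keeps the RRD files at a fixed size no matter how long collection runs.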
Building an Automated Behavioral Malware Analysis Environment using Free and ...
The document describes building an automated malware behavioral analysis environment using free and open-source tools. It details setting up analysis machines running Debian and installing analysis tools including Volatility, RegRipper, and AIDE. Samples are submitted to the machines via SSH and analyzed for network traffic using tools like tcpdump, for DNS queries with fauxDNS, and for open ports and connections. The results, including OS identification, registry changes, and network indicators, are summarized for analysts.
This document provides an introduction to DTrace and discusses its key features and capabilities. It covers:
1. What DTrace is and how it can be used to trace operating systems and programs with very low overhead.
2. The different ways DTrace can be used, including tracing system calls, kernel functions, user processes, and custom probes added to programs.
3. How DTrace scripts are structured using probes, filters, and actions. Variables that can be used like timestamps.
4. Examples of using DTrace to trace network activity by probe name, argument definitions, and creating DTrace programs.
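The probe/filter/action structure from point 3 looks like this in D (the probe and process name are illustrative): the probe names the event, the slashed predicate filters it, and the action body runs when both match.

```
syscall::read:entry
/execname == "sshd"/
{
        @reads[pid] = count();
}
```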
This slide deck shows how to use SOFA to do performance analysis of CPU/GPU cooperative programs, especially programs running with deep software stacks like TensorFlow, PyTorch, etc.
source code at:
https://github.com/cyliustack/sofa
OSSNA 2017 Performance Analysis Superpowers with Linux BPF
Talk by Brendan Gregg for OSSNA 2017. "Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will be a dive deep on these new tracing, observability, and debugging capabilities, which sooner or later will be available to everyone who uses Linux. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
The document discusses hacking the Swisscom modem by exploiting default credentials to gain access. Upon login, the author runs commands to investigate the system such as viewing configuration files and mapping the internal network. Various system details are discovered including the Linux kernel version and software components.
The document summarizes Engine Yard's Partner Junction Program for Q1 2014. It outlines the program's mission to leverage Engine Yard's market leadership through strategic partnerships. It details benefits for partners at different tiers, including access to client usage reports, performance monitoring, project leads, and financial incentives. Joint marketing activities are also described, such as web presence, blog posts, case studies, and co-hosted events. The goal is for partners to engage, execute, and excel with Engine Yard.
Topics Covered:
• How to deploy a PHP application to Engine Yard
• How to use Composer to automate dependency management
• The key differences between Orchestra and Engine Yard Cloud
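For the Composer bullet above, dependencies are declared in a composer.json at the project root (the package name and version constraint are illustrative):

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

Running `composer install` then fetches the packages into vendor/ and generates the autoloader.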
We’re excited to announce that we are evolving our cloud application architecture to be more flexible and modular, giving you greater control of your environment and more choices for components, deployment options and infrastructure.
During this webcast we'll provide more information on Engine Yard Cloud's new cluster model, infrastructure abstraction layer and monitoring and alerting agent, share what's coming and have an open Q&A to answer your questions.
This presentation was prepared for a Webcast where John Yerhot, Engine Yard US Support Lead, and Chris Kelly, Technical Evangelist at New Relic discussed how you can scale and improve the performance of your Ruby web apps. They shared detailed guidance on issues like:
Caching strategies
Slow database queries
Background processing
Profiling Ruby applications
Picking the right Ruby web server
Sharding data
Attendees will learn how to:
Gain visibility on site performance
Improve scalability and uptime
Find and fix key bottlenecks
See the on-demand replay:
http://pages.engineyard.com/6TipsforImprovingRubyApplicationPerformance.html
Achieving PCI compliance can be a complex, time-consuming, and expensive undertaking. However, with the right approach it can be substantially less burdensome. In this webcast, we will provide background and recommendations to help you make the best possible decisions regarding PCI for your PaaS-based application. If you currently accept, or are contemplating accepting a payment card on your web application, this webcast is for you.
In this presentation you will learn about:
-An overview of PCI
-How to scope your environment for PCI compliance
-Ways to make compliance more manageable, and
-Things to consider when approaching PCI compliance on a PaaS provider.
To view the full webcast on-demand: http://pages.engineyard.com/an-introduction-to-pci-compliance-on-a-paas.html
Presenter: Danish Khan
Presentation from: RubyConf Uruguay
Date: November 12, 2011
Description:
Most developers hate having to write documentation, yet complain about how tools and libraries we use lack documentation. How do you get developers to write good documentation without feeling like they're wasting their time? There are plenty of good documentation tools out there such as TomDoc, YarDoc, and RDoc. These tools are useful for creating documentation for tools, gems, and various open source projects, and each one has its unique way of making documentation easier for developers. However, how do you manage documentation for a product? At Engine Yard we have our Engine Yard Cloud platform. Good external documentation for our customers is very important to us. We want to make sure they can easily understand how to use our platform and be able to accomplish what they need. However, it has been difficult to get good documentation out quickly.
Check out the audio from Danish's talk here:
http://www.eventials.com/rubyconfuy/recorded/M2UzZTJkMzY2MzdiNTg2NTUxNWM1MzI3NWY1YjRhMzYjIzQ1Ng_3D_3D
Innovate Faster in the Cloud with a Platform as a Service
Presentation: "Innovate Faster in the Cloud with a PaaS" webinar
Presenter: Jacob Lehrbaum
Date: November 18, 2011
Recorded presentation:
http://pages.engineyard.com/InnovateFasterwithPaaS.html
If you are building a new application today you are likely considering a move to the cloud. If so, you should take a careful look at Platform as a Service (PaaS). Using a PaaS makes it fast and easy to deploy and run high-impact applications by relieving the developer from having to integrate, configure, test, and maintain the platform-level software necessary to run applications. It will also improve your uptime, help you scale with your business and can even save you money.
The document is about an introductory lesson on Ruby programming. It discusses Ruby's history and creator Yukihiro Matsumoto. It then covers basic Ruby concepts like variables, methods, and classes through examples. It also provides instructions on installing Ruby on Mac and Windows systems. The overall message is that learning Ruby can be productive and enjoyable.
Hiro Asari's Devoxx 2011 presentation
Presentation description:
Java developers wear many hats: they manage builds, develop applications, write command-line scripts, and must master all tiers. If only there were a way to make these tasks simple and fun.
Enter JRuby.
Build engineers can write or enhance builds with Ruby, never losing a thing they depend on from Ant or Maven. Ruby offers several elegant testing options that work great with JRuby. Web developers can create Rails applications in minutes, effortlessly incorporating the latest Web technologies while taking advantage of the existing Java libraries. JRuby supports binding native libraries with FFI (foreign function interface). Command-line scripts? They're easy with JRuby's system-level features.
Come to this session to learn how JRuby makes you a happy developer.
The document discusses the differences between evented and threaded concurrency models for Ruby applications. It explains that evented concurrency handles I/O events asynchronously while threaded concurrency uses threads to perform actual work. The document recommends using an evented model with libraries like Nginx and Trinidad to serve web applications, allowing code to be written as if it were threaded for simplicity.
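The evented model described above can be sketched with Python's asyncio (a toy illustration of the concept, not the Ruby/Nginx/Trinidad stack the document covers): two simulated I/O waits run on one event loop, so the total wall time is roughly one wait, not two.

```python
import asyncio
import time

async def fetch(name, delay):
    # Simulated I/O wait; while this task sleeps, the event loop
    # is free to run other tasks.
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # Both "requests" are in flight concurrently on a single thread.
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
```

A threaded server gets the same concurrency by dedicating a thread per wait; the evented version multiplexes the waits on one thread, which is what makes it attractive for I/O-bound workloads.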
Release Early & Release Often: Reducing Deployment Friction
Andy Delcambre's RubyConf 2011 presentation
Presentation Description:
At Engine Yard, we release the main Engine Yard Cloud code base at least once a day, many times more often than that. Yet we still have a fairly rigorous testing and release process. We have simply automated and connected as much of the process as possible. This talk covers how we handle deployments, how it ties in with our continuous integration service, and how we automate and tie it all together.
Recorded presentation:
http://confreaks.net/videos/667-rubyconf2011-release-early-and-release-often-reducing-deployment-friction
This document provides an overview of JRuby, highlighting both advantages and disadvantages compared to Ruby implementations. Key points include:
- JRuby runs Ruby code on the Java Virtual Machine (JVM), allowing access to Java libraries and tools while retaining Ruby syntax and semantics.
- The memory footprint of JRuby applications is initially larger than CRuby's due to object sizes, but memory usage over time can be smaller thanks to JRuby's garbage collection.
- Features like fork, continuations, and some extensions may be missing or disabled in JRuby.
- JRuby provides multiple Ruby versions and allows running multiple Ruby applications in a single JVM process.
- Performance benchmarks show JRuby can be competitive with CRuby.
This document summarizes how to deploy a full Prometheus monitoring stack using Juju charms. It first introduces Juju and how it can orchestrate services. It then describes how Prometheus and Telegraf are deployed as charms and related to collect metrics from other services like HAProxy, MediaWiki, MariaDB and Memcached. The Prometheus and Telegraf charms, along with other supporting charms, are open source and available for deploying a complete monitoring stack.
This document provides an overview of kernel debugging on Solaris systems using the modular debugger Mdb and dynamic tracing framework DTrace. It discusses debugging live kernels with Mdb, analyzing system crash dumps with Mdb, and using DTrace to monitor the kernel at runtime by enabling probes published by different providers. The document outlines the key tools, techniques, and challenges involved in kernel debugging and crash analysis on Solaris.
This document discusses PostgreSQL and Solaris as a low-cost platform for medium to large scale critical scenarios. It provides an overview of PostgreSQL, highlighting features like MVCC, PITR, and ACID compliance. It describes how Solaris and PostgreSQL integrate well, with benefits like DTrace support, scalability on multicore/multiprocessor systems, and Solaris Cluster support. Examples are given for installing PostgreSQL on Solaris using different methods, configuring zones for isolation, using ZFS for storage, and monitoring performance with DTrace scripts.
Tracing MariaDB server with bpftrace - MariaDB Server Fest 2021 - Valeriy Kravchuk
Performance Analysis Tools for Linux Kernel - lcplcp1
This document provides an overview of basic RAC concepts, configuration, and management using Oracle Clusterware and srvctl commands. It describes how RAC instances share database files, redo logs, and undo segments stored in shared storage. It also explains how Clusterware manages resources and monitors cluster health using voting disks and the Oracle Cluster Registry (OCR).
This document discusses measuring and optimizing data processing performance on CPUs and GPUs. It provides examples using Cupy, NumPy, Apache Ignite, and ROCm to perform operations and measure throughput on different hardware. Specific examples include sorting random data on CPU and GPU to compare performance, running Apache Ignite benchmarks on different CPUs, and using ROCm and PyOpenCL to leverage an AMD GPU for general-purpose computing. The document emphasizes that tightly coupling fast memory and computation is important for workloads with large data processing requirements.
Kernel Recipes 2017: Performance Analysis with BPF - Brendan Gregg
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
Linux Tracing Superpowers by Eugene Pirogov - Pivorak MeetUp
This document discusses Linux tracing tools and the evolution from DTrace on BSD to eBPF on Linux. It begins with an overview of DTrace and its capabilities on BSD, then discusses the limitations of early Linux tracing tools. It introduces eBPF and the BCC compiler collection, which make it easier to write and use eBPF programs. Examples are given showing how BCC can be used to trace system calls, file opens, and command executions. The document argues that BCC and eBPF help address the problems of early Linux tracing by making the tools more approachable and powerful for production use.
True stories on the analysis of network activity using Python - delimitry
The document discusses network packet analysis using Python. It provides an overview of network analysis tools like Wireshark and tcpdump, and how to use them to analyze network traffic captured in a pcap file. It also discusses how to create and send network packets using Scapy for tasks like port scanning, and how to filter network traffic using IPv4/IPv6 packet filters like iptables. The document provides examples of summarizing pcap data and crafting network packets for various protocols.
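As a small taste of pcap-file analysis without external tools, here is a sketch that parses the pcap global header using only Python's stdlib struct module (the header bytes below are synthetic; real captures come from tcpdump or Wireshark):

```python
import struct

def parse_pcap_global_header(data):
    # The 24-byte pcap global header: magic number, major/minor version,
    # timezone offset, timestamp accuracy, snapshot length, link type.
    magic, vmaj, vmin, thiszone, sigfigs, snaplen, network = struct.unpack(
        "<IHHiIII", data[:24]
    )
    if magic != 0xA1B2C3D4:
        raise ValueError("not a little-endian pcap file")
    return {"version": (vmaj, vmin), "snaplen": snaplen, "linktype": network}

# A synthetic header: little-endian, v2.4, snaplen 65535, Ethernet (linktype 1).
hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
info = parse_pcap_global_header(hdr)
```

Libraries like Scapy and dpkt do this parsing (and per-packet decoding) for you, but seeing the raw layout clarifies what those tools are reading.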
University of Virginia
cs4414: Operating Systems
http://rust-class.org
What happened with Apple's SSL implementation
How to make sure this doesn't happen to you!
Sharing data
ARCs in Rust
Scheduling
For embedded notes, see:
Kernel Recipes 2019 - Hunting and fixing bugs all over the Linux kernel - Anne Nicolas
At a rate of almost 9 changes per hour (24/7), the Linux kernel is definitely a scary beast. Bugs are introduced on a daily basis and, through the use of multiple code analyzers, *some* of them are detected and fixed before they hit mainline. Over the course of the last few years, Gustavo has been fixing such bugs and many different issues in every corner of the Linux kernel. Recently, he was in charge of leading the efforts to globally enable -Wimplicit-fallthrough; which appears by default in Linux v5.3. This presentation is a report on all the stuff Gustavo has found and fixed in the kernel with the support of the Core Infrastructure Initiative.
Gustavo A.R. Silva
BPF (Berkeley Packet Filter) allows for safe dynamic program injection into the Linux kernel. It provides an in-kernel virtual machine and instruction set for running custom programs. The BPF infrastructure includes a verifier that checks programs for safety, helper functions to access kernel APIs, and maps for inter-process communication. BPF has become a core kernel subsystem and is used for applications like XDP, tracing, networking, and more.
Practical experience of profiling and optimizing the performance of Ruby applications - Olga Lavrentieva
Alexey Tulya, Senior Software Developer at Sam Solutions
"Practical experience of profiling and optimizing the performance of Ruby applications"
In his talk, Alexey will give a brief overview of the various Ruby implementations and try to pinpoint the reasons why Ruby is slow. He will look at garbage collection and method calls in Ruby, and why these are expensive. He will explain and demonstrate what to do to raise performance, survey utilities for locating problem spots, review profilers, and explain how to interpret their results.
The talk is mainly focused on a practical approach to finding problems. The material is intended for Linux users, so all practical advice is for Linux.
1. The document discusses device drivers in Linux, including file types, driver registration, hotplug, MMC size and partitions, and request queues and elevators.
2. It explains the process of an application opening a device file, how drivers are registered with the kernel, and how devices are handled when hotplugged.
3. Details are provided on how MMC sizes and partitions are represented, and how request queues and elevators are used to process I/O requests to block devices in an efficient manner.
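The elevator idea in point 3 can be sketched as a simple one-directional SCAN ordering (a toy model with made-up sector numbers, not the actual Linux elevator code): serve requests at or beyond the current head position in ascending order, then sweep back for the rest, minimizing seek reversals.

```python
def elevator_order(requests, head):
    # SCAN sketch: ascending pass from the head position,
    # then a descending pass over the sectors behind it.
    up = sorted(s for s in requests if s >= head)
    down = sorted((s for s in requests if s < head), reverse=True)
    return up + down

order = elevator_order([95, 180, 34, 119, 11, 123, 62, 64], head=50)
```

Real I/O schedulers add merging of adjacent requests, deadlines, and fairness between processes on top of this basic ordering.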
This document discusses advanced Linux firewall configuration using Netfilter and Iptables. It begins with an introduction of the speaker and an overview of the topics to be covered, including packet processing, connection tracking, iptables rules and tables, iptables modules, and managing firewall rules for cloud environments. The document then delves into technical details like the sk_buff packet representation in Linux, the Netfilter packet flow, basic iptables usage, and differences between stateful and stateless firewalls.
1. The document describes various Moshell commands used for managing RBS nodes.
2. The acc 0 manualrestart command is used to restart the RBS node, while the pol 5 5 command polls the node every 5 seconds to check when the MO service is ready after restart.
3. Other commands described are for checking CV configuration (cvcu, cvls), managing CVs (cvset, cvmk, cvrm), and accessing measurement data (st mme, ue print).
This document provides information on various debugging and profiling tools that can be used for Ruby including:
- lsof to list open files for a process
- strace to trace system calls and signals
- tcpdump to dump network traffic
- google perftools profiler for CPU profiling
- pprof to analyze profiling data
It also discusses how some of these tools have helped identify specific performance issues in Ruby, such as excessive calls to sigprocmask, and memcpy calls slowing down EventMachine when used with threads.
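Spotting "excessive calls to sigprocmask" in strace output usually starts with a frequency count. Here is a minimal sketch of that analysis (the strace lines below are hypothetical samples, not output from the document):

```python
import re
from collections import Counter

# Hypothetical strace output lines, in the style of `strace -p <pid>`.
strace_output = """\
sigprocmask(SIG_BLOCK, [INT], NULL) = 0
sigprocmask(SIG_UNBLOCK, [INT], NULL) = 0
read(5, "GET / HTTP/1.1", 8192) = 14
sigprocmask(SIG_BLOCK, [INT], NULL) = 0
write(5, "HTTP/1.1 200 OK", 15) = 15
"""

def count_syscalls(text):
    # The syscall name is everything before the first '(' on each line.
    names = re.findall(r"^(\w+)\(", text, re.MULTILINE)
    return Counter(names)

counts = count_syscalls(strace_output)
```

In practice `strace -c` produces this summary directly, but post-processing the raw trace lets you correlate the noisy syscall with what the application was doing at the time.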
This document provides an overview of Rubinius, an implementation of the Ruby programming language that uses Just-In-Time (JIT) compilation. It discusses key aspects of Rubinius including its virtual machines, garbage collectors, bytecode compilers, core library, primitives systems, JIT compilers, and use of Rubyspec for testing. It also briefly describes the compacting generational garbage collector, inline caching, call counting, and debugging and profiling capabilities. The document encourages contributing to Rubinius and shares an example commit adding POSIX safety checks to the Process module.
The document discusses building a Ruby debugger by collecting data on Ruby objects in memory and analyzing that data. It describes two versions of the debugger: Version 1 collects basic data but requires patching Ruby and has limited analysis, while Version 2 called Memprof collects more detailed data in JSON format without patching Ruby and allows deeper analysis using MongoDB. The second version provides a way to visualize and analyze Ruby memory usage and detect potential memory leaks.
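The kind of analysis a JSON object dump enables can be sketched in a few lines: group live objects by type, counting instances and bytes, to surface likely leaks. The dump format here is a simplified, hypothetical stand-in for what a Memprof-style tool emits:

```python
import json
from collections import Counter

# Hypothetical one-JSON-object-per-line heap dump.
dump = """\
{"type": "String", "size": 40}
{"type": "Array", "size": 120}
{"type": "String", "size": 56}
{"type": "Hash", "size": 232}
{"type": "String", "size": 40}
"""

def summarize(dump_text):
    # Tally object counts and total bytes per type; a type whose
    # counts only ever grow between snapshots is a leak suspect.
    counts, total_bytes = Counter(), Counter()
    for line in dump_text.splitlines():
        obj = json.loads(line)
        counts[obj["type"]] += 1
        total_bytes[obj["type"]] += obj["size"]
    return counts, total_bytes

counts, sizes = summarize(dump)
```

Loading the same records into MongoDB, as the document describes, supports the same grouping via aggregation queries, plus ad hoc drill-down into individual objects.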
This document discusses open source hardware and provides information on file formats and documentation needed for open hardware projects, including mechanical diagrams, schematics, component lists, layout diagrams, firmware, and how to build a Debian root filesystem for embedded systems. It also summarizes some open source operating systems optimized for specific open hardware platforms like the Raspberry Pi and Cubieboard.
OpenStack is a cloud operating system that controls large pools of computing, storage, and networking resources throughout a datacenter. It is made up of several components like Horizon (dashboard), Keystone (identity), Nova (compute), Neutron (networking), Swift (object storage), and Cinder (block storage) that are all managed through a central dashboard. DevStack is a shell script used to deploy a complete OpenStack development environment locally and is useful for OpenStack development by beginners. The presentation demonstrated installing and using DevStack to deploy an OpenStack development environment.
This document outlines the steps for a first contribution to the OpenStack project, including getting necessary accounts, setting up a development environment, selecting a bug to work on, fixing the bug, committing changes, and submitting the patch for review. Key steps are registering Launchpad and OpenStack Foundation accounts, cloning the devstack repository to set up a development environment, finding a suitable bug to work on, addressing it by creating a topic branch and testing changes, committing the fix, and submitting it for upstream review. Contact information is provided for questions.
The document discusses CoprHD, an open source software-defined storage controller that automates storage provisioning across heterogeneous storage infrastructure. It summarizes CoprHD's key capabilities in automating storage lifecycle management and integrating with cloud stacks like OpenStack. The document also provides an overview of CoprHD architecture and describes how CoprHD can operate as a Cinder driver within OpenStack. It outlines CoprHD's interoperability with OpenStack through different integration methods and concludes with information on the CoprHD community.
Openstack-Ansible is a Rackspace initiative that provides an automated way to deploy OpenStack using Ansible playbooks and roles. It pulls services from Git repositories and uses LXC containers and Ansible to deploy OpenStack on single to thousands of nodes in a scalable way. The document discusses why OpenStack deployment is difficult, outlines the OSAD architecture, configuration, and usage, and how OpenStack services are deployed and scaled out to additional compute nodes using Openstack-Ansible.
The document provides an overview of the OpenStack contribution workflow, including setting up accounts, contributing code through Gerrit code reviews, tracking bugs on Launchpad, and submitting code changes through the Git review process. Key steps are registering an IRC and Launchpad account, signing the CLA, adding SSH keys, reviewing code on Gerrit, tracking bugs on Launchpad, submitting code changes through Git, and addressing feedback in the review process. Project mailing lists, IRC channels, and the check queue status site are also referenced as resources for contributors.
Tempest is an Openstack test suite which runs against all the OpenStack service endpoints. It makes sure that all the OpenStack components work together properly and that no APIs are changed. Tempest is a "gate" for all commits to OpenStack repositories and will prevent merges if tests fail.
GUTS is a workload migration engine that automatically migrates existing workloads and virtual machines from previous generation virtualization platforms to OpenStack. It supports migrating VMs, volumes, networks, users, and other resources between OpenStack environments or from platforms like VMware to OpenStack. GUTS has API, scheduler, and migration services to orchestrate the migrations. It can convert disk formats and manage hypervisor-specific tools during the migration process. Future plans include supporting more hypervisors and resource types.
The document discusses Ceph, an open-source software-defined storage platform commonly used with OpenStack. It provides an overview of Ceph attributes, architecture, components like monitors, OSDs and placement groups, and how it can provide unified storage. New features in the recent Ceph Jewel release are also covered, such as RBD mirroring and RADOS gateway improvements. The presentation aims to establish Ceph as the preferred storage solution ("buddy") for OpenStack deployments.
Tempest is the OpenStack integration test suite. It uses unittest and nosetest frameworks to run API calls against OpenStack services like Nova, Glance, Keystone, etc. and validate the responses. Tempest tests include smoke, positive, negative, stress and white box tests. It has a modular structure with common, services, and tests directories. Tempest plays an important role in OpenStack continuous integration by running on proposed code changes to check for regressions.
Who carries your container? Zun or Magnum? - Madhuri Kumari
This document summarizes two OpenStack container projects - Magnum and Zun. Magnum provides an API to manage container infrastructure by leveraging Heat, Nova, and Neutron to provision container orchestration engines like Kubernetes and Docker Swarm. Zun provides a container service with APIs for launching and managing containers across different technologies in an integrated manner with OpenStack services like Keystone, Nova, Neutron, Glance, and Cinder. The document compares the two projects and suggests using Magnum when wanting OpenStack to provide infrastructure for self-managed containers, and using Zun when wanting OpenStack to provision and manage containers directly.
This document summarizes a presentation about Open Platform for Network Functions Virtualization (OPNFV). It discusses NFV challenges for telecom operators and introduces OPNFV as an open source platform that aims to develop and test an integrated virtual network functions infrastructure. Key aspects of OPNFV covered include its reference architecture, goals of contributing to relevant open source projects and establishing an NFV ecosystem, and examples of feature development and community labs/testing activities.
The document discusses reverse engineering the firmware of Swisscom's Centro Grande modems. It identifies several vulnerabilities found, including a command overflow issue that allows complete control of the device by exceeding the input buffer, and multiple buffer overflow issues that can be exploited to execute code remotely by crafting specially formatted XML files. Details are provided on the exploitation techniques and timeline of coordination with Swisscom to address the vulnerabilities.
- The document discusses various Linux system log files such as /var/log/messages, /var/log/secure, and /var/log/cron and provides examples of log entries.
- It also covers log rotation tools like logrotate and logwatch that are used to manage log files.
- Networking topics like IP addressing, subnet masking, routing, ARP, and tcpdump for packet sniffing are explained along with examples.
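The IP addressing and subnet masking topics above reduce to a few computations that Python's stdlib ipaddress module performs directly (the example network is an arbitrary private range chosen for illustration):

```python
import ipaddress

# A /26 carved out of a private range: 64 addresses,
# of which the first (network) and last (broadcast) are reserved.
net = ipaddress.ip_network("192.168.10.64/26")

netmask = str(net.netmask)            # dotted-quad form of the /26 mask
broadcast = str(net.broadcast_address)
first_host = str(net[1])              # first usable host address
last_host = str(net[-2])              # last usable host address
```

The same module answers membership questions (`addr in net`) and subnet splitting (`net.subnets()`), which covers most of the arithmetic done by hand in subnetting exercises.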
The document summarizes Maycon Vitali's presentation on hacking embedded devices. It includes an agenda covering extracting firmware from devices using tools like BusPirate and flashrom, decompressing firmware to view file systems and binaries, emulating binaries using QEMU, reverse engineering code to find vulnerabilities, and details four vulnerabilities discovered in Ubiquiti networking devices designated as CVEs. The presentation aims to demonstrate common weaknesses in embedded device security and how tools can be used to analyze and hack these ubiquitous connected systems.
Using Libtracecmd to Analyze Your Latency and Performance Troubles - ScyllaDB
Trying to figure out why your application is responding late can be difficult, especially if it is because of interference from the operating system. This talk will briefly go over how to write a C program that can analyze what in the Linux system is interfering with your application. It will use trace-cmd to enable kernel trace events as well as tracing lock functions, and will then give a quick tutorial on how to use libtracecmd to read the resulting trace.dat file and uncover the cause of interference to your application.
Velocity 2017: Performance analysis superpowers with Linux eBPF - Brendan Gregg
Talk for Velocity 2017 by Brendan Gregg: Performance analysis superpowers with Linux eBPF.
"Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will investigate this new technology, which sooner or later will be available to everyone who uses Linux. The talk will dive deep on these new tracing, observability, and debugging capabilities. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
This document discusses the evolution of systems performance analysis tools from closed source to open source environments.
In the early 2000s with Solaris 9, performance analysis was limited due to closed source tools that provided only high-level metrics. Opening the Solaris kernel code with OpenSolaris in 2005 allowed deeper insight through understanding undocumented metrics and dynamic tracing tools like DTrace. This filled observability gaps across the entire software stack.
Modern performance analysis leverages both traditional Unix tools and new dynamic tracing tools. With many high-resolution metrics available, the focus is on visualization and collecting metrics across cloud environments. Overall open source improved systems analysis by providing full source code access.
USENIX ATC 2017: Performance Superpowers with Enhanced BPF - Brendan Gregg
Talk for USENIX ATC 2017 by Brendan Gregg
"The Berkeley Packet Filter (BPF) in Linux has been enhanced in very recent versions to do much more than just filter packets, and has become a hot area of operating systems innovation, with much more yet to be discovered. BPF is a sandboxed virtual machine that runs user-level defined programs in kernel context, and is part of many kernels. The Linux enhancements allow it to run custom programs on other events, including kernel- and user-level dynamic tracing (kprobes and uprobes), static tracing (tracepoints), and hardware events. This is finding uses for the generation of new performance analysis tools, network acceleration technologies, and security intrusion detection systems.
This talk will explain the BPF enhancements, then discuss the new performance observability tools that are in use and being created, especially from the BPF compiler collection (bcc) open source project. These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and much more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations.
Because these BPF enhancements are only in very recent Linux (such as Linux 4.9), most companies are not yet running new enough kernels to be exploring BPF yet. This will change in the next year or two, as companies including Netflix upgrade their kernels. This talk will give you a head start on this growing technology, and also discuss areas of future work and unsolved problems."
pstack, truss etc to understand deeper issues in Oracle databaseRiyaj Shamsudeen
The document discusses various process monitoring and debugging tools for Oracle databases like truss, pstack and pfiles. It provides examples of using truss to trace system calls of processes like PMON and DBWR. It demonstrates how truss can be used to see shared memory segment creation during database startup and process attachment. It also summarizes the process creation steps seen during connection creation in Oracle.
Talk for AWS re:Invent 2014. Video: https://www.youtube.com/watch?v=7Cyd22kOqWc . Netflix tunes Amazon EC2 instances for maximum performance. In this session, you learn how Netflix configures the fastest possible EC2 instances, while reducing latency outliers. This session explores the various Xen modes (e.g., HVM, PV, etc.) and how they are optimized for different workloads. Hear how Netflix chooses Linux kernel versions based on desired performance characteristics and receive a firsthand look at how they set kernel tunables, including hugepages. You also hear about Netflix’s use of SR-IOV to enable enhanced networking and their approach to observability, which can exonerate EC2 issues and direct attention back to application performance.
This document discusses the crash reporting mechanism in Tizen. It describes the crash client, which handles crash signals and generates crash reports. It covers Samsung's crash-work-sdk and Intel's corewatcher crash clients. It also discusses the crash server that receives reports and the CrashDB web interface. Finally, it mentions crash reason location algorithms.
When your whole system is unresponsive, how do you investigate the failure?
We'll see how to capture a memory dump for offline analysis with the kdump system,
then how to analyze it with the crash utility,
and finally how to use crash on a running system to modify kernel memory (at your own risk!).
Oracle Architecture document discusses:
1. The cost of an Oracle Enterprise Edition license is $47,500 per processor.
2. It provides an overview of key Oracle components like the instance, database, listener and cost based optimizer.
3. It demonstrates how to start an Oracle instance, check active processes, mount and open a database, and query it locally and remotely after starting the listener.
CloudForecast is a system monitoring and visualization tool that uses Perl and RRDTool to collect data from servers and generate graphs. It collects metrics like CPU usage, network traffic, and Gearman worker status. Data is stored in RRD files and a SQLite database. A radar component collects data and a web interface is used to view graphs generated from the collected data.
Building an Automated Behavioral Malware Analysis Environment using Free and ... (Jim Clausing)
The document describes building an automated malware behavioral analysis environment using free and open-source tools. It details setting up analysis machines running Debian, installing analysis tools including Volatility, RegRipper, and AIDE. Samples are submitted to the machines via SSH and analyzed for network traffic using tools like tcpdump, DNS queries with fauxDNS, and open ports with connections. The results including OS identification, registry changes, and network indicators are summarized for analysts.
This document provides an introduction to DTrace and discusses its key features and capabilities. It covers:
1. What DTrace is and how it can be used to trace operating systems and programs with very low overhead.
2. The different ways DTrace can be used, including tracing system calls, kernel functions, user processes, and custom probes added to programs.
3. How DTrace scripts are structured using probes, filters, and actions. Variables that can be used like timestamps.
4. Examples of using DTrace to trace network activity by probe name, argument definitions, and creating DTrace programs.
This slide will show you how to use SOFA to do performance analysis of CPU/GPU cooperative programs, especially programs running with deep software stacks like TensorFlow, PyTorch, etc.
source code at:
https://github.com/cyliustack/sofa
OSSNA 2017 Performance Analysis Superpowers with Linux BPF (Brendan Gregg)
Talk by Brendan Gregg for OSSNA 2017. "Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will be a dive deep on these new tracing, observability, and debugging capabilities, which sooner or later will be available to everyone who uses Linux. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
The document discusses hacking the Swisscom modem by exploiting default credentials to gain access. Upon login, the author runs commands to investigate the system such as viewing configuration files and mapping the internal network. Various system details are discovered including the Linux kernel version and software components.
The document summarizes Engine Yard's Partner Junction Program for Q1 2014. It outlines the program's mission to leverage Engine Yard's market leadership through strategic partnerships. It details benefits for partners at different tiers, including access to client usage reports, performance monitoring, project leads, and financial incentives. Joint marketing activities are also described, such as web presence, blog posts, case studies, and co-hosted events. The goal is for partners to engage, execute, and excel with Engine Yard.
Getting Started with PHP on Engine Yard Cloud (Engine Yard)
Topics Covered:
• How to deploy a PHP application to Engine Yard
• How to use Composer to automate dependency management
• The key differences between Orchestra and Engine Yard Cloud
We’re excited to announce that we are evolving our cloud application architecture to be more flexible and modular, giving you greater control of your environment and more choices for components, deployment options and infrastructure.
During this webcast we'll provide more information on Engine Yard Cloud's new cluster model, infrastructure abstraction layer and monitoring and alerting agent, share what's coming and have an open Q&A to answer your questions.
This presentation was prepared for a Webcast where John Yerhot, Engine Yard US Support Lead, and Chris Kelly, Technical Evangelist at New Relic discussed how you can scale and improve the performance of your Ruby web apps. They shared detailed guidance on issues like:
Caching strategies
Slow database queries
Background processing
Profiling Ruby applications
Picking the right Ruby web server
Sharding data
Attendees will learn how to:
Gain visibility on site performance
Improve scalability and uptime
Find and fix key bottlenecks
See the on-demand replay:
http://pages.engineyard.com/6TipsforImprovingRubyApplicationPerformance.html
Achieving PCI compliance can be a complex, time-consuming, and expensive undertaking. However, with the right approach it can be substantially less burdensome. In this webcast, we will provide background and recommendations to help you make the best possible decisions regarding PCI for your PaaS-based application. If you currently accept, or are contemplating accepting a payment card on your web application, this webcast is for you.
In this presentation you will learn about:
-An overview of PCI
-How to scope your environment for PCI compliance
-Ways to make compliance more manageable, and
-Things to consider when approaching PCI compliance on a PaaS provider.
To view the full webcast on-demand: http://pages.engineyard.com/an-introduction-to-pci-compliance-on-a-paas.html
Presenter: Danish Khan
Presentation from: RubyConf Uruguay
Date: November 12, 2011
Description:
Most developers hate having to write documentation, yet complain about how the tools and libraries we use lack documentation. How do you get developers to write good documentation without feeling like they're wasting their time? There are plenty of good documentation tools out there, such as TomDoc, YarDoc, and RDoc. These tools are useful for creating documentation for tools, gems, and various open source projects, and each one has its unique way of making documentation easier for developers. However, how do you manage documentation for a product? At Engine Yard we have our Engine Yard Cloud platform. Good external documentation for our customers is very important to us. We want to make sure they can easily understand how to use our platform and be able to accomplish what they need. However, it has been difficult to get good documentation out quickly.
Check out the audio from Danish's talk here:
http://www.eventials.com/rubyconfuy/recorded/M2UzZTJkMzY2MzdiNTg2NTUxNWM1MzI3NWY1YjRhMzYjIzQ1Ng_3D_3D
Innovate Faster in the Cloud with a Platform as a Service (Engine Yard)
Presentation: "Innovate Faster in the Cloud with a PaaS" webinar
Presenter: Jacob Lehrbaum
Date: November 18, 2011
Recorded presentation:
http://pages.engineyard.com/InnovateFasterwithPaaS.html
If you are building a new application today you are likely considering a move to the cloud. If so, you should take a careful look at Platform as a Service (PaaS). Using a PaaS makes it fast and easy to deploy and run high-impact applications by relieving the developer from having to integrate, configure, test, and maintain the platform-level software necessary to run applications. It will also improve your uptime, help you scale with your business and can even save you money.
The document is about an introductory lesson on Ruby programming. It discusses Ruby's history and creator Yukihiro Matsumoto. It then covers basic Ruby concepts like variables, methods, and classes through examples. It also provides instructions on installing Ruby on Mac and Windows systems. The overall message is that learning Ruby can be productive and enjoyable.
Hiro Asari's Devoxx 2011 presentation
Presentation description:
Java developers wear many hats: they manage builds, develop applications, write command-line scripts, and must master all tiers. If only there were a way to make these tasks simple and fun.
Enter JRuby.
Build engineers can write or enhance builds with Ruby, never losing a thing they depend on from Ant or Maven. Ruby offers several elegant testing options that work great with JRuby. Web developers can create Rails applications in minutes, effortlessly incorporating the latest Web technologies while taking advantage of the existing Java libraries. JRuby supports binding native libraries with FFI (foreign function interface). Command-line scripts? They're easy with JRuby's system-level features.
Come to this session to learn how JRuby makes you a happy developer.
High Performance Ruby: Evented vs. Threaded (Engine Yard)
The document discusses the differences between evented and threaded concurrency models for Ruby applications. It explains that evented concurrency handles I/O events asynchronously while threaded concurrency uses threads to perform actual work. The document recommends using an evented model with libraries like Nginx and Trinidad to serve web applications, allowing code to be written as if it were threaded for simplicity.
Release Early & Release Often: Reducing Deployment Friction (Engine Yard)
Andy Delcambre's RubyConf 2011 presentation
Presentation Description:
At Engine Yard, we release the main Engine Yard Cloud code base at least once a day, many times more often than that. Yet we still have a fairly rigorous testing and release process. We have simply automated and connected as much of the process as possible. This talk covers how we handle deployments, how it ties in with our continuous integration service, and how we automate and tie it all together.
Recorded presentation:
http://confreaks.net/videos/667-rubyconf2011-release-early-and-release-often-reducing-deployment-friction
This document provides an overview of JRuby, highlighting both advantages and disadvantages compared to Ruby implementations. Key points include:
- JRuby runs Ruby code on the Java Virtual Machine (JVM), allowing access to Java libraries and tools while retaining Ruby syntax and semantics.
- The memory footprint of JRuby applications is initially larger than CRuby due to object sizes, but memory usage over time can be smaller with JRuby's garbage collection.
- Features like fork, continuations, and some extensions may be missing or disabled in JRuby.
- JRuby provides multiple Ruby versions and allows running multiple Ruby applications in a single JVM process.
- Performance benchmarks show JRuby can be competitive with CRuby.
Rubinius is an implementation of the Ruby programming language that aims to improve on MRI (the standard Ruby interpreter) by making Ruby faster and more memory efficient. It does this by compiling Ruby code to machine instructions for faster execution, using more lightweight objects that use less memory, and having a more advanced garbage collector. The presenters discuss Rubinius' goals of providing a drop-in replacement for MRI with better performance and memory usage in order to allow Ruby to be used for more and larger applications. They demonstrate how Rubinius can improve the scalability of Ruby applications.
Rails Antipatterns | Open Session with Chad Pytel (Engine Yard)
As developers worldwide have adopted the Ruby on Rails web framework, many have fallen victim to common mistakes that reduce code quality, performance, reliability, stability, scalability, and maintainability. Even experienced developers will find that they can reevaluate the work they've done and make it better.
In this session, Chad Pytel will provide an overview of some of these common mistakes as well as take questions from the audience and provide real-world advice. Bring your issues and get expert advice on how to bring your code in line with today's best practices.
The document discusses JRuby, an implementation of the Ruby programming language that runs on the Java Virtual Machine. It provides an overview of JRuby, including its goals of helping with day-to-day development. The author introduces himself and his experience with Ruby, Java, and JRuby. He then covers topics like installing JRuby, integrating Java and Ruby code, using JRuby for scripting and build tools, and testing with JRuby. Web development with Ruby on Rails on JRuby is also mentioned.
The document discusses developing a programming language called Prattle. It provides examples of the language's syntax for elements like self, true, false, nil, numbers, strings, unary sends, keyword sends, blocks, and operators. It also describes running code in Prattle using a REPL and compiling Prattle code to Ruby bytecode.
The document discusses using the Fog gem to interact with cloud infrastructure providers through a unified interface. It provides examples of using Fog to get a list of providers and services, retrieve and create resources using collections and models, and execute requests directly against cloud APIs. Reader exercises demonstrate bootstrapping a server and executing SSH commands on the server instance.
The document discusses Rubinius, an implementation of the Ruby programming language. It describes how Rubinius compiles source code to bytecode and executes it using a virtual machine and just-in-time compiler. It highlights features like debugging tools, documentation, and upcoming support for concurrency and Ruby 1.9. It also notes how Rubinius has inspired other Ruby projects and tools.
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called OpenTelemetry, but before diving into the specifics, we'll start with de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, and percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we'll explore the OpenTelemetry community: its Special Interest Groups (SIGs), repositories, and how to become not only an end-user, but possibly a contributor. We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of OpenTelemetry, and know how to take their first steps to an open-source contribution!
Key Takeaways: Open source, vendor-neutral instrumentation is an exciting new reality as the industry standardizes on OpenTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve, and in order to achieve ubiquity, the project would benefit from growing our contributor community.
14. lsof -nPp <pid>
-n
Inhibits the conversion of network numbers to host names.
-P
Inhibits the conversion of port numbers to names for network files
FD   TYPE NAME
cwd  DIR  /var/www/myapp
txt  REG  /usr/bin/ruby
mem  REG  /json-1.1.9/ext/json/ext/generator.so
mem  REG  /json-1.1.9/ext/json/ext/parser.so
mem  REG  /memcached-0.17.4/lib/rlibmemcached.so
mem  REG  /mysql-2.8.1/lib/mysql_api.so
0u   CHR  /dev/null
1w   REG  /usr/local/nginx/logs/error.log
2w   REG  /usr/local/nginx/logs/error.log
3u   IPv4 10.8.85.66:33326->10.8.85.68:3306 (ESTABLISHED)
10u  IPv4 10.8.85.66:33327->10.8.85.68:3306 (ESTABLISHED)
11u  IPv4 127.0.0.1:58273->127.0.0.1:11211 (ESTABLISHED)
12u  REG  /tmp/RackMultipart.28957.0
33u  IPv4 174.36.83.42:37466->69.63.180.21:80 (ESTABLISHED)

at a glance: the json, memcached and mysql extensions are loaded; mysql,
memcached and outbound http connections are open
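The data lsof reads is exposed under /proc on Linux, so the FD/NAME view above can be approximated without lsof at all. As a hedged, Linux-only sketch (not part of the original slides), here is a few lines of Ruby that list the current process's open file descriptors:

```ruby
# Minimal, Linux-only sketch of lsof's FD/NAME columns for the current
# process: every entry in /proc/self/fd is a symlink to the open file.
fds = Dir.glob("/proc/self/fd/*").map do |path|
  target = File.readlink(path) rescue "?"   # the link may vanish mid-read
  [File.basename(path), target]
end

fds.sort_by { |num, _| num.to_i }.each do |num, target|
  puts format("%-4s %s", num, target)
end
```

Pointing the same glob at /proc/<pid>/fd (with adequate permissions) inspects another process, which is roughly what `lsof -p <pid>` does.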
15. STRACE
trace system calls and signals
strace -cp <pid>
strace -ttTp <pid> -o <file>
16. strace -cp <pid>
-c
Count time, calls, and errors for each system call and report a
summary on program exit.
-p pid
Attach to the process with the process ID pid and begin tracing.
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
50.39 0.000064 0 1197 592 read
34.65 0.000044 0 609 writev
14.96 0.000019 0 1226 epoll_ctl
0.00 0.000000 0 4 close
0.00 0.000000 0 1 select
0.00 0.000000 0 4 socket
0.00 0.000000 0 4 4 connect
0.00 0.000000 0 1057 epoll_wait
------ ----------- ----------- --------- --------- ----------------
100.00 0.000127 4134 596 total
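The summary table is also easy to post-process when you want to rank syscalls programmatically. A small sketch, with the captured rows embedded as a literal string for illustration:

```ruby
# Rank syscalls by time share from an `strace -c` summary body.
# The rows below are copied from the table above (abridged).
SUMMARY = <<~EOS
  50.39    0.000064           0      1197       592 read
  34.65    0.000044           0       609           writev
  14.96    0.000019           0      1226           epoll_ctl
EOS

rows = SUMMARY.lines.map do |line|
  f = line.split
  # columns: %time, seconds, usecs/call, calls, [errors,] syscall
  { pct: f[0].to_f, calls: f[3].to_i, syscall: f.last }
end

top = rows.max_by { |r| r[:pct] }
puts "hottest syscall: #{top[:syscall]} (#{top[:pct]}% of syscall time)"
```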
17. strace -ttTp <pid> -o <file>
-t
Prefix each line of the trace with the time of day.
-tt
If given twice, the time printed will include the microseconds.
-T
Show the time spent in system calls.
-o filename
Write the trace output to the file filename rather than to stderr.
epoll_wait(9, {{EPOLLIN, {u32=68841296, u64=68841296}}}, 4096, 50) = 1 <0.033109>
accept(10, {sin_port=38313, sin_addr="127.0.0.1"}, [1226]) = 22 <0.000014>
fcntl(22, F_GETFL) = 0x2 (flags O_RDWR) <0.000007>
fcntl(22, F_SETFL, O_RDWR|O_NONBLOCK) = 0 <0.000008>
setsockopt(22, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.000008>
accept(10, 0x7fff5d9c07d0, [1226]) = -1 EAGAIN <0.000014>
epoll_ctl(9, EPOLL_CTL_ADD, 22, {EPOLLIN, {u32=108750368, u64=108750368}}) = 0 <0.000009>
epoll_wait(9, {{EPOLLIN, {u32=108750368, u64=108750368}}}, 4096, 50) = 1 <0.000007>
read(22, "GET / HTTP/1.1\r"..., 16384) = 772 <0.000012>
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 <0.000007>
poll([{fd=5, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout) <0.000008>
write(5, "1000000-0003SELECT * FROM `table`"..., 56) = 56 <0.000023>
read(5, "25101,20x234m"..., 16384) = 284 <1.300897>
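With -T in play, every line carries its duration in angle brackets, so a throwaway script can flag the slow calls (like the 1.3s mysql read above). The sample lines below are abridged from the trace, not generated by it:

```ruby
# Flag syscalls slower than 0.5s in `strace -T` output; the <...>
# suffix is the wall time spent inside each call.
TRACE = <<~EOS
  write(5, "SELECT * FROM `table`"..., 56) = 56 <0.000023>
  read(5, ""..., 16384) = 284 <1.300897>
EOS

slow = TRACE.lines.select { |line| line[/<([\d.]+)>\s*\z/, 1].to_f > 0.5 }
slow.each { |line| print line }
```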
21. stracing ruby: sigprocmask
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00 0.326334 0 3568567 rt_sigprocmask
0.00 0.000000 0 9 read
0.00 0.000000 0 10 open
0.00 0.000000 0 10 close
0.00 0.000000 0 9 fstat
0.00 0.000000 0 25 mmap
------ ----------- ----------- --------- --------- ----------------
100.00 0.326334 3568685 0 total
• debian/redhat compile ruby with --enable-pthread
• uses a native thread timer for SIGVTALRM
• causes excessive calls to sigprocmask: 30% slowdown!
22. TCPDUMP
dump traffic on a network
tcpdump -i eth0 -s 0 -nqA
tcp dst port 3306
23. tcpdump -i <eth> -s <len> -nqA <expr>
tcpdump -i <eth> -w <file> <expr>
-i <eth>
Network interface.
-s <len>
Snarf len bytes of data from each packet.
-n
Don't convert addresses (host addresses, port numbers) to names.
-q
Quiet output. Print less protocol information.
-A
Print each packet (minus its link level header) in ASCII.
-w <file>
Write the raw packets to file rather than printing them out.
<expr>
libpcap expression, for example:
tcp src port 80
tcp dst port 3306
24. tcp dst port 80
19:52:20.216294 IP 24.203.197.27.40105 >
174.37.48.236.80: tcp 438
E...*.@.l.%&.....%0....POx..%s.oP.......
GET /poll_images/cld99erh0/logo.png HTTP/1.1
Accept: */*
Referer: http://apps.facebook.com/realpolls/?
_fb_q=1
25. tcp dst port 3306
19:51:06.501632 IP 10.8.85.66.50443 >
10.8.85.68.3306: tcp 98
E..."K@.@.Yy
.UB
.UD.....z....L............
GZ.y3b..[......W....
SELECT * FROM `votes` WHERE (`poll_id` =
72621) LIMIT 1
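The ASCII payload from -A mixes protocol bytes with the query text, so extracting the SQL is a matter of skipping to the first SQL verb. A hedged sketch, with the packet text above embedded as a literal string:

```ruby
# Pull the SQL statement out of a tcpdump -A payload dump: skip the
# binary protocol bytes, keep everything from the first SQL verb on.
DUMP = <<~EOS
  GZ.y3b..[......W....
  SELECT * FROM `votes` WHERE (`poll_id` =
  72621) LIMIT 1
EOS

sql = DUMP.lines.drop_while { |l| l !~ /\A\s*(SELECT|INSERT|UPDATE|DELETE)\b/ }
puts sql.map(&:strip).join(" ")
```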
34. require 'sinatra'

    get '/sleep' do
      sleep 0.25
      'done'
    end

    get '/compute' do
      proc{ |n|
        a,b = 0,1
        n.times{ a,b = b,a+b }
        b
      }.call(10_000)
      'done'
    end

    $ ab -c 1 -n 50 http://127.0.0.1:4567/compute
    $ ab -c 1 -n 50 http://127.0.0.1:4567/sleep

    • Sampling profiler: 232 samples total
    • 83 samples were in /compute
    • 118 samples had /compute on the stack but were in another function
    • /compute accounts for 50% of process, but only 35% of time was in /compute itself
== Sinatra has ended his set (crowd applauds)
PROFILE: interrupts/evictions/bytes = 232/0/2152
Total: 232 samples
83 35.8% 35.8% 118 50.9% Sinatra::Application#GET /compute
56 24.1% 59.9% 56 24.1% garbage_collector
35 15.1% 75.0% 113 48.7% Integer#times
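The self vs. total split in that profile is plain arithmetic over sample counts; redoing the slide's numbers makes the 35.8%/50.9% columns concrete:

```ruby
# Self-time vs total-time from the sampling counts on the slide:
# 232 samples overall, 83 executing /compute's own code, 118 with
# /compute anywhere on the stack.
total_samples = 232
self_samples  = 83
stack_samples = 118

self_pct  = (self_samples  * 100.0 / total_samples).round(1)
total_pct = (stack_samples * 100.0 / total_samples).round(1)
puts "self: #{self_pct}%  total: #{total_pct}%"
```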
43. ltrace -F <conf> -b -g -x <sym>
-b
Ignore signals.
-g
Ignore libraries linked at compile time.
-F <conf>
Read prototypes from config file.
-x <sym>
Trace calls to the function sym.
-s <num>
Show first num bytes of string args.
-F ltrace.conf
int mysql_real_query(addr,string,ulong);
void garbage_collect(void);
int memcached_set(addr,string,ulong,string,ulong);
47. GDB
the GNU debugger
gdb <executable>
gdb attach <pid>
48. Debugging Ruby Segfaults
test_segv.rb:4: [BUG] Segmentation fault
ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin9.7.0]
segv.c:

    #include "ruby.h"

    VALUE
    segv()
    {
      VALUE array[1];
      array[1000000] = NULL;
      return Qnil;
    }

    void
    Init_segv()
    {
      rb_define_method(rb_cObject, "segv", segv, 0);
    }

test_segv.rb:

    def test
      require 'segv'
      4.times do
        Dir.chdir '/tmp' do
          Hash.new{ segv }[0]
        end
      end
    end

    sleep 10
    test()
49. 1. Attach to running process
$ ps aux | grep ruby
joe 23611 0.0 0.1 25424 7540 S Dec01 0:00 ruby test_segv.rb
$ sudo gdb ruby 23611
Attaching to program: ruby, process 23611
0x00007fa5113c0c93 in nanosleep () from /lib/libc.so.6
(gdb) c
Continuing.
Program received signal SIGBUS, Bus error.
segv () at segv.c:7
7 array[1000000] = NULL;
2. Use a coredump
Process.setrlimit Process::RLIMIT_CORE, 300*1024*1024
$ sudo mkdir /cores
$ sudo chmod 777 /cores
$ sudo sysctl kernel.core_pattern=/cores/%e.core.%s.%p.%t
$ sudo gdb ruby /cores/ruby.core.6.23611.1259781224
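The in-process setrlimit call from step 2 can be made robust against a finite hard limit; a small sketch using the slide's 300MB value (the clamping logic is my addition, not from the slides):

```ruby
# Raise the core-dump size soft limit to 300MB (as on the slide)
# without exceeding the hard limit, then read it back to confirm.
soft, hard = Process.getrlimit(Process::RLIMIT_CORE)
wanted = 300 * 1024 * 1024
limit  = hard == Process::RLIM_INFINITY ? wanted : [wanted, hard].min
Process.setrlimit(Process::RLIMIT_CORE, limit, hard)
puts Process.getrlimit(Process::RLIMIT_CORE).first
```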
50. def test
      require 'segv'
      4.times do
        Dir.chdir '/tmp' do
          Hash.new{ segv }[0]
        end
      end
    end

    test()

(gdb) where
#0  segv () at segv.c:7
#1  0x000000000041f2be in call_cfunc () at eval.c:5727
...
#13 0x000000000043ba8c in rb_hash_default () at hash.c:521
...
#19 0x000000000043b92a in rb_hash_aref () at hash.c:429
...
#26 0x00000000004bb7bc in chdir_yield () at dir.c:728
#27 0x000000000041d8d7 in rb_ensure () at eval.c:5528
#28 0x00000000004bb93a in dir_s_chdir () at dir.c:816
...
#35 0x000000000041c444 in rb_yield () at eval.c:5142
#36 0x0000000000450690 in int_dotimes () at numeric.c:2834
...
#48 0x0000000000412a90 in ruby_run () at eval.c:1678
#49 0x000000000041014e in main () at main.c:48
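When the process is still healthy, gdb's `where` has a pure-Ruby cousin: Kernel#caller returns the Ruby-level backtrace at any point, with no debugger attached. A quick illustration:

```ruby
# Kernel#caller: a Ruby-level `where`, no debugger required.
def inner; caller; end
def outer; inner; end

bt = outer
bt.each { |frame| puts frame }   # file:line:in 'method' frames
```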
56. def test
require 'segv'
4.times do
Dir.chdir '/tmp' do
Hash.new{ segv }[0]
end
end
end
(gdb) ruby threads
test()
0xa3e000 main curr thread THREAD_RUNNABLE WAIT_NONE
node_vcall segv in test_segv.rb:5
node_call test in test_segv.rb:5
node_call call in test_segv.rb:5
node_call default in test_segv.rb:5
node_call [] in test_segv.rb:5
node_call test in test_segv.rb:4
node_call chdir in test_segv.rb:4
node_call test in test_segv.rb:3
node_call times in test_segv.rb:3
node_vcall test in test_segv.rb:9
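On Ruby 1.9 and later, much of what gdb.rb's `ruby threads` recovers from a core is available live from the VM itself. A sketch of the same listing using only `Thread.list`, `Thread#status`, and `Thread#backtrace`:

```ruby
# Plain-Ruby approximation of gdb.rb's `ruby threads`: list every live
# thread with its status and top stack frames.
sleeper = Thread.new { sleep }   # a parked thread, like the mongrel sleeper
sleep 0.2                        # give it time to reach the sleep call
sleeper_status = sleeper.status  # "sleep" once it is parked

Thread.list.each do |t|
  role = t == Thread.current ? "curr thread" : "thread"
  puts format("0x%x %s %s", t.object_id, role, t.status)
  (t.backtrace || []).first(3).each { |frame| puts "  #{frame}" }
end

sleeper.kill
sleeper.join
```

Unlike gdb.rb this requires a cooperating process, but it needs no debugger attach and no debug symbols.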
58. mongrel sleeper thread
0x16814c00 thread THREAD_STOPPED WAIT_TIME(0.47) 1522 bytes
node_fcall sleep in lib/mongrel/configurator.rb:285
node_fcall run in lib/mongrel/configurator.rb:285
node_fcall loop in lib/mongrel/configurator.rb:285
node_call run in lib/mongrel/configurator.rb:285
node_call initialize in lib/mongrel/configurator.rb:285
node_call new in lib/mongrel/configurator.rb:285
node_call run in bin/mongrel_rails:128
node_call run in lib/mongrel/command.rb:212
node_call run in bin/mongrel_rails:281
node_fcall (unknown) in bin/mongrel_rails:19
def run
  @listeners.each {|name,s|
    s.run
  }
  $mongrel_sleeper_thread = Thread.new { loop { sleep 1 } }
end
59. god memory leaks
(gdb) ruby objects arrays
elements  instances
94310     3
94311     3
94314     2
94316     1

5369 arrays, 2863364 member elements
many arrays with 90k+ elements!

instance counts per class:
43  God::Process
43  God::Watch
43  God::Driver
43  God::DriverEventQueue
43  God::Conditions::MemoryUsage
43  God::Conditions::ProcessRunning
43  God::Behaviors::CleanPidFile
45  Process::Status
86  God::Metric
327 God::System::SlashProcPoller
327 God::System::Process
406 God::DriverEvent

5 separate god leaks fixed by Eric Lindvall with the help of gdb.rb!
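In a live process, the same per-class instance counts can be approximated with core `ObjectSpace.each_object`. A sketch, using a hypothetical `Watch` class as a stand-in for classes like `God::Process`:

```ruby
# Live-process counterpart to gdb.rb's per-class instance counts:
# tally instances of a suspect class to spot retention.
class Watch; end
retained = Array.new(43) { Watch.new }   # simulate 43 leaked instances

count = ObjectSpace.each_object(Watch).count
puts "#{count} Watch"
```

Watching this count fail to drop across `GC.start` calls is the usual first confirmation of a leak.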
60. MEMPROF
a heap visualizer for ruby
gem install memprof
open http://memprof.com
github.com/ice799/memprof
62. Memprof.track{
  100.times{ "abc" }
  100.times{ 1.23 + 1 }
  100.times{ Module.new }
}

100 file.rb:2:String
100 file.rb:3:Float
100 file.rb:4:Module

• like bleak_house, but for a given block of code
• use Memprof::Middleware in your webapps to run track per request
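Memprof predates modern CRuby; on Ruby 2.1+ the stdlib `objspace` extension records the same file:line allocation sites natively. A sketch of a `Memprof.track`-style count:

```ruby
# Stdlib counterpart to Memprof.track: objspace's allocation tracing
# records the file and line where each object was allocated.
require 'objspace'

strings = nil
ObjectSpace.trace_object_allocations do
  strings = Array.new(100) { "abc".dup }   # 100 Strings, one call site
end

sites = Hash.new(0)
strings.each do |s|
  site = "#{ObjectSpace.allocation_sourcefile(s)}:#{ObjectSpace.allocation_sourceline(s)}"
  sites[site] += 1
end
sites.each { |site, n| puts "#{n} #{site}:String" }
```

As with Memprof, tracing has overhead, so it is enabled only around the block under investigation.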
64. strings
Memprof.dump{
  "hello" + "world"
}

{
  "_id": "0x19c610",        <- memory address of object
  "file": "file.rb",        <- file and line where string was created
  "line": 2,
  "type": "string",
  "class": "0x1ba7f0",      <- address of the class, "String"
  "class_name": "String",
  "length": 10,             <- length and contents of this string instance
  "data": "helloworld"
}
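This per-object JSON has a direct modern equivalent: `ObjectSpace.dump` from the stdlib `objspace` extension (Ruby 2.1+). Field names differ from Memprof's, but the information is the same; a sketch:

```ruby
# ObjectSpace.dump emits Memprof.dump-style JSON for a single object.
require 'objspace'
require 'json'

str = nil
ObjectSpace.trace_object_allocations do
  str = "hello" + "world"   # tracing adds "file"/"line" to the dump
end

info = JSON.parse(ObjectSpace.dump(str))
puts info["type"]    # "STRING"
puts info["value"]   # "helloworld"
puts info["address"] # memory address, like Memprof's "_id"
```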
65. arrays
Memprof.dump{
  [
    1,
    :b,
    2.2,
    "d"
  ]
}

{
  "_id": "0x19c5c0",
  "class": "0x1b0d18",
  "class_name": "Array",
  "length": 4,
  "data": [
    1,            <- integers and symbols are stored in the array itself
    ":b",
    "0x19c750",   <- floats and strings are separate ruby objects
    "0x19c598"
  ]
}
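Running the same experiment through `ObjectSpace.dump` on modern Ruby shows one change since 1.8: on 64-bit CRuby, small floats are "flonum" immediates, so typically only the string still gets its own heap object. A sketch:

```ruby
# Dump the same array; "references" lists only elements that live on
# the heap. On 64-bit CRuby 1, :b, and 2.2 are all immediates, so
# usually only "d" appears there (platform-dependent, hence unasserted).
require 'objspace'
require 'json'

info = JSON.parse(ObjectSpace.dump([1, :b, 2.2, "d"]))
puts info["type"]    # "ARRAY"
puts info["length"]  # 4
puts (info["references"] || []).inspect
```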
69. Memprof.dump_all("myapp_heap.json")
• dump out every single live object as json
• one per line to specified file
• analyze via
• jsawk/grep
• mongodb/couchdb
• custom ruby scripts
• libyajl + Boost Graph Library
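One "custom ruby scripts" option from the list above can be sketched in a few lines: stream the one-object-per-line JSON file and tally counts by type (the dump lines below are a tiny stand-in for a real `myapp_heap.json`).

```ruby
# Tally a line-per-object JSON heap dump by object type.
require 'json'
require 'tempfile'

def tally_types(path)
  counts = Hash.new(0)
  File.foreach(path) do |line|          # stream: dumps can be huge
    counts[JSON.parse(line)["type"]] += 1
  end
  counts.sort_by { |_, n| -n }          # biggest offenders first
end

dump = Tempfile.new('heap')             # stand-in for myapp_heap.json
dump.puts '{"type":"STRING","value":"a"}'
dump.puts '{"type":"STRING","value":"b"}'
dump.puts '{"type":"ARRAY","length":0}'
dump.flush

tally_types(dump.path).each { |type, n| puts "#{n} #{type}" }
```

`File.foreach` never loads the whole dump into memory, which matters when the dump is itself a record of a bloated heap.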
72. plugging a leak in rails3
• in dev mode, rails3 is leaking 10mb per request
let’s use memprof to find it!
# in environment.rb
require `gem which memprof/signal`.strip
73. plugging a leak in rails3
send the app some requests so it leaks:
$ ab -c 1 -n 30 http://localhost:3000/
tell memprof to dump out the entire heap to json:
$ memprof --pid <pid> --name <dump name> --key <api key>
74. 2519 classes
30 copies of TestController
mongo query for all TestController classes
details for one copy of TestController
75. find references to object
“leak” is on line 178, holding references to all controllers
76. • In development mode, Rails reloads all your application code on every request
• ActionView::Partials::PartialRenderer is caching partials used by each controller as an optimization
• But it ends up holding a reference to every single reloaded version of those controllers
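The leak pattern described above is easy to reproduce in miniature: a long-lived cache keyed by class pins every "reloaded" copy alive. The names below are illustrative stand-ins, not Rails' actual internals.

```ruby
# Minimal reproduction of the PartialRenderer-style leak: a cache that
# survives across requests retains each freshly defined class.
PARTIAL_CACHE = {}   # long-lived, like the renderer's per-class cache

def reload_controller
  Class.new do            # each "request" defines a fresh class,
    def index; "ok"; end  # as Rails dev-mode code reloading does
  end
end

3.times do
  controller = reload_controller
  PARTIAL_CACHE[controller] = {}   # keyed by class => class never collected
end

puts PARTIAL_CACHE.size   # 3 dead copies retained instead of 1 live one
```

The fix is the usual one for such caches: key by something stable (e.g. the class name) or clear the cache on reload.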
77. MORE* MEMPROF
FEATURES
• memprof.trace
• memprof::tracer
* currently under development
78. config.middleware.use(Memprof::Tracer)
{
"time": 4.3442, total time for request
"rails": { rails controller/action
"controller": "test",
"action": "index"
},
"request": { request env info
"REQUEST_PATH": "/test",
"REQUEST_METHOD": "GET"
},
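A tracer like this is an ordinary Rack middleware. A timing-only sketch of the shape (Memprof::Tracer's real implementation also records allocations and GC activity, which is not reproduced here):

```ruby
# Rack middleware that times each request and reports env info,
# in the spirit of Memprof::Tracer's "time"/"request" fields.
class RequestTracer
  def initialize(app)
    @app = app
  end

  def call(env)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    status, headers, body = @app.call(env)
    report = {
      "time"    => (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start).round(4),
      "request" => {
        "REQUEST_PATH"   => env["PATH_INFO"],
        "REQUEST_METHOD" => env["REQUEST_METHOD"],
      },
    }
    warn report.inspect   # a real tracer would ship this somewhere
    [status, headers, body]
  end
end

app = ->(env) { [200, { "content-type" => "text/plain" }, ["ok"]] }
status, _headers, body = RequestTracer.new(app).call(
  "PATH_INFO" => "/test", "REQUEST_METHOD" => "GET"
)
```

Wired up exactly as the slide shows: `config.middleware.use(RequestTracer)`.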
85. EY CASE STUDY
• Example of Professional Services engagement
• Interested? http://tinyurl.com/eyfast
86. LSOF
15u IPv4 TCP ->10.243.63.80:11211 (ESTABLISHED)
20u IPv4 TCP ->10.243.63.80:11211 (ESTABLISHED)
23u IPv4 TCP ->10.243.63.80:11211 (ESTABLISHED)
multiple connections to memcached via different drivers
18r DIR /shared/public/javascripts/packaged
19r DIR /shared/public/javascripts/packaged
22r DIR /shared/public/javascripts/packaged
multiple open handles to javascript assets directories
87. STRACE
% time seconds calls syscall
------ ----------- --------- ---------
26.28 0.054178 8731 read
25.81 0.053216 316519 stat
20.37 0.041993 1 clone
15.83 0.032648 11034 getdents
3.54 0.007309 10326 write
40% of kernel time spent querying the filesystem
88. LTRACE
mysql_query("SELECT * FROM tags WHERE id = 9129")
mysql_query("SELECT * FROM tags WHERE id = 9129")
mysql_query("SELECT * FROM tag_info WHERE tag_id = 9129")
mysql_query("SELECT * FROM tags WHERE id = 9129")
mysql_query("SELECT * FROM tags WHERE id = 9129")
mysql_query("SELECT * FROM tags WHERE id = 9129")
common queries repeated multiple times during a request
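The usual fix for what ltrace exposed here is to memoize identical lookups for the duration of the request. A sketch with hypothetical `TagRepo`/`FakeDB` classes standing in for the app's real data layer:

```ruby
# Request-scoped memoization: identical lookups hit the database once.
class TagRepo
  def initialize(db)
    @db = db
    @cache = {}   # lives for one request, then is discarded
  end

  def find_tag(id)
    @cache[id] ||= @db.query("SELECT * FROM tags WHERE id = #{id}")
  end
end

class FakeDB                       # counts queries instead of running them
  attr_reader :queries
  def initialize; @queries = 0; end
  def query(sql); @queries += 1; { id: 9129 }; end
end

db   = FakeDB.new
repo = TagRepo.new(db)
5.times { repo.find_tag(9129) }    # five lookups...
puts db.queries                    # ...one query reaches the database
```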
89. GDB.RB
(gdb) ruby objects
HEAPS      10
SLOTS      4450607
LIVE       2006769 (45.09%)   <- large ruby heap
FREE       2443838 (54.91%)

hash        12939 (0.64%)
array       38007 (1.89%)
object      47633 (2.37%)
string     234205 (11.67%)
node      1637926 (81.62%)    <- numerous 'node' objects: required libraries and plugins that are no longer in use
91. MEMPROF
wide distribution of response times
expensive controller actions creating many millions of objects per request
92. MEMPROF
37% of response time attributed to garbage collection
93. BEFORE
Time taken for tests: 20.156 seconds
Complete requests: 100
Requests per second: 4.96 [#/sec] (mean)
Time per request: 201.555 [ms] (mean)
AFTER
Time taken for tests: 8.100 seconds
Complete requests: 100
Requests per second: 12.35 [#/sec] (mean)
Time per request: 81.000 [ms] (mean)
2.5x improvement in throughput after addressing some of these issues and tuning GC
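The "time attributed to garbage collection" figure from slide 92 can be measured in any modern Ruby with the stdlib `GC::Profiler`; a sketch:

```ruby
# Measure time spent in GC while a workload runs, the raw number
# behind a "N% of response time in GC" claim.
GC::Profiler.enable
GC.start                      # force at least one profiled collection
200_000.times { Object.new }  # allocation churn triggers further GCs
gc_time = GC::Profiler.total_time   # seconds spent in GC (Float)
GC::Profiler.disable

puts "GC runs: #{GC.stat[:count]}, time in GC: #{gc_time.round(4)}s"
```

Dividing `gc_time` by wall-clock time for a request gives the GC share that the case study reduced by tuning.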
94. QUESTIONS?
Aman Gupta
@tmm1
http://scr.bi/debuggingruby