This document provides an introduction to DTrace and discusses its key features and capabilities. It covers:
1. What DTrace is and how it can be used to trace operating systems and programs with very low overhead.
2. The different ways DTrace can be used, including tracing system calls, kernel functions, user processes, and custom probes added to programs.
4. How DTrace scripts are structured using probes, predicates (filters), and actions, plus the built-in variables, such as timestamps, that can be used in them.
5. Examples of using DTrace to trace network activity, including probe names, argument definitions, and complete DTrace programs.
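The probe, predicate, and action structure from item 3 can be sketched as a minimal D script; the probe and process names here are illustrative, not taken from the original slides:

```d
/* Fire on every entry to the read(2) system call. */
syscall::read:entry
/execname == "sshd"/    /* predicate: only trace processes named sshd */
{
    /* action: print the built-in timestamp (ns) and the requested size */
    printf("%d: read of %d bytes requested\n", timestamp, arg2);
}
```

A script like this runs via `dtrace -s script.d` and, because the probes are dynamic, adds no cost once it exits.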
Artimon is a scalable metrics collection and analysis framework. It collects metrics called 'variable instances' that have a name, labels, and timestamped values. Metrics can be exported via a Thrift service and stored in distributed systems like Kafka for later analysis using Groovy scripts. Artimon is designed to collect both IT and business metrics and can adapt to collect from third party sources using agents.
The OSI Superboard II was the computer on which I first learned to program back in 1979. Python is why programming remains fun today. In this tale of old meets new, I describe how I have used Python 3 to create a cloud computing service for my still-working Superboard, a problem complicated by it having only 8 KB of RAM and 300-baud cassette tape audio ports for I/O.
This document summarizes the key capabilities of Warp 10, a time series data ingestion, processing, and visualization platform:
1. Warp 10 can ingest high volumes of time series data from sensors and other IoT devices via HTTP, WebSockets, and many collection tools in a performant manner.
2. It provides a feature-rich scripting language called WarpScript that allows users to manipulate, analyze, and transform ingested time series data using over 690 functions and frameworks.
3. Warp 10 includes tools to visualize time series data in real-time through widgets that can display charts, images, and more generated from WarpScript. Dynamic tile widgets also enable building configurable
Anchoring Trust: Rewriting DNS for the Semantic Network with Ruby and Rails (Eleanor McHugh)
This document provides an overview of semantic networking and the Rindr DNS application server. It discusses how semantic networking goes beyond traditional DNS by embedding rich associations and being service-oriented. It also introduces Rindr as a Ruby DNS server that extends DNS with a relational data model and supports identity and access management. Rindr integrates with Rails using BackgroundRB to allow non-blocking network access from Rails.
Device-specific Clang Tooling for Embedded Systems (emBO_Conference)
This document discusses using Clang tooling to refactor raw memory accesses in embedded C code to be type-safe and readable. It involves parsing a CMSIS SVD file to get the device memory map, writing AST matchers to find raw memory accesses, and generating fix-its to refactor the accesses. The tool is implemented as a Clang-tidy check for static analysis and refactoring at compile-time. Challenges include handling cases where register offsets cannot be evaluated statically and designing the tools to work with existing code patterns.
The document provides an overview of installing and using the Network Simulator 2 (NS2). It discusses downloading and extracting NS2, setting up the Linux environment, and understanding the basic NS2 architecture and directory structure. It also covers the differences between OTcl and C++ in NS2, and provides examples of creating a simple agent module in C++ and interfacing it with OTcl. The document includes a case study of building a multimedia application over UDP using NS2 that implements five different encoding and transmission rates.
Ns is a network simulator developed at UC Berkeley and elsewhere that allows modeling of TCP/IP networks and wireless networks using C++ and OTcl. It provides objects for nodes, links, network traffic and wireless channel modeling. The document outlines how to install ns, create basic simulations with nodes and traffic, and extend it for wireless simulations using various protocols.
A short and fast journey through some of the profiling options available in the Ruby 2.x world, including a look at flamegraphs and new ways of tracking memory usage in the MRI.
This document defines options and sets up a simulation to test carrier sense in NS-2. It defines wireless channel, radio propagation, and MAC layer options. It creates 4 nodes with an 802.11 MAC and positions two nodes to have a conversation and the other two nodes some distance away to have another conversation. It generates CBR traffic between the node pairs and runs the simulation for 10 seconds.
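The kind of OTcl setup such a simulation script builds on can be sketched as follows. This is a hedged, wired-topology fragment with illustrative values; the wireless node-config options, 802.11 MAC settings, and node positions from the summary above are omitted:

```tcl
# Illustrative ns-2 fragment: simulator, two nodes, and a CBR flow over UDP.
set ns [new Simulator]
set n0 [$ns node]
set n1 [$ns node]

set udp [new Agent/UDP]
$ns attach-agent $n0 $udp
set sink [new Agent/Null]
$ns attach-agent $n1 $sink
$ns connect $udp $sink

set cbr [new Application/Traffic/CBR]
$cbr set packetSize_ 512
$cbr set interval_ 0.05
$cbr attach-agent $udp

$ns at 0.5 "$cbr start"
$ns at 10.0 "finish"
$ns run
```

The full carrier-sense test additionally configures the wireless channel, propagation model, and MAC via `$ns node-config` before creating the four nodes.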
This document provides an overview of iptables and Linux firewall configuration. It discusses Netfilter hooks and stages, stateless and stateful firewall rules using iptables, logging rules, the tables (filter, nat, mangle, raw) and built-in chains, creating custom chains, using ipsets for constant-time lookups, and useful iptables commands. It also briefly mentions using libnetfilter_queue to divert traffic to userspace applications and provides references for further reading on Linux firewalls and Netfilter.
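A few of these ideas (a stateful rule, a custom chain, and a logging rule) can be sketched as an iptables-restore rule set; the port and chain name are illustrative:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:SSH_CHECK - [0:0]
# Stateful rule: accept packets belonging to established connections
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Divert new SSH connections to a custom chain
-A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j SSH_CHECK
# Log, then accept, in the custom chain
-A SSH_CHECK -j LOG --log-prefix "ssh-new: "
-A SSH_CHECK -j ACCEPT
COMMIT
```

Loading this with `iptables-restore < rules.v4` applies the whole table atomically, which is generally safer than issuing individual `iptables -A` commands.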
This document discusses real-time operating systems (RTOS) and embedded systems. It provides an overview of RTOS concepts like tasks, memory management, timers, I/O, and inter-process communication. It also describes the author's work developing several RTOS projects over the years including Orz Microkernel, RT nanokernel, Jamei RTOS, and CuRT. Examples of using RTOS for applications in areas like industrial automation, wireless sensor networks, and embedded devices are also mentioned.
This presentation from PrismTech's Spectra SDR CTO, Dr Vince Kovarik, presents independent research in the area of hybrid programming languages incorporating traditional object-oriented capabilities integrated with knowledge representation and reasoning enabling the ability for a communications system to introspect its physical and logical structure. The long-term vision of this work is to provide the ability to represent relationships and events as first-class objects in the system thereby providing context and heuristic capabilities.
Veniamin Gvozdikov: Specifics of Using DTrace (Yandex)
We first look at DTrace's capabilities for tracing applications in real time, then move on to the specifics of using it on different operating systems and their limitations. Finally, we present exercises with dynamically created probes for application development in Lua on Tarantool.
Error Control in Multimedia Communications using Wireless Sensor Networks report (Muragesh Kabbinakantimath)
This document summarizes a seminar report that evaluates the performance of different error control techniques for multimedia communications over wireless sensor networks. It describes using the ns-2 simulator along with a video quality analysis tool to implement and test an Enhanced Adaptive FEC algorithm that dynamically adds redundant packets based on network traffic load and wireless channel state in order to improve video delivery quality over wireless networks. The simulation setting involves video, FTP, and exponential traffic flows transmitted over wired and wireless links between nodes including a video server, access point, and wireless receivers.
This document provides an overview of basic commands and functions for constructing, sending, receiving, and analyzing packets using Scapy. It summarizes key Scapy commands for listing available protocols and functions, configuring parameters, building packets by specifying addresses, ports, and layer values, sending and receiving packets on different interfaces, capturing live packets, and fuzzing packet fields. The document is a quick reference for common Scapy tasks.
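Scapy hides the byte-level work of packet construction. As a stdlib-only sketch of what happens underneath (Scapy itself is not used here), the following builds an ICMP echo request by hand with the RFC 1071 Internet checksum:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad to an even length
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)       # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP echo request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum zeroed
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

if __name__ == "__main__":
    print(icmp_echo_request(0x1234, 1).hex())
```

A correctly built packet re-checksums to zero when the algorithm is run over the finished bytes, which makes a handy self-check; in Scapy the equivalent is simply `ICMP(id=0x1234, seq=1)/b"ping"`.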
Specializing the Data Path - Hooking into the Linux Network Stack (Kernel TLV)
Ever needed to add your custom logic into the network stack?
Ever hacked the network stack but wasn't certain you're doing it right?
Shmulik Ladkani talks about various mechanisms for customizing packet processing logic to the network stack's data path.
He covers topics such as packet sockets, netfilter hooks, traffic control actions, and eBPF, discussing their applicable use-cases, advantages, and disadvantages.
Shmulik Ladkani is a Tech Lead at Ravello Systems.
Shmulik started his career at Jungo (acquired by NDS/Cisco) implementing residential gateway software, focusing on embedded Linux, Linux kernel, networking and hardware/software integration.
51966 coffees and billions of forwarded packets later, with millions of homes running his software, Shmulik left his position as Jungo’s lead architect and joined Ravello Systems (acquired by Oracle) as tech lead, developing a virtual data center as a cloud service. He's now focused around virtualization systems, network virtualization and SDN.
Problems of Using TCP in Mobile Applications. Vladimir Kirillov (Anthony Marchenko)
This document discusses capturing network traffic on iOS and Android devices using tools like tcpdump and tcptrace. It provides examples of using tcpdump to capture traffic from an iPhone to a remote host and analyzing the captured traffic file using tcptrace. It also shows how to capture traffic from an Android device by connecting via ADB and running tcpdump directly on the device.
The document discusses Linux device trees and how they are used to describe hardware configurations. Some key points:
- A device tree is a data structure that describes hardware connections and configurations. It allows the same kernel to support different hardware.
- Device trees contain nodes that represent devices, with properties like compatible strings to identify drivers. They describe things like memory maps, interrupts, and bus attachments.
- The kernel uses the device tree passed by the bootloader to identify and initialize hardware. Drivers match based on compatible properties.
- Device tree files with .dts extension can be compiled to binary blobs (.dtb) and overlays (.dtbo) used at boot time to describe hardware.
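A hedged sketch of what such a .dts node looks like; the addresses, node names, and board compatible string are invented for illustration:

```dts
/dts-v1/;

/ {
    compatible = "example,board";       /* matched by the platform code */

    soc {
        #address-cells = <1>;
        #size-cells = <1>;

        serial@10000000 {
            compatible = "ns16550a";    /* selects the driver to bind */
            reg = <0x10000000 0x100>;   /* memory map: base and size */
            interrupts = <10>;          /* interrupt line for the device */
        };
    };
};
```

Compiling with the device tree compiler, e.g. `dtc -I dts -O dtb -o board.dtb board.dts`, produces the binary blob the bootloader passes to the kernel.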
Building Network Functions with eBPF & BCC (Kernel TLV)
eBPF (Extended Berkeley Packet Filter) is an in-kernel virtual machine that allows running user-supplied sandboxed programs inside of the kernel. It is especially well-suited to network programs and it's possible to write programs that filter traffic, classify traffic and perform high-performance custom packet processing.
BCC (BPF Compiler Collection) is a toolkit for creating efficient kernel tracing and manipulation programs. It makes use of eBPF.
BCC provides an end-to-end workflow for developing eBPF programs and supplies Python bindings, making eBPF programs much easier to write.
Together, eBPF and BCC allow you to develop and deploy network functions safely and easily, focusing on your application logic (instead of kernel datapath integration).
In this session, we will introduce eBPF and BCC, explain how to implement a network function using BCC, discuss some real-life use-cases and show a live demonstration of the technology.
About the speaker
Shmulik Ladkani, Chief Technology Officer at Meta Networks,
Long time network veteran and kernel geek.
Shmulik started his career at Jungo (acquired by NDS/Cisco) implementing residential gateway software, focusing on embedded Linux, Linux kernel, networking and hardware/software integration.
Some billions of forwarded packets later, Shmulik left his position as Jungo's lead architect and joined Ravello Systems (acquired by Oracle) as tech lead, developing a virtual data center as a cloud-based service, focusing around virtualization systems, network virtualization and SDN.
Recently he co-founded Meta Networks where he's been busy architecting secure, multi-tenant, large-scale network infrastructure as a cloud-based service.
Benchmarking Oracle I/O Performance with Orion by Alex Gorbachev
Orion is a tool for benchmarking Oracle I/O performance. It generates I/O workloads similar to database patterns and measures I/O performance. Orion stresses the I/O subsystem without significant CPU usage. It collects multiple data points by running tests with varying levels of concurrent small and large I/Os. Orion is useful for infrastructure tuning when requirements are unknown, and for capacity planning when requirements are known.
The document discusses Conversation and ORM in Seam. It summarizes that conversations keep objects linked across pages to maintain state transparently. Conversations propagate state without issues of concurrent access and support back buttons. The lifecycle of conversations is managed through Java annotations like @Begin and @End. Nested conversations maintain a stack without affecting injected values. Page flows in Seam use XML to control conversation propagation through pages using begin, end and redirect rules.
JPoint'15 Mom, I so wish Hibernate for my NoSQL database... (Alexey Zinoviev)
Alexey Zinoviev presented this paper at the JPoint'15 conference (javapoint.ru/talks/#zinoviev).
The paper covers the following topics: Java, JPA, Morphia, Hibernate OGM, Spring Data, Hector, Kundera, NoSQL, Mongo, Cassandra, HBase, Riak.
DTrace is a dynamic tracing framework created by Sun Microsystems to provide operational insight into applications and operating systems with minimal overhead. It offers tens of thousands of probes that carry no performance cost while disabled, and it traces system calls, the kernel, userland processes, and Java Virtual Machines to help tune and troubleshoot performance.
This presentation considers certain specific features of C++11 and additions to STL library (uniform initialization, new containers and methods, move semantics).
Presentation by Taras Protsiv (Software Engineer, GlobalLogic), Kyiv, delivered at GlobalLogic C++ TechTalk in Lviv, September 18, 2014.
More details -
http://www.globallogic.com.ua/press-releases/lviv-cpp-techtalk-coverage
This document introduces DTrace, a debugging tool for the Solaris and OpenSolaris operating systems. It explains what DTrace is, how it works through instrumentation points called "probes" and "providers", and provides examples of its use for measuring application performance and detecting problems. It also describes a graphical interface called CHIME that visualizes the data collected by DTrace.
This document introduces Solaris DTrace, a dynamic tracing framework for the Solaris operating system. It provides an overview of DTrace architecture and components, how to use DTrace probes and actions, and exercises demonstrating how to use DTrace to troubleshoot system calls, memory allocations, and I/O usage.
The document discusses various aspects of template type deduction in C++ including:
1. How the type T and the type of the parameter ParamType are deduced based on whether ParamType is a pointer/reference, universal reference, or neither.
2. How array arguments are handled differently when passed by value versus by reference in templates.
3. How function arguments are treated, with function types decaying to function pointers.
4. The differences between auto type deduction and template type deduction.
5. How decltype can be used to deduce types from expressions while preserving reference-ness.
BigData Tools Master Class for HappyDev'15 (Alexey Zinoviev)
Danila, BigData Tool Master,
assembled a Hadoop cluster,
launched a Dataset;
his scripts in Scala
he ran on Spark constantly
and wrote to HDFSssss
While the talk "When all data becomes big..." is about questions and answers, in this master class we get to tramp around in the BigData developers' own backyard.
We start with the Hadoop classics, feel the pain of a MapReduce job, poke at Pig + Hive, then smoothly waltz over to Spark and write some code in a light, convenient pipeline style.
Who this master class is a good fit for: you can read and understand Java code at least at a junior level, you can write SQL queries, at university you attended at least one lecture on calculus or probability theory, and you have either recently been put, or will soon be put, on a project where you need hands-on skills with the menagerie listed above. Or you are simply curious to see the power of the data crunchers written in Java, and you have in your past an unfortunate experience with NoSQL/SQL as the storage that was responsible for everything, including analytics.
Java BigData Full Stack Development (version 2.0) by Alexey Zinoviev
This document is a presentation by Alexey Zinovyev about Java Big Data full stack development. It discusses Alexey's background and contacts, required skills for Java Big Data development like SQL, Linux, Java and backend skills. It then covers topics like NoSQL databases, Hadoop, Spark, machine learning with MLlib and deep learning. It provides different ways to learn these topics including books, online courses, conferences and mentoring. It encourages learning through hands-on projects and recommends starting with tools like Weka, MongoDB, Hadoop and AWS.
Introduction to DTrace (Dynamic Tracing), written by Brendan Gregg and delivered in 2007. While aimed at a Solaris-based audience, this introduction is still largely relevant today (2012). Since then, DTrace has appeared in other operating systems (Mac OS X, FreeBSD, and is being ported to Linux), and, many user-level providers have been developed to aid tracing of other languages.
[db tech showcase Tokyo 2016] E34: Oracle SE - RAC, HA and Standby are Still ... (Insight Technology, Inc.)
Standard Edition (SE) is alive and well – maybe it had some growing pains over the last year, BUT it is here to stay! SE is a powerful database, albeit with some limitations, whether it runs in a cloud-based environment or on premises. In this session we will discuss Oracle SE and review some of the recent changes and the introduction of the new kid on the block – Standard Edition 2 (SE2). Topics that will be discussed include moving between Editions, High Availability, Disaster Recovery, as well as Backup and Recovery.
How does a company deal with huge volumes of unstructured data in the Web 2.0 era? Thanks to an invitation from Namics, the largest Internet agency in Switzerland, we had the opportunity to speak on this highly topical subject on 09.09.2011 as part of their annual training event. Our architect Christian Gügi spoke on "Big Data in the Enterprise with Hadoop".
About the talk:
All over the world, people interested in NoSQL met during the NoSQL Summer 2010 to read, understand, and discuss papers on the topic, notably the 2006 papers on Google's Chubby, MapReduce & BigTable, but also Cassandra (Facebook), Dynamo (Amazon), Hadoop (Apache), and many more. Since then, the field has broadened: a market is growing, more and more products are establishing themselves, and many companies are taking up the topic. NoSQL is no longer mere buzz. But what does NoSQL mean, when and for what is it used, and which products are available? The talk explores these questions using Hadoop and Lily, drawing the connection to current content management systems.
This document provides an overview of NoSQL databases Cassandra and MongoDB. It begins with an introduction to RDBMS and discusses the need for NoSQL databases in terms of handling big data. Key concepts covered include the CAP theorem, data models of Cassandra and MongoDB, replication, and automatic failover. The document concludes by emphasizing the usefulness of NoSQL for availability and processing unstructured data at scale.
This document provides an overview of kernel debugging on Solaris systems using the modular debugger Mdb and dynamic tracing framework DTrace. It discusses debugging live kernels with Mdb, analyzing system crash dumps with Mdb, and using DTrace to monitor the kernel at runtime by enabling probes published by different providers. The document outlines the key tools, techniques, and challenges involved in kernel debugging and crash analysis on Solaris.
CONFidence 2015: DTrace + OSX = Fun - Andrzej Dyjak (PROIDEA)
This document summarizes a presentation about using DTrace on OS X. It introduces DTrace as a dynamic tracing tool for user and kernel space. It discusses the D programming language used for writing DTrace scripts, including data types, variables, operators, and actions. Example one-liners and scripts are provided to demonstrate syscall tracking, memory allocation snooping, and hit tracing. The presentation outlines some past security work using DTrace and similar dynamic tracing tools. It concludes with proposing future work like more kernel and USDT tracing as well as Python bindings for DTrace.
This document discusses various Linux debugging tools including:
1. Inspecting hardware (SIMD capabilities, caches, firmware, NUMA memory, interrupts) using tools like lstopo, ethtool, lspci, and lshw.
2. Using GDB for debugging with features like breakpoints, disassembly, and core file generation.
3. Tools like strace, ltrace, nm, objdump, and readelf for system call tracing, library call tracing, symbol tables, and object file analysis.
4. Techniques like LD_PRELOAD, ulimit, and perf for custom debugging and performance analysis.
This document provides an overview of using DTrace to instrument systems. It discusses what DTrace is and its uses for performance analysis, debugging, and finding out what is happening in software. It covers DTrace terminology like probes, actions, and predicates, and provides examples of simple DTrace scripts for profiling system calls and measuring latency. It also discusses how Instruments on Mac OS X uses DTrace and provides an example File Activity instrument.
This document summarizes a presentation about tuning parallel code on Solaris. It discusses:
1) Using tools like DTrace, prstat, and vmstat to analyze performance issues like thread scheduling and I/O problems in parallel applications on Solaris.
2) Two examples of using DTrace to analyze thread scheduling and troubleshoot I/O performance problems in a virtualized Windows server.
3) How the examples demonstrated using DTrace to identify unbalanced thread scheduling and discover that a domain controller was disabling disk write caching, slowing performance.
An in-depth overview of the possibilities of SNMP and how to monitor your environment using it.
Learn what you can do with SNMP and what SNMP can do for you within one hour. Most aspects of SNMP are addressed. Getting the information, setting values, but also how the information is presented and the difference between the OID and the MIBs.
In this presentation I’m trying to make SNMP “simple” again and understandable for everybody.
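As illustrative Net-SNMP invocations showing the OID-versus-MIB-name distinction and a set operation (the address and community strings are placeholders):

```shell
# Query the device description by numeric OID (SNMPv2c, community "public")
snmpget -v2c -c public 192.0.2.1 1.3.6.1.2.1.1.1.0

# The same query using the MIB name resolved from SNMPv2-MIB
snmpget -v2c -c public 192.0.2.1 SNMPv2-MIB::sysDescr.0

# Walk the interfaces table to enumerate every interface
snmpwalk -v2c -c public 192.0.2.1 IF-MIB::ifTable

# Set a value (requires a write community configured on the agent)
snmpset -v2c -c private 192.0.2.1 SNMPv2-MIB::sysContact.0 s "ops@example.com"
```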
This document discusses footprinting and information gathering techniques for network security. It defines footprinting as gathering information about potential target systems and networks. Both attacker and defender perspectives are considered. Basic Linux and Windows tools are covered, such as hostname, ifconfig, who, ping, traceroute, dig, nslookup, whois, arp and netstat for gathering system, network topology and user information. Packet sniffers like Wireshark are also introduced for analyzing network traffic. The document emphasizes that even basic tools can provide a lot of useful information to attackers, so defenders should aim to minimize what they reveal.
Performance analysis and troubleshooting using DTrace (Graeme Jenkinson)
The document provides an overview of performance analysis tools like tracing and profiling. It discusses different tracing approaches like print statements, logging frameworks, and debuggers. It introduces DTrace as a dynamic instrumentation tool that allows tracing production systems with zero probe effect. A case study demonstrates using DTrace to analyze NFS latency issues. The document also discusses tracing tools for Linux like ftrace, perf, SystemTap, and eBPF.
ngrep is a network packet sniffer that allows filtering and matching regular expressions against TCP/IP and other protocols at the data link layer. It can be used to debug plaintext protocols, analyze anomalous network activity, and for security/hacking purposes. The document provides examples of ngrep commands and output, demonstrating how it can be used to inspect HTTP headers, filter traffic, and view output in both ASCII and hexadecimal formats.
This document provides an overview of using Wireshark and tcpdump to monitor network traffic. It begins with an introduction to the motivation for network monitoring. It then covers the tools tcpdump, tshark, and Wireshark. Examples are given of using tcpdump and tshark on the command line to capture traffic. The document demonstrates Wireshark's graphical user interface and features for analyzing captured packets, including display filters, following TCP streams, conversations, endpoint statistics, and flow graphs. It concludes with tips for improving Wireshark performance and using grep to further analyze saved packet files.
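A few representative command-line captures of the kind the document demonstrates (interface and file names are placeholders):

```shell
# Capture HTTP traffic on eth0 into a file, without name resolution
tcpdump -i eth0 -n -w web.pcap 'tcp port 80'

# Read the capture back and print the first 10 packets
tcpdump -n -r web.pcap -c 10

# Use tshark with a display filter to show only HTTP GET requests
tshark -r web.pcap -Y 'http.request.method == "GET"'
```

The same web.pcap file can then be opened in Wireshark for stream following and conversation statistics.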
DTrace and SystemTap are dynamic tracing frameworks available for Solaris and Linux respectively. This session will give an overview of the static DTrace probes available in both Drizzle and MySQL and show numerous examples of scripts that utilize these probes. Mixing dynamic and static probes will also be discussed.
This document discusses threads and multithreading. It begins with an introduction to threads and their models, including user-level and kernel-level threads. It then covers multithreading approaches like thread-level parallelism and data-level parallelism. The document discusses context switching on single-core versus multicore systems. It also provides an example of implementing matrix multiplication using threads. Finally, it summarizes a case study on using threads in interactive systems.
The document provides instructions on how to configure an SSH server on Linux, perform footprinting and reconnaissance, scanning tools and techniques, enumeration tools and techniques, password cracking techniques and tools, privilege escalation methods, and keylogging and hidden file techniques. It discusses active and passive footprinting, Nmap port scanning, NetBIOS and SNMP enumeration, Windows password hashes, the sticky keys method for privilege escalation, ActualSpy keylogging software, and hiding files using NTFS alternate data streams. Countermeasures for many of these techniques are also outlined.
DTrace Support in MySQL: Guide to Solving Real-life Performance Problems (MySQLConference)
DTrace is a dynamic tracing framework that can be used to identify performance problems in MySQL. It works by inserting probes into code locations and executing scripts when the probes fire. This allows tracking of events like SQL queries, table locks, and storage engine operations without restarting MySQL. The document provides examples of using static and dynamic probes to trace queries and identify hot database tables.
A brief talk on systems performance for the July 2013 meetup "A Midsummer Night's System", video: http://www.youtube.com/watch?v=P3SGzykDE4Q. This summarizes how systems performance has changed from the 1990's to today. This was the reason for writing a new book on systems performance, to provide a reference that is up to date, covering new tools, technologies, and methodologies.
The document discusses the key components and functions of the Unix system kernel. It describes the kernel as managing system resources like CPUs, memory and I/O devices. The major components are the process control subsystem, file subsystem, and hardware control. The kernel handles process management, device management, file management and provides services like virtual memory and networking. It uses a scheduler to allocate CPU time to processes based on their state and priority level.
Capturing NIC and Kernel TX and RX Timestamps for Packets in Go (ScyllaDB)
Go gives us net.Dial and net.Listen for sending and receiving data at Layer 4. Now you will see how to send and receive raw packets directly to and from the NIC at Layer 1, obtaining timestamps both from timestamping-enabled NICs and from the points where packets enter and leave the Linux kernel. Capturing these timestamps gives better granularity when measuring latency and jitter than relying on time.Now() in userspace, where measurements are subject to additional delay introduced by the OS and Go runtime schedulers.
Similar to A22 Introduction to DTrace by Kyle Hailey (20)
Some might think Docker is for developers only, but this is not really the case. Docker is here to stay and we will only see more of it in the future.
In this session, learn what Docker is and how it works. The session covers core areas such as volumes, then steps it up with a few tips and tricks to help you get the most out of your Docker environment. It dives into a few examples of how to create a database environment within just a few minutes - perfect for testing, development, and possibly even production systems.
Machine Learning explained with Examples
Everybody is talking about machine learning. What is it actually and how can I use it?
In this presentation we will see some examples of solving real-life use cases using machine learning. We will define tasks and see how each task can be addressed with machine learning.
SQL Server 2017 added support for Linux, and as an extension of that it became possible to run SQL Server in Docker and to build highly available configurations with Kubernetes. SQL Server 2019, now close to release, is slated to offer a Big Data Cluster feature that builds on Kubernetes, widening the scope of container use even further.
This session provides the foundational knowledge for getting started with SQL Server containers, along with procedures and samples for trying them out yourself.
Scaling Connections in PostgreSQL - Postgres Bangalore (PGBLR) Meetup-2 (Mydbops)
This presentation, delivered at the Postgres Bangalore (PGBLR) Meetup-2 on June 29th, 2024, dives deep into connection pooling for PostgreSQL databases. Aakash M, a PostgreSQL Tech Lead at Mydbops, explores the challenges of managing numerous connections and explains how connection pooling optimizes performance and resource utilization.
Key Takeaways:
* Understand why connection pooling is essential for high-traffic applications
* Explore various connection poolers available for PostgreSQL, including pgbouncer
* Learn the configuration options and functionalities of pgbouncer
* Discover best practices for monitoring and troubleshooting connection pooling setups
* Gain insights into real-world use cases and considerations for production environments
This presentation is ideal for:
* Database administrators (DBAs)
* Developers working with PostgreSQL
* DevOps engineers
* Anyone interested in optimizing PostgreSQL performance
Contact info@mydbops.com for PostgreSQL Managed, Consulting and Remote DBA Services
Blockchain technology is transforming industries and reshaping the way we conduct business, manage data, and secure transactions. Whether you're new to blockchain or looking to deepen your knowledge, our guidebook, "Blockchain for Dummies", is your ultimate resource.
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In (TrustArc)
Six months into 2024, and it is clear the privacy ecosystem takes no days off!! Regulators continue to implement and enforce new regulations, businesses strive to meet requirements, and technology advances like AI have privacy professionals scratching their heads about managing risk.
What can we learn about the first six months of data privacy trends and events in 2024? How should this inform your privacy program management for the rest of the year?
Join TrustArc, Goodwin, and Snyk privacy experts as they discuss the changes we’ve seen in the first half of 2024 and gain insight into the concrete, actionable steps you can take to up-level your privacy program in the second half of the year.
This webinar will review:
- Key changes to privacy regulations in 2024
- Key themes in privacy and data governance in 2024
- How to maximize your privacy program in the second half of 2024
Coordinate Systems in FME 101 - Webinar Slides (Safe Software)
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo...Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
UiPath Community Day Kraków: Devs4Devs ConferenceUiPathCommunity
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
BT & Neo4j: Knowledge Graphs for Critical Enterprise Systems.pptx.pdfNeo4j
Presented at Gartner Data & Analytics, London Maty 2024. BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems to adopt self-serve and data lead principles they have substantially reduced the number of applications and complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Join this session to hear their story, the lessons they learned along the way and how their future innovation plans include the exploration of uses of EKG + Generative AI.
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
Understanding Insider Security Threats: Types, Examples, Effects, and Mitigat...Bert Blevins
Today’s digitally connected world presents a wide range of security challenges for enterprises. Insider security threats are particularly noteworthy because they have the potential to cause significant harm. Unlike external threats, insider risks originate from within the company, making them more subtle and challenging to identify. This blog aims to provide a comprehensive understanding of insider security threats, including their types, examples, effects, and mitigation techniques.
The Rise of Supernetwork Data Intensive ComputingLarry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
Kief Morris rethinks the infrastructure code delivery lifecycle, advocating for a shift towards composable infrastructure systems. We should shift to designing around deployable components rather than code modules, use more useful levels of abstraction, and drive design and deployment from applications rather than bottom-up, monolithic architecture and delivery.
Best Practices for Effectively Running dbt in Airflow.pdfTatiana Al-Chueyr
As a popular open-source library for analytics engineering, dbt is often used in combination with Airflow. Orchestrating and executing dbt models as DAGs ensures an additional layer of control over tasks, observability, and provides a reliable, scalable environment to run dbt models.
This webinar will cover a step-by-step guide to Cosmos, an open source package from Astronomer that helps you easily run your dbt Core projects as Airflow DAGs and Task Groups, all with just a few lines of code. We’ll walk through:
- Standard ways of running dbt (and when to utilize other methods)
- How Cosmos can be used to run and visualize your dbt projects in Airflow
- Common challenges and how to address them, including performance, dependency conflicts, and more
- How running dbt projects in Airflow helps with cost optimization
Webinar given on 9 July 2024
Implementations of Fused Deposition Modeling in real worldEmerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
How Social Media Hackers Help You to See Your Wife's Message.pdfHackersList
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
論文紹介:A Systematic Survey of Prompt Engineering on Vision-Language Foundation ...Toru Tamaki
Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, Philip Torr "A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models" arXiv2023
https://arxiv.org/abs/2307.12980
2. Agenda
1. Intro … Me … Delphix
2. What is DTrace
3. Why DTrace
– Make the impossible possible
– Low overhead
4. Where DTrace can be used
5. How DTrace is used
– Probes
– Overhead
– Variables
– Resources
3. Kyle Hailey
• OEM 10g Performance Monitoring
• Visual SQL Tuning (VST) in DB Optimizer
• Delphix
5. What is DTrace
• Way of tracing O/S and Programs
– Making the impossible possible
• Your code runs unchanged
– Optionally add static DTrace probes
• No overhead when off
– Turning on dynamically changes code path
• Low overhead when on
– 1000s of events per second cause less than 1% overhead
• Event Driven
– Like event 10046, 10053
7. Where can we trace
• Solaris
• OpenSolaris
• FreeBSD …
• MacOS
• Linux – announced from Oracle
• AIX – a similar tool, “ProbeVue”
8. What can we trace?
Almost anything
– All system calls, e.g. “read”
– All kernel calls, e.g. “biodone”
– All function calls in a program
– All DTrace stable providers
• Example : io:::start
• Predefined stable probes
• Non-stable probe names and arguments can change over time
– Custom probes
• Write custom probes in programs to trace
10. Event Driven
• DTrace code runs when a probe fires in the OS
/usr/sbin/dtrace -n '
#pragma D option quiet
io:::start                                /* probe: when this fires... */
{
  printf(" timestamp %d \n", timestamp);  /* ...take action: print a variable */
}'
• Program runs until canceled
$ sudo ./mydtrace.d
timestamp 8135515300287183
timestamp 8135515300328512
timestamp 8135515300346769
^C
11. What are these
What are these probes and variables?
io:::start                                /* probe */
{
  printf(" timestamp %d \n", timestamp);  /* timestamp is a variable */
}'
– Probes
• kernel and system calls
• program function calls
• predefined by DTrace
– Variables
• predefined in DTrace, like timestamp
• or defined by the user
12. How to list Probes?
Two ways to list probes
1. All System and kernel calls
dtrace –l
2. All functions in a process
dtrace –l pid<pid>
Output has a 4-part, colon-separated name:
Provider:module:function:name
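As a rough illustration (a hypothetical Python helper, not part of DTrace), the 4-part name can be split on colons, with empty fields acting as wildcards the way dtrace treats them:

```python
def parse_probe(spec):
    """Split a DTrace probe spec into (provider, module, function, name).

    Empty fields act as wildcards: "io:::start" matches any module and
    function under the io provider with probe name "start".
    """
    parts = spec.split(":")
    parts += [""] * (4 - len(parts))  # pad short specs to four fields
    return tuple(parts[:4])

print(parse_probe("io:::start"))           # -> ('io', '', '', 'start')
print(parse_probe("syscall::read:entry"))  # -> ('syscall', '', 'read', 'entry')
```

A bare provider name such as "profile" pads out to ('profile', '', '', ''), matching every probe that provider offers.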
13. Kernel vs User Space
Diagram: kernel functions and system calls live in kernel space and are
listed with “dtrace –l”; user processes (e.g. pids 899, 731, 21) live in
user land and are listed with “dtrace –l pid21”.
14. dtrace -l
Provider Module Function Name
$ sudo dtrace –l
ID PROVIDER MODULE FUNCTION NAME
1 dtrace BEGIN
2 dtrace END
3 dtrace ERROR
16 profile tick-1sec
17 fbt klmops lm_find_sysid entry
18 fbt klmops lm_find_sysid return
19 fbt klmops gister_share_locally entry
…
… thousands of lines of output.
16. Providers: defined interfaces
Instead of tracing a kernel function, which could change between O/S
versions, trace a maintained, stable probe
https://wikis.oracle.com/display/DTrace/Providers
– I/O io Provider
– CPU sched Provider
– system calls syscall Provider
– memory vminfo Provider
– user processes pid Provider
– network tcp Provider
Provider definition files in /usr/lib/dtrace, such as io.d, nfs.d, sched.d, tcp.d
17. Example Network: TCP
What if we want to look at TCP receive events?
Probes have 4 part name
Provider:module:function:name
$ dtrace –l | grep tcp | grep receive
tcp:ip:tcp_input_data:receive
Or look at wiki
https://wikis.oracle.com/display/DTrace/tcp+Provider
18. Probe arguments: dtrace –lnv
What are the arguments for the probe function
“tcp:ip:tcp_input_data:receive”
$ dtrace -lvn tcp:ip:tcp_input_data:receive
ID PROVIDER MODULE FUNCTION NAME
7301 tcp ip tcp_input_data receive
Argument Types
args[0]: pktinfo_t *
args[1]: csinfo_t *
args[2]: ipinfo_t *
args[3]: tcpsinfo_t *
args[4]: tcpinfo_t *
What is “tcpsinfo_t”, for example?
19. Probe Argument definitions
Find out what “tcpsinfo_t” is
Two ways:
1. Stable Provider
– https://wikis.oracle.com/display/DTrace/Providers
– In our case there is a TCP stable provider
https://wikis.oracle.com/display/DTrace/tcp+Provider
2. Look at source code
– For OpenSolaris see: http://src.illumos.org
– Otherwise get a copy of the source
• Load into Eclipse or similar for easy search
Let’s look up “tcpsinfo_t”
21. src.illumos.org
tcpsinfo_t – contains many fields, for example:
string tcps_raddr – the remote machine’s IP address
22. Creating a Program
• Find all the machines we are receiving TCP packets from
$ cat tcpreceive.d
#!/usr/sbin/dtrace -s
#pragma D option quiet
tcp:ip:tcp_input_data:receive   /* probe: fires on each TCP receive */
{
  /* action: print the remote address; args[3] is a tcpsinfo_t * */
  printf(" address %s \n", args[3]->tcps_raddr);
}
$ sudo ./tcpreceive.d
address 127.0.0.1
address 172.16.103.58
address 127.0.0.1
address 172.16.100.187
address 172.16.103.58
address 127.0.0.1
^C
23. Using for TCP Window sizes
ip              usend   ssz     send   recd
172.16.103.58   564     16028   564
172.16.103.58   696     16208   132
172.16.103.58   1180    16208   484
172.16.103.58   1664    16208   484
172.16.103.58   2148    16208   484
172.16.103.58   2148    16208          0
172.16.103.58   1452    16208          0
Columns: ip = remote machine, usend = unacknowledged bytes sent,
ssz = send window size (bytes), send = bytes sent, recd = bytes received
If unacknowledged bytes sent goes above the send window,
then transmissions will be delayed
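The delay rule above can be sketched as a tiny Python check (the function name and sample values are illustrative, not from DTrace):

```python
def send_delayed(unacked_bytes, send_window):
    """A sender must wait once unacknowledged bytes fill the send window."""
    return unacked_bytes >= send_window

# Values in the style of the trace above: (unacknowledged bytes, send window)
print(send_delayed(564, 16028))    # -> False: plenty of window left
print(send_delayed(16208, 16208))  # -> True: window full, sends are delayed
```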
24. Review so far
• DTrace – trace the O/S and user programs
• Solaris, and partially on Linux, among others
• Code is event driven; structure:
– probe
– optional filter
– action
• List all probes with “dtrace –l”
• Get probe arguments with “dtrace –lnv probe”
• Look up argument definitions in the source or wiki
25. Variables
1. Globals
• Not thread safe
x = 1;
a[1] = 1;
2. Aggregates
• Thread-safe scalars and arrays
• Special operations: count, avg, quantize
@ct = count();
@sm = sum(value);
@sm[type] = sum(value);
@agg = quantize(value);
3. self->var
• Thread-local variable: self->x = value;
4. this->var
• Lightweight variable scoped to a single probe firing
• this->x = value;
27. What is an aggregate?
• Multi-CPU-safe variable
• Lightweight
• Array or scalar
• Denoted by @
– @var = function(value);
– @var[array_index] = function(value);
• Functions are pre-defined only, such as
– sum()
– count()
– max()
– quantize()
• Print out with “printa”
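As a sketch of what quantize() does with values (assuming power-of-two buckets keyed by the largest power of two at or below each value; Python used here for illustration, not D):

```python
from collections import Counter

def quantize_bucket(value):
    """Largest power of two <= value; non-positive values land in bucket 0."""
    if value <= 0:
        return 0
    bucket = 1
    while bucket * 2 <= value:
        bucket *= 2
    return bucket

# Hypothetical I/O sizes being aggregated, as @agg = quantize(value) would
io_sizes = [600, 1500, 3000, 3500, 9000]
hist = Counter(quantize_bucket(s) for s in io_sizes)
print(sorted(hist.items()))  # -> [(512, 1), (1024, 1), (2048, 2), (8192, 1)]
```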
28. Using Aggregates: count()
Which program writes the most often?
syscall::write:entry {
  @counts[execname] = count();
}
expr       72
sh         291
tee        814
make.bin   2010
Left column: execname (program name); right column: count of writes
https://wikis.oracle.com/display/DTrace/Aggregations
29. Aggregate: quantize()
Get distribution of all I/O sizes
“$ sudo dtrace -l | grep io” returns too many rows, so limit
output to the io provider’s probes with the “-ln” flags:
$ sudo dtrace -ln io:::
ID PROVIDER MODULE FUNCTION NAME
6281 io genunix biodone done
6282 io genunix biowait wait-done
6283 io genunix biowait wait-start
7868 io nfs nfs_bio done
7871 io nfs nfs_bio start
30. Aggregate : quantize()
What if we wanted a distribution of all I/O sizes?
$ sudo dtrace -ln io:::
ID   PROVIDER  MODULE   FUNCTION  NAME
6281 io        genunix  biodone   done        (bio = block I/O)
6282 io        genunix  biowait   wait-done
6283 io        genunix  biowait   wait-start
7868 io        nfs      nfs_bio   done        (NFS module)
7871 io        nfs      nfs_bio   start
$ sudo dtrace -lvn io:genunix:biodone:done
ID   PROVIDER  MODULE   FUNCTION  NAME
6281 io        genunix  biodone   done
Argument Types
args[0]: bufinfo_t *    what is bufinfo_t? sounds like buffer information
args[1]: devinfo_t *
args[2]: fileinfo_t *
34. Aggregate : iosizes.d with execname
Kernel-land I/O
#!/usr/sbin/dtrace -s
#pragma D option quiet
io:::done
{ @sizes[execname] = quantize(args[0]->b_bcount); }   /* b_bcount = size of the I/O */
$ sudo iosizes.d
sched
value  ------------- Distribution -------------  count
  256  |                                         0
  512  |@@@@                                     6
 1024  |@@@@                                     6
 2048  |@@@@@@@@@@@@@@@@@@                       31
 4096  |@@@                                      5
 8192  |@@@@@                                    9
16384  |@@@@                                     6
32768  |                                         0
^C
Only returns I/O for sched – why?
35. Kernel vs User Space
• I/O is done by the kernel, so we only see “sched”
• User I/O goes through a system call (e.g. “read”) into the kernel
Diagram: user programs in user land make system calls into kernel
functions; the actual I/O is issued in the kernel by sched.
36. io:::start : kernel, look for user syscall
• Look for the read system call
$ sudo dtrace -l | grep syscall | grep read
5425 syscall read entry
5426 syscall read return
$ sudo dtrace -lvn syscall::read:entry
ID PROVIDER MODULE FUNCTION NAME
5425 syscall read entry
Argument Types
None
37. User program system call “read”
The read syscall probe has no typed args[] translators:
$ sudo dtrace -lvn syscall::read:entry
Argument Types
None
Use the raw arguments instead:
arg0 = fd
arg1 = *buf
arg2 = size
So instead of args[2]->size, use arg2
38. Aggregate Example: readsizes.d
User-land I/O
#!/usr/sbin/dtrace -s
#pragma D option quiet
syscall::read:entry
{ @read_sizes[execname] = quantize(arg2); }   /* arg2 = size of the read */
java
value ------------- Distribution ------------- count
4096 | 0
8192 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
16384 | 0
cat
value ------------- Distribution ------------- count
16384 | 0
32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
65536 | 0
sshd
value ------------- Distribution ------------- count
8192 | 0
16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 931
32768 | 0
39. Built in variables
• pid – process id
• tid – thread id
• execname
• timestamp – nanoseconds
• cwd – current working directory
• Probes:
– probeprov
– probemod
– probefunc
– probename
40. Built in variable examples
# cat exec.d
#!/usr/sbin/dtrace -s
syscall:::entry                 /* no function name = wildcard, matches all syscalls */
{ @num[execname, probefunc] = count(); }   /* record program name and firing function */
dtrace:::END
{ printa(" %-32s %-32s %@8d\n", @num); }
# ./exec.d
dtrace: script './exec.d' matched 236 probes
sleep     stat64          32
vmtoolsd  pollsys         37
java      pollsys         72
java      lwp_cond_wait   180
Columns: execname, function, count
41. Latency
Latency is crucial to performance analysis.
Latency = delta = end_time – start_time
DTrace probes come in pairs:
• entry, return
• start, done
Take the time at the beginning and at the end, and take the difference.
42. Latency: how long does I/O take?
Latency = delta = end_time – start_time
– start_time: io:::start
– end_time: io:::done
An array holds each I/O’s start time:
• The array needs a unique key for each I/O
• The key can be based on (look these up in the source):
– device = args[0]->b_edev
– block = args[0]->b_blkno
Array: tm_start[device, block] = timestamp
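The same keyed-array pattern can be mimicked in Python (hypothetical helper names; DTrace does this with its associative arrays):

```python
# Associative array keyed by (device, block) holding each I/O's start time
start_times = {}

def io_start(device, block, now_ns):
    start_times[(device, block)] = now_ns

def io_done(device, block, now_ns):
    t0 = start_times.pop((device, block), None)
    if t0 is None:
        return None      # a done event whose start we never saw
    return now_ns - t0   # latency in nanoseconds

io_start(0x12, 4096, 1_000_000)
print(io_done(0x12, 4096, 1_250_000))  # -> 250000
```

Popping the key after use mirrors the D script's tm_start[...] = 0, so each I/O is measured exactly once.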
44. Other ways of keying start/end
1. We used a global array
– tm_start[device, block] = timestamp
– Probably the best general approach
2. Some people use arg0
– tm_start[arg0] = timestamp
– Not as clearly valid
3. Others use
– self->start = timestamp;
– This only works if the same thread that fires the start
probe also fires the end probe
• Doesn’t work for io:::start, io:::done
• Does work for nfs:::start, nfs:::done
45. Tracing vs Profiling
Tracing
• Programs run until ^C
• Can print at every probe firing
• At ^C all unprinted aggregates are printed
Profiling
• Take action every X seconds
• Special probe name:
profile:::tick-1sec
Can profile at Hz or in ns, us, ms, sec:
profile:::tick-1      1 Hz
profile:::tick-1ms    every millisecond
46. Latency: output every second
#!/usr/sbin/dtrace -s
#pragma D option quiet
io:::start
/* key on device and block number */
{ tm_start[args[0]->b_edev, args[0]->b_blkno] = timestamp; }
io:::done
/ tm_start[args[0]->b_edev, args[0]->b_blkno] /
{
  this->delta = timestamp - tm_start[args[0]->b_edev, args[0]->b_blkno];
  @io = quantize(this->delta);
  tm_start[args[0]->b_edev, args[0]->b_blkno] = 0;   /* clear the key */
}
profile:::tick-1sec     /* every second: */
{
  printa(@io);          /* print the quantize distribution */
  trunc(@io);           /* then clear it */
}
47. User Process Tracing
Diagram (as in slide 13): kernel functions and system calls are listed with
“dtrace –l”; user processes in user land (e.g. pids 899, 731, 21) are listed
with “dtrace –l pid21”.
48. Tracing User Processes
• What can you trace in Oracle
– $ ps –ef | grep oracle
– Get a process id
– $ dtrace –l pid[process_id]
– Lists program functions
• What do these functions do?
– Source code is available for MySQL
– You have to guess if you are on Oracle
– Some good blogs out there
49. Overhead
User process tracing (from Brendan Gregg):
• Don't worry too much about pid provider probe cost at < 1,000 events/sec.
• At > 10,000 events/sec, pid provider probe cost will be noticeable.
• At > 100,000 events/sec, pid provider probe cost may be painful.
User process probes cost 2–15 us typically, and can be slower.
Kernel and system calls are cheaper to trace:
• > 1,000,000 events/sec: ~20% impact
For non-CPU-bound workloads the impact may be greater:
• TCP tests showed a 50% throughput drop at 160K events/sec
– 40K interrupts/sec
50. Formatting data
Problem: formatting data is difficult in DTrace.
DTrace has printf and printa (for aggregates) but …
• No floating point
• No “if-then-else”, no “for-loop”
– only the ternary: type = probename == "op-write-done" ? "W" : "R";
• No way to combine elements of aggregate arrays (e.g. sum of
time divided by sum of counts)
Solution: do formatting and calculations in perl
dtrace -n ‘ … ‘ | perl –e ‘ … ‘
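As an illustrative sketch of that post-processing step (shown in Python rather than perl; the input lines are invented for the example), an average can be computed from separate sum and count columns, which printa alone cannot do:

```python
def average_by_key(lines):
    """Each line: '<key> <sum_ns> <count>', e.g. two aggregates printed
    side by side with printa. Floating-point division happens here,
    outside of D, which has no floating point."""
    averages = {}
    for line in lines:
        key, total, count = line.split()
        averages[key] = int(total) / int(count)
    return averages

sample = ["read 5000000 100", "write 900000 30"]
print(average_by_key(sample))  # -> {'read': 50000.0, 'write': 30000.0}
```

In practice this script would read the dtrace output from stdin, exactly as the perl one-liner in the pipeline above does.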
51. Summary
• Structure
#!/usr/sbin/dtrace -s
Name_of_something_to_trace
/ filters /
{ actions }
• List of probes
dtrace -l
• Arguments to probes
dtrace –lnv prov:mod:func:name
• Look up args in the source code: http://src.illumos.org
• Use aggregates (@) – they make DTrace easy
• Google “DTrace”
– Find example programs