- The document discusses various Linux system log files, such as /var/log/messages, /var/log/secure, and /var/log/cron, and provides examples of log entries.
- It also covers log management tools: logrotate for rotating log files and logwatch for summarizing them.
- Networking topics like IP addressing, subnet masking, routing, ARP, and tcpdump for packet sniffing are explained along with examples.
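Since the document walks through concrete log entries and rotation configs, a small sandboxed session may help. The log lines and the logrotate stanza below are invented examples in the same style (real paths such as /var/log/messages are RHEL-style), and logrotate's `-d` flag only simulates rotation:

```shell
# Sketch: inspecting log entries and dry-running logrotate in a scratch dir
workdir=$(mktemp -d)

# Simulated entries in the style of /var/log/messages and /var/log/secure
cat > "$workdir/messages" <<'EOF'
Jan 10 09:32:01 host1 sshd[2101]: Accepted password for alice from 10.0.0.5 port 51022 ssh2
Jan 10 09:35:17 host1 CROND[2188]: (root) CMD (run-parts /etc/cron.hourly)
EOF
grep sshd "$workdir/messages"          # filter authentication-related entries

# A minimal logrotate stanza; -d is a dry run, -s keeps state out of /var
cat > "$workdir/logrotate.conf" <<EOF
$workdir/messages {
    rotate 4
    weekly
    compress
}
EOF
command -v logrotate >/dev/null || exit 0   # skip quietly if not installed
logrotate -d -s "$workdir/state" "$workdir/logrotate.conf"
```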
Kernel Recipes 2017 - Modern Key Management with GPG - Werner Koch
Although GnuPG 2 has been around for nearly 15 years, the old 1.4 version was still in wide use. With Debian and others making 2.1 the default, many interesting things can now be done. In this talk Werner Koch will explain the advantages of modern key algorithms, like ed25519, and why gpg relaxed some of its more paranoid defaults. The new --quick commands of gpg for easily scriptable key management will be described, as well as the new key discovery methods. Finally, hints for integration of gpg into other programs will be given.
Werner Koch, g10code
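The --quick commands mentioned in the abstract can be sketched as follows. This is a minimal, hedged example: the user id is invented, GnuPG 2.1+ is assumed, and the loopback-pinentry/empty-passphrase handling is one common scripting recipe rather than the only one.

```shell
# Sketch of gpg's --quick-* scripting interface (needs GnuPG 2.1+)
command -v gpg >/dev/null || exit 0
gpg --version | head -n1 | grep -q ' 2\.' || exit 0

export GNUPGHOME="$(mktemp -d)"    # throwaway keyring for the demo
chmod 700 "$GNUPGHOME"

# Create a modern ed25519 certification key without interactive prompts
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Example User <user@example.com>' ed25519 cert never

# Add an encryption subkey (cv25519 is the ECDH counterpart of ed25519)
fpr=$(gpg --list-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$fpr" cv25519 encr never

gpg --list-keys
```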
Presentation for a graduate program (Pós-Graduação) in Information Security:
- Sniffing passwords sent in plain text;
- Brute-force attacks against SSH;
- Protection: firewall, IPS, and/or TCP Wrappers;
- Basic hardening in sshd_config;
- RSA/DSA keys for remote access;
- SSH looking up keys in LDAP;
- Why access should be prevented: the fork bomb
The document describes setting up Hadoop in pseudo-distributed mode on a CentOS virtual machine instance. It details steps like creating a user account, installing Java and Hadoop, formatting the namenode, starting HDFS and YARN daemons, creating HDFS directories, and running a sample Pi estimation MapReduce job.
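The steps listed above can be sketched as a command sequence. This is illustrative only: it assumes Java and Hadoop are already installed with HADOOP_HOME set and passwordless SSH configured, the examples-jar path varies by release, and the script exits quietly where Hadoop is absent.

```shell
# Sketch of the pseudo-distributed bring-up described above
command -v hdfs >/dev/null || exit 0    # skip where Hadoop isn't installed

hdfs namenode -format -nonInteractive   # one-time namenode format
start-dfs.sh                            # NameNode, DataNode, SecondaryNameNode
start-yarn.sh                           # ResourceManager, NodeManager
jps                                     # verify the daemons are running

hdfs dfs -mkdir -p /user/"$(whoami)"    # per-user HDFS home directory

# Run the sample Pi estimator: 2 map tasks, 10 samples each
hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10
```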
The document summarizes an analysis of compromised Linux servers. The author detected intrusion after logging in and seeing a previous login from an Italian IP address. Further investigation revealed unauthorized login attempts from other countries. Logs showed the intruder accessed the servers repeatedly over weeks. Processes and open ports indicated the presence of rootkits and backdoors. User accounts for the intruder were also found on the servers.
This document discusses the crash reporting mechanism in Tizen. It describes the crash client, which handles crash signals and generates crash reports. It covers Samsung's crash-work-sdk and Intel's corewatcher crash clients. It also discusses the crash server that receives reports and the CrashDB web interface. Finally, it mentions crash reason location algorithms.
This document provides a summary of basic Linux commands including:
- ls lists files and directories
- cp copies files and directories
- mv moves or renames files and directories
- rm removes files or directories
- touch creates empty files
- cat outputs the contents of files
- mkdir creates directories
- grep searches for patterns in files
- ps displays currently running processes
- top displays active processes and system resources
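The commands above can be tried end to end in a scratch directory; the file and directory names below are arbitrary examples.

```shell
# A quick end-to-end session with the basic commands listed above
cd "$(mktemp -d)"

mkdir project                            # mkdir: create a directory
touch project/notes.txt                  # touch: create an empty file
echo "hello linux" > project/notes.txt
cat project/notes.txt                    # cat: print file contents

cp project/notes.txt project/notes.bak   # cp: copy a file
mv project/notes.bak project/backup.txt  # mv: rename/move a file
ls -l project                            # ls: list with details

grep -n "linux" project/notes.txt        # grep: search for a pattern
ps aux | head -n 3                       # ps: snapshot of running processes
rm project/backup.txt                    # rm: remove a file
```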
This document describes the network configuration and management scripts used in Scientific Linux release 6.1. It discusses the main scripts and files used like /etc/rc.d/init.d/network, /etc/sysconfig/network, and /etc/sysconfig/network-scripts. It provides details on how the network script starts and stops network interfaces, and brings interfaces up at boot time using files in /etc/sysconfig/network-scripts. It also summarizes some of the functions available in the network-functions file.
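As a concrete illustration of what those scripts consume, here is a typical static ifcfg file of the kind found under /etc/sysconfig/network-scripts; the addresses are invented, and the file is written to a temp dir here rather than the live system.

```shell
# An example ifcfg file like those the SL6 network scripts read
dir=$(mktemp -d)
cat > "$dir/ifcfg-eth0" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
EOF

# On a real SL6 host these files are driven through, e.g.:
#   service network restart   # runs /etc/rc.d/init.d/network
#   ifup eth0                 # brings one interface up from its ifcfg file
grep '^DEVICE=' "$dir/ifcfg-eth0"
```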
The document provides instructions for deploying Red Hat OpenStack on Red Hat Enterprise Linux using PackStack. It describes starting two virtual machines for the RDO control and compute nodes. It then discusses using PackStack to install OpenStack on the nodes in an interactive or automated way. Finally, it outlines exploring the OpenStack dashboard and services like Keystone, Glance, Nova, Cinder, and Swift after installation.
The importance of SSHFP and configuring SSHFP records for network devices
SSHFP records provide a secure method of distributing host public keys via DNS. The document discusses:
1) How SSHFP records store fingerprints of host public keys in DNS, allowing clients to validate a host key via DNS lookup rather than trusting the host directly.
2) The process of extracting public keys and generating fingerprints for network devices such as routers (including devices that may not support all SSH commands), and creating the corresponding SSHFP records.
3) How DNSSEC-signed SSHFP records let an SSH client validate a connection without manually accepting host keys, preventing man-in-the-middle attacks.
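The record-generation side can be sketched with OpenSSH's own tooling. The hostname is invented, and a locally generated key stands in for a device's host key (on a router you would extract the public key via the device's CLI instead):

```shell
# Sketch: producing SSHFP resource records with ssh-keygen
command -v ssh-keygen >/dev/null || exit 0
dir=$(mktemp -d)

# Stand-in for the device's host key pair
ssh-keygen -t ed25519 -N '' -f "$dir/ssh_host_ed25519_key" -q

# -r emits DNS SSHFP records for the given hostname
ssh-keygen -r router1.example.net -f "$dir/ssh_host_ed25519_key.pub" \
    > "$dir/sshfp.zone"
cat "$dir/sshfp.zone"   # lines like: router1.example.net IN SSHFP 4 2 <hex>

# Client side, once the records live in a DNSSEC-signed zone:
#   dig +dnssec SSHFP router1.example.net
#   ssh -o VerifyHostKeyDNS=yes router1.example.net
```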
Kernel Recipes 2017: Performance Analysis with BPF
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
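As a quick orientation, here is how a few of the bcc tools from the iovisor project are commonly invoked. The /usr/share/bcc/tools path follows common distro packaging and is an assumption; all of these need root plus a BPF-capable kernel, so the sketch exits quietly where that is unavailable.

```shell
# Typical bcc tool invocations (path and availability vary by distro)
[ "$(id -u)" -eq 0 ] || exit 0
tooldir=/usr/share/bcc/tools
[ -x "$tooldir/execsnoop" ] || exit 0

"$tooldir/execsnoop" -h >/dev/null       # trace new process execution
# "$tooldir/runqlat" 1 5                 # CPU scheduler run-queue latency
# "$tooldir/biolatency" 1 5              # block I/O latency histograms
# "$tooldir/tcplife"                     # TCP session lifespans
# "$tooldir/offwaketime" 5               # merged off-CPU and waker stacks
```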
The document discusses using proxy ARP to allow multiple containers and VMs to share a single network interface on the host machine. It notes some limitations of alternative approaches like Linux bridges, Open vSwitch, and MACVLAN. It also describes some issues with proxy ARP like stealing MAC addresses and requiring static routing. The proposed solution is to use arptables to selectively allow ARP requests from specific IP addresses to prevent MAC address conflicts while enabling network access for containers and VMs.
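One possible shape of that setup is sketched below. The interface names and the 203.0.113.10 guest address are invented examples, the ACCEPT rule is shown on its own purely to illustrate selective filtering, and everything requires root, so the script bails out otherwise.

```shell
# Sketch: proxy ARP plus a selective arptables rule (illustrative only)
[ "$(id -u)" -eq 0 ] || exit 0
command -v arptables >/dev/null || exit 0

# Let the host answer ARP on behalf of routed guest addresses
sysctl -w net.ipv4.conf.eth0.proxy_arp=1

# Static route steering the guest IP to its veth/tap device
ip route add 203.0.113.10/32 dev veth0 2>/dev/null

# Restrict which guest addresses the host will answer ARP for, so it
# does not "steal" MAC traffic for unrelated IPs
arptables -A INPUT -i eth0 --destination-ip 203.0.113.10 -j ACCEPT
arptables -L INPUT
```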
This document provides information on various debugging and profiling tools that can be used for Ruby including:
- lsof to list open files for a process
- strace to trace system calls and signals
- tcpdump to dump network traffic
- Google perftools profiler for CPU profiling
- pprof to analyze profiling data
It also discusses how some of these tools have helped identify specific performance issues with Ruby like excessive calls to sigprocmask and memcpy calls slowing down EventMachine with threads.
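A few of the listed tools can be exercised directly; each step is guarded because strace may be blocked in containers and the privileged tools are shown only as comments. The perftools invocation is a common recipe, not taken from the document.

```shell
# Quick hands-on with the listed debugging tools (guarded)
if command -v lsof >/dev/null; then
    lsof -p $$ | head -n 5            # open files of the current shell
fi
if command -v strace >/dev/null; then
    strace -c -o /tmp/syscalls.txt true   # per-syscall counts for `true`
    head -n 5 /tmp/syscalls.txt
fi
# tcpdump requires root:
#   tcpdump -i any -n port 80 -c 10
# CPU profiling with Google perftools typically wraps the target, e.g.:
#   CPUPROFILE=/tmp/prof.out LD_PRELOAD=libprofiler.so ./my_app
#   pprof --text ./my_app /tmp/prof.out
```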
Source Code of Building Linux IPv6 DNS Server (Complete Source Code)
This document discusses building a Linux IPv6 DNS server. It includes shell scripts and C source code files to perform tasks like checking system details, ensuring root user privileges, cleaning directories, building kernel images, backing up source directories, and adding members to a server via a GUI. The scripts and programs work together to configure, build, and run an IPv6-enabled Linux DNS server.
To learn more, register for the online Hadoop training at WizIQ.
Click here: http://www.wiziq.com/course/21308-hadoop-big-data-training
A complete guide to Hadoop installation that will help you whenever you face problems while installing Hadoop.
Reverse engineering Swisscom's Centro Grande Modem
The document discusses reverse engineering the firmware of Swisscom's Centro Grande modems. It identifies several vulnerabilities found, including a command overflow issue that allows complete control of the device by exceeding the input buffer, and multiple buffer overflow issues that can be exploited to execute code remotely by crafting specially formatted XML files. Details are provided on the exploitation techniques and timeline of coordination with Swisscom to address the vulnerabilities.
Presented at LISA18: https://www.usenix.org/conference/lisa18/presentation/babrou
This is a technical dive into how we used eBPF to solve real-world issues uncovered during an innocent OS upgrade. We'll see how we debugged a 10x CPU increase in Kafka after a Debian upgrade and what lessons we learned. We'll go from high-level effects like increased CPU, to flamegraphs showing where the problem lies, to tracing timers and function calls in the Linux kernel.
The focus is on tools that operational engineers can use to debug performance issues in production. This particular issue happened at Cloudflare on a Kafka cluster doing 100 Gbps of ingress and many multiples of that in egress.
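The flamegraph workflow referred to above commonly looks like the following. This assumes root, perf, and a local clone of the FlameGraph scripts (the clone location is an assumption), so the sketch skips quietly where those are missing.

```shell
# Sketch: system-wide CPU flamegraph recording
[ "$(id -u)" -eq 0 ] || exit 0
command -v perf >/dev/null || exit 0

perf record -F 99 -a -g -- sleep 10     # sample all CPUs at 99 Hz for 10s
perf script > out.stacks                # expand samples with stack traces

# git clone https://github.com/brendangregg/FlameGraph
if [ -d FlameGraph ]; then
    ./FlameGraph/stackcollapse-perf.pl < out.stacks \
        | ./FlameGraph/flamegraph.pl > cpu.svg
fi
```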
This document provides a summary of common Linux network tools including ifconfig, netstat, route, ping, traceroute, iptables, netcat, rinetd, tcpdump, and tcpreplay. It describes what each tool is used for at a high level, such as configuring network interfaces, displaying network status, manipulating network routes, testing network connectivity, implementing firewalls, and capturing/replaying network traffic. The document also provides basic introductions to IPv4 and IPv6 addressing and routing concepts.
Performance tweaks and tools for Linux (Joe Damato)
The document discusses various Linux performance analysis tools including lsof to list open files, strace to trace system calls, tcpdump to dump network traffic, perftools from Google for profiling CPU usage, and a Ruby library called perftools.rb for profiling Ruby code. Examples are provided for using these tools to analyze memory usage, slow queries, Ruby interpreter signals, thread scheduling overhead, and identifying hot spots in Ruby web applications.
Honeypots - November 8th Misec presentation
A low-interaction honeypot was deployed in multiple cloud environments. Various malware samples were captured, including Conficker and other viruses. Analysis of IP addresses and packet captures revealed attempts to exploit Microsoft SQL Server, Windows shares, and RDP ports. The diverse environments allowed collection of malware from around the world.
The document discusses hacking the Swisscom modem by exploiting default credentials to gain access. Upon login, the author runs commands to investigate the system such as viewing configuration files and mapping the internal network. Various system details are discovered including the Linux kernel version and software components.
OSDC 2017 - Werner Fischer - Linux performance profiling and monitoring
Nowadays system administrators have great choices when it comes to Linux performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin.
More than 25 different tools are presented, ranging from well-known tools like strace, iostat, tcpdump, or vmstat to newer features like Linux tracepoints and perf_events. You will also learn which tools can be monitored by Icinga and which monitoring plugins are already available for them.
In the end, the goal is to gather reference points to look at whenever you are faced with performance problems.
Take the chance to close your knowledge gaps and learn how to get the most out of your system.
One of the great challenges of monitoring any large cluster is how much data to collect and how often to collect it. Those responsible for managing the cloud infrastructure want to see everything collected centrally, which places limits on how much and how often. Developers, on the other hand, want to see as much detail as they can, at as high a frequency as reasonable, without impacting overall cloud performance.
To address these seemingly conflicting requirements, we've chosen a hybrid model at HP. Like many others, we have a centralized monitoring system that records a set of key system metrics for all servers at a granularity of 1 minute, but at the same time we do fine-grained local monitoring of hundreds of metrics every second on each server, so when a problem needs more detail than is available centrally, one can go to the servers in question and see exactly what was going on at any specific time.
The tool of choice for this fine-grained monitoring is the open source tool collectl, which additionally has an extensible API. It is through this API that we've developed a Swift monitoring capability that not only captures the number of GETs, PUTs, etc. every second, but, using collectl's colmux utility, can also display these in a top-like format to see exactly what all the object and/or proxy servers are doing in real time.
We've also developed a second capability that allows one to see what the virtual machines are doing on each compute node in terms of CPU, disk, and network traffic. This data can also be displayed in real time with colmux.
This talk will briefly introduce the audience to collectl's capabilities but more importantly show how it's used to augment any existing centralized monitoring infrastructure.
Speakers
Mark Seger
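The collectl/colmux invocations behind the talk above commonly look like this. The flags follow collectl's subsystem-letter convention, the hostnames are invented, and the colmux line is shown as a comment since it needs reachable remote hosts.

```shell
# Illustrative collectl usage (guarded; collectl may not be installed)
command -v collectl >/dev/null || exit 0

collectl -scdn -i1 -c5            # cpu/disk/network, 1s interval, 5 samples
collectl -scdn -c10 -f /var/tmp   # same, recorded to files for later replay

# colmux merges many hosts into one top-like, sortable display:
#   colmux -addr obj1,obj2,proxy1 -command "-scdn -i1" -column 1
```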
AWS re:Invent 2016: Making Every Packet Count (NET404)
Many applications are network I/O bound, including common database-based applications and service-based architectures. But operating systems and applications are often untuned to deliver high performance. This session uncovers hidden issues that lead to low network performance, and shows you how to overcome them to obtain the best network performance possible.
This document provides an overview of common Linux networking commands including ifconfig, route, ping, traceroute, nslookup, arp, dig, netstat, and dmesg. It describes what each command is used for and provides examples of basic syntax and usage. Key points covered include using ifconfig to configure network interfaces, route to view and manage routing tables, and netstat to view network connections and traffic.
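Several of these commands have modern iproute2 equivalents, shown below with the classic net-tools names in comments; the network-dependent ones are left as comments since they need connectivity or privileges.

```shell
# A few of the networking commands in action (guarded)
command -v ip >/dev/null || exit 0

ip addr show        # interfaces and addresses (like ifconfig -a)
ip route show       # kernel routing table     (like route -n)
ip neigh show       # ARP cache                (like arp -an)
command -v ss >/dev/null && ss -tuln   # listening sockets (like netstat -tuln)

# These need network access or elevated privileges:
#   ping -c 3 192.0.2.1
#   traceroute 192.0.2.1
#   dig example.com +short
#   nslookup example.com
#   dmesg | tail
true
```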
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
This document provides an overview of Linux performance monitoring tools including mpstat, top, htop, vmstat, iostat, free, strace, and tcpdump. It discusses what each tool measures and how to use it to observe system performance and diagnose issues. The tools presented provide visibility into CPU usage, memory usage, disk I/O, network traffic, and system call activity which are essential for understanding workload performance on Linux systems.
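A quick tour of those tools might look like the following; each invocation is guarded because minimal installs may lack the procps or sysstat packages, and the privileged tools are shown as comments.

```shell
# One-shot snapshots from the monitoring tools listed above
command -v vmstat >/dev/null && vmstat 1 2         # memory, run queue, CPU
command -v free   >/dev/null && free -m            # memory totals in MB
command -v iostat >/dev/null && iostat -x 1 2      # per-device I/O stats
command -v mpstat >/dev/null && mpstat -P ALL 1 1  # per-CPU utilization
command -v top    >/dev/null && top -b -n 1 | head -n 12  # batch snapshot

# Deeper drill-down (more privileges required):
#   strace -c -p <pid>         # syscall counts for one process
#   tcpdump -i any -c 20 -n    # first 20 packets on any interface
true   # keep the script's exit status clean
```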
Using Libtracecmd to Analyze Your Latency and Performance Troubles
Trying to figure out why your application is responding late can be difficult, especially if it is because of interference from the operating system. This talk will briefly go over how to write a C program that can analyze what in the Linux system is interfering with your application. It will use trace-cmd to enable kernel trace events as well as tracing lock functions, and it will then go over a quick tutorial on how to use libtracecmd to read the created trace.dat file to uncover the cause of interference to your application.
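The recording side of that workflow can be sketched with the trace-cmd CLI; the C analysis program then reads the resulting trace.dat with libtracecmd. The event selection here is an illustrative guess at what a latency investigation might enable, and root plus tracefs are required, so the sketch exits quietly otherwise.

```shell
# Sketch: capturing a trace.dat for later libtracecmd analysis
[ "$(id -u)" -eq 0 ] || exit 0
command -v trace-cmd >/dev/null || exit 0

# Record scheduling and IRQ events while a stand-in workload runs
trace-cmd record -e sched_switch -e sched_wakeup -e irq_handler_entry \
    -o trace.dat sleep 5

trace-cmd report -i trace.dat | head -n 20   # human-readable event dump
```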
Talk for PerconaLive 2016 by Brendan Gregg. Video: https://www.youtube.com/watch?v=CbmEDXq7es0 . "Systems performance provides a different perspective for analysis and tuning, and can help you find performance wins for your databases, applications, and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes six important areas of Linux systems performance in 50 minutes: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events), static tracing (tracepoints), and dynamic tracing (kprobes, uprobes), and much advice about what is and isn't important to learn. This talk is aimed at everyone: DBAs, developers, operations, etc, and in any environment running Linux, bare-metal or the cloud."
The document discusses SUID, SGID, and sticky bits in Linux file permissions. It provides examples of setting these permissions on files and directories and observing the effects. Specifically, it shows:
- Setting the SUID bit on /usr/bin/su allows non-root users to run su with root privileges
- Setting the SGID bit on a directory gives create and write permissions to all members of the directory's group
- Setting the sticky bit on a directory prevents deletion of files within it by non-owners
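The experiments above can be reproduced in a scratch directory, since setting these bits on your own files needs no root; only the su example from the text requires it, so that part stays a comment.

```shell
# Reproducing the SUID/SGID/sticky-bit experiments
cd "$(mktemp -d)"

touch program
chmod 4755 program            # SUID: runs with the file owner's uid
stat -c '%A' program          # -rwsr-xr-x (note the lowercase 's')

mkdir shared
chmod 2775 shared             # SGID on a dir: new files inherit its group
stat -c '%A' shared           # drwxrwsr-x

mkdir dropbox
chmod 1777 dropbox            # sticky: only owners may delete their files
stat -c '%A' dropbox          # drwxrwxrwt

# The SUID-on-su observation from the document (root only):
#   chmod u+s /usr/bin/su
```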
SDN is an emerging network architecture that decouples the network control plane from the forwarding plane. The control plane programmatically configures the forwarding plane using protocols like OpenFlow. This allows for centralized control and programmability of the network. OpenDaylight is an open source SDN controller that programs OpenFlow switches using its northbound REST APIs. It is implemented using OSGi for modularity.
Kernel Recipes 2017 - Modern Key Management with GPG - Werner KochAnne Nicolas
Although GnuPG 2 has been around for nearly 15 years, the old 1.4 version was still in wide use. With Debian and others making 2.1 the default, many interesting things can now be done. In this talk he will explain the advantages of modern key algorithms, like ed25519, and why gpg relaxed some of its more paranoid defaults. The new –quick commands of gpg for easily scriptable key management will be described as well as the new key discovery methods. Finally hints for integration of gpg into other programs will be given.
Werner Koch, g10code
Apresentação na Pós-Graduação em Segurança da Informação:
- Sniffer de senhas em plain text;
- Ataque de brute-force no SSH;
- Proteção: Firewall, IPS e/ou TCP Wrappers;
- Segurança básica no sshd_config;
- Chaves RSA/DSA para acesso remoto;
- SSH buscando chaves no LDAP;
- Porque previnir o acesso: Fork Bomb
The document describes setting up Hadoop in pseudo-distributed mode on a CentOS virtual machine instance. It details steps like creating a user account, installing Java and Hadoop, formatting the namenode, starting HDFS and YARN daemons, creating HDFS directories, and running a sample Pi estimation MapReduce job.
The document summarizes an analysis of compromised Linux servers. The author detected intrusion after logging in and seeing a previous login from an Italian IP address. Further investigation revealed unauthorized login attempts from other countries. Logs showed the intruder accessed the servers repeatedly over weeks. Processes and open ports indicated the presence of rootkits and backdoors. User accounts for the intruder were also found on the servers.
This document discusses the crash reporting mechanism in Tizen. It describes the crash client, which handles crash signals and generates crash reports. It covers Samsung's crash-work-sdk and Intel's corewatcher crash clients. It also discusses the crash server that receives reports and the CrashDB web interface. Finally, it mentions crash reason location algorithms.
This document provides a summary of basic Linux commands including:
- ls lists files and directories
- cp copies files and directories
- mv moves or renames files and directories
- rm removes files or directories
- touch creates empty files
- cat outputs the contents of files
- mkdir creates directories
- grep searches for patterns in files
- ps displays currently running processes
- top displays active processes and system resources
This document describes the network configuration and management scripts used in Scientific Linux release 6.1. It discusses the main scripts and files used like /etc/rc.d/init.d/network, /etc/sysconfig/network, and /etc/sysconfig/network-scripts. It provides details on how the network script starts and stops network interfaces, and brings interfaces up at boot time using files in /etc/sysconfig/network-scripts. It also summarizes some of the functions available in the network-functions file.
The document provides instructions for deploying Red Hat OpenStack on Red Hat Enterprise Linux using PackStack. It describes starting two virtual machines for the RDO control and compute nodes. It then discusses using PackStack to install OpenStack on the nodes in an interactive or automated way. Finally, it outlines exploring the OpenStack dashboard and services like Keystone, Glance, Nova, Cinder, and Swift after installation.
SSHFP records provide a secure method of distributing host public keys via DNS. The document discusses:
1) How SSHFP records store fingerprints of host public keys in DNS to validate connections, rather than distributing keys directly.
2) The process of generating fingerprints from router public keys, creating SSHFP records, and configuring DNS to distribute them securely via DNSSEC.
3) How an SSH client can validate connections to a host by looking up its SSHFP records and fingerprints in DNS, preventing man-in-the-middle attacks.
SSHFP records provide a secure method of distributing host public keys via DNS. The document discusses:
1) How SSHFP records store the fingerprint of a host's public key in DNS, allowing clients to validate the key via DNS lookup rather than trusting the host directly.
2) Instructions for generating SSHFP records for network devices that may not support all SSH commands, including extracting public keys and generating fingerprints.
3) Configuration details for distributing the SSHFP records in DNS and validating them during SSH connections using DNSSEC, avoiding the need to manually accept host keys.
Kernel Recipes 2017: Performance Analysis with BPFBrendan Gregg
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
The document discusses using proxy ARP to allow multiple containers and VMs to share a single network interface on the host machine. It notes some limitations of alternative approaches like Linux bridges, Open vSwitch, and MACVLAN. It also describes some issues with proxy ARP like stealing MAC addresses and requiring static routing. The proposed solution is to use arptables to selectively allow ARP requests from specific IP addresses to prevent MAC address conflicts while enabling network access for containers and VMs.
This document provides information on various debugging and profiling tools that can be used for Ruby including:
- lsof to list open files for a process
- strace to trace system calls and signals
- tcpdump to dump network traffic
- google perftools profiler for CPU profiling
- pprof to analyze profiling data
It also discusses how some of these tools have helped identify specific performance issues with Ruby like excessive calls to sigprocmask and memcpy calls slowing down EventMachine with threads.
Source Code of Building Linux IPv6 DNS Server (Complete Sourcecode)Hari
This document discusses building a Linux IPv6 DNS server. It includes shell scripts and C source code files to perform tasks like checking system details, ensuring root user privileges, cleaning directories, building kernel images, backing up source directories, and adding members to a server via a GUI. The scripts and programs work together to configure, build, and run an IPv6-enabled Linux DNS server.
To know more, Register for Online Hadoop Training at WizIQ.
Click here : http://www.wiziq.com/course/21308-hadoop-big-data-training
A complete guide to Hadoop Installation that will help you when ever you face problems while installing Hadoop !!
The document discusses reverse engineering the firmware of Swisscom's Centro Grande modems. It identifies several vulnerabilities found, including a command overflow issue that allows complete control of the device by exceeding the input buffer, and multiple buffer overflow issues that can be exploited to execute code remotely by crafting specially formatted XML files. Details are provided on the exploitation techniques and timeline of coordination with Swisscom to address the vulnerabilities.
Presented at LISA18: https://www.usenix.org/conference/lisa18/presentation/babrou
This is a technical dive into how we used eBPF to solve real-world issues uncovered during an innocent OS upgrade. We'll see how we debugged 10x CPU increase in Kafka after Debian upgrade and what lessons we learned. We'll get from high-level effects like increased CPU to flamegraphs showing us where the problem lies to tracing timers and functions calls in the Linux kernel.
The focus is on tools what operational engineers can use to debug performance issues in production. This particular issue happened at Cloudflare on a Kafka cluster doing 100Gbps of ingress and many multiple of that egress.
This document provides a summary of common Linux network tools including ifconfig, netstat, route, ping, traceroute, iptables, netcat, rinetd, tcpdump, and tcpreplay. It describes what each tool is used for at a high level, such as configuring network interfaces, displaying network status, manipulating network routes, testing network connectivity, implementing firewalls, and capturing/replaying network traffic. The document also provides basic introductions to IPv4 and IPv6 addressing and routing concepts.
Performance tweaks and tools for Linux (Joe Damato)Ontico
The document discusses various Linux performance analysis tools including lsof to list open files, strace to trace system calls, tcpdump to dump network traffic, perftools from Google for profiling CPU usage, and a Ruby library called perftools.rb for profiling Ruby code. Examples are provided for using these tools to analyze memory usage, slow queries, Ruby interpreter signals, thread scheduling overhead, and identifying hot spots in Ruby web applications.
Honeypots - November 8th Misec presentationTazdrumm3r
A low-interaction honeypot was deployed in multiple cloud environments. Various malware samples were captured, including Conficker and other viruses. Analysis of IP addresses and packet captures revealed attempts to exploit Microsoft SQL Server, Windows shares, and RDP ports. The diverse environments allowed collection of malware from around the world.
The document discusses hacking the Swisscom modem by exploiting default credentials to gain access. Upon login, the author runs commands to investigate the system such as viewing configuration files and mapping the internal network. Various system details are discovered including the Linux kernel version and software components.
OSDC 2017 - Werner Fischer - Linux performance profiling and monitoringNETWAYS
Nowadays system administrators have great choices when it comes down to Linux performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin.
More than 25 different tools are presented. Ranging from well known tools like strace, iostat, tcpdump or vmstat to new features like Linux tracepoints or perf_events. You will also learn which tools can be monitored by Icinga and which monitoring plugins are already available for that.
At the end the goal is to gather reference points to look at, whenever you are faced with performance problems.
Take the chance to close your knowledge gaps and learn how to get the most out of your system.
One of the great challenges of of monitoring any large cluster is how much data to collect and how often to collect it. Those responsible for managing the cloud infrastructure want to see everything collected centrally which places limits on how much and how often. Developers on the other hand want to see as much detail as they can at as high a frequency as reasonable without impacting the overall cloud performance.
To address what seems to be conflicting requirements, we've chosen a hybrid model at HP. Like many others, we have a centralized monitoring system that records a set of key system metrics for all servers at the granularity of 1 minute, but at the same time we do fine-grained local monitoring on each server of hundreds of metrics every second so when there are problems that need more details than are available centrally, one can go to the servers in question to see exactly what was going on at any specific time.
The tool of choice for this fine-grained monitoring is the open source tool collectl, which additionally has an extensible api. It is through this api that we've developed a swift monitoring capability to not only capture the number of gets, put, etc every second, but using collectl's colmux utility, we can also display these in a top-like formact to see exactly what all the object and/or proxy servers are doing in real-time.
We've also developed a second capability that allows one to see what the virtual machines are doing on each compute node in terms of CPU, disk, and network traffic. This data can also be displayed in real time with colmux.
This talk will briefly introduce the audience to collectl's capabilities but more importantly show how it's used to augment any existing centralized monitoring infrastructure.
Speakers
Mark Seger
Many applications are network I/O bound, including common database-based applications and service-based architectures. But operating systems and applications are often untuned to deliver high performance. This session uncovers hidden issues that lead to low network performance, and shows you how to overcome them to obtain the best network performance possible.
This document provides an overview of common Linux networking commands including ifconfig, route, ping, traceroute, nslookup, arp, dig, netstat, and dmesg. It describes what each command is used for and provides examples of basic syntax and usage. Key points covered include using ifconfig to configure network interfaces, route to view and manage routing tables, and netstat to view network connections and traffic.
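As a small illustration of where some of these commands get their data, the routing table printed by `route` or `netstat -r` is read from `/proc/net/route`. A minimal sketch (not from the document) that prints the interface and gateway of the default route — note the addresses in this file are little-endian hex:

```shell
# The kernel routing table behind `route -n` / `netstat -rn`.
# Columns: Iface Destination Gateway Flags ... (addresses are little-endian hex).
# A Destination of 00000000 marks the default route.
awk 'NR > 1 && $2 == "00000000" { print "default route via", $3, "on", $1 }' /proc/net/route
```

On a typical host this prints one line per default route, with the gateway still hex-encoded; `route -n` does the decoding for you.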
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
This document provides an overview of Linux performance monitoring tools including mpstat, top, htop, vmstat, iostat, free, strace, and tcpdump. It discusses what each tool measures and how to use it to observe system performance and diagnose issues. The tools presented provide visibility into CPU usage, memory usage, disk I/O, network traffic, and system call activity which are essential for understanding workload performance on Linux systems.
Using Libtracecmd to Analyze Your Latency and Performance Troubles (ScyllaDB)
Trying to figure out why your application is responding late can be difficult, especially if the cause is interference from the operating system. This talk will briefly go over how to write a C program that can analyze what in the Linux system is interfering with your application. It will use trace-cmd to enable kernel trace events as well as tracing lock functions, and it will then go over a quick tutorial on how to use libtracecmd to read the created trace.dat file and uncover the cause of the interference with your application.
Talk for PerconaLive 2016 by Brendan Gregg. Video: https://www.youtube.com/watch?v=CbmEDXq7es0 . "Systems performance provides a different perspective for analysis and tuning, and can help you find performance wins for your databases, applications, and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes six important areas of Linux systems performance in 50 minutes: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events), static tracing (tracepoints), and dynamic tracing (kprobes, uprobes), and much advice about what is and isn't important to learn. This talk is aimed at everyone: DBAs, developers, operations, etc, and in any environment running Linux, bare-metal or the cloud."
Talk given by Toronto Garcez (aka torontux) during the 3rd edition of the Nullbyte Security Conference, on November 26, 2016.
Abstract:
The goal of the presentation is to demonstrate, in practice and step by step, how to build a botnet out of wi-fi routers and/or embedded devices in general. It shows the development of a command-and-control server and the use of backdoored firmware to turn devices into bots.
The document summarizes Maycon Vitali's presentation on hacking embedded devices. It includes an agenda covering extracting firmware from devices using tools like BusPirate and flashrom, decompressing firmware to view file systems and binaries, emulating binaries using QEMU, reverse engineering code to find vulnerabilities, and details four vulnerabilities discovered in Ubiquiti networking devices designated as CVEs. The presentation aims to demonstrate common weaknesses in embedded device security and how tools can be used to analyze and hack these ubiquitous connected systems.
The document discusses diagnosing and mitigating MySQL performance issues. It describes using various operating system monitoring tools like vmstat, iostat, and top to analyze CPU, memory, disk, and network utilization. It also discusses using MySQL-specific tools like the MySQL command line, mysqladmin, mysqlbinlog, and external tools to diagnose issues like high load, I/O wait, or slow queries by examining metrics like queries, connections, storage engine statistics, and InnoDB logs and data written. The agenda covers identifying system and MySQL-specific bottlenecks by verifying OS metrics and running diagnostics on the database, storage engines, configuration, and queries.
How do you understand what is happening on the server? / Alexander Krizhanovsky (NatSys Lab.) — Ontico
You start a server (a database, a web server, or something of your own) and do not get the desired RPS. You run top and see that 100% of the CPU is being consumed. What next — where is the processor time going? Are there knobs you can turn to improve performance? And if CPU usage is not high, where do you look next?
We will walk through several performance-problem scenarios, review the available performance analysis tools, and work through a methodology for Linux performance optimization, answering the question of which knobs to turn and how.
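The question "where is the CPU time going?" starts with the same counters that top itself reads. A minimal sketch (not from the talk) of computing an aggregate busy percentage from /proc/stat:

```shell
# The first line of /proc/stat aggregates CPU jiffies per state:
# cpu  user nice system idle iowait irq softirq steal ...
read -r _ user nice system idle _ < /proc/stat
total=$((user + nice + system + idle))
busy=$((user + nice + system))
echo "cpu busy: $((100 * busy / total))%"
```

Sampling this twice and differencing the counters gives the utilization over an interval, which is what top and vmstat actually report.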
30. • # vim /etc/logwatch/conf/logwatch.conf
# Output: stdout, mail, or file
Output = mail
# Format: html or text
Format = text
# Mail headers
MailTo = root
MailFrom = Logwatch
# Log range: yesterday, today, all
Range = yesterday
# Detail level: Low, Med, High
Detail = Low
# Service names are listed in /usr/share/logwatch/default.conf/services
Service = All
31. • # logwatch --detail Low --output stdout --service all --range today
• # logwatch --detail Low --output mail --mailto sntc06@gmail.com --service all --range yesterday
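Logwatch is usually driven from cron rather than run by hand; a sketch of an equivalent crontab entry (the 04:00 schedule and the /usr/sbin path here are illustrative and distribution-dependent, not from the slides):

```
# Run logwatch daily at 04:00 and mail yesterday's report to root.
# m  h  dom mon dow  command
  0  4   *   *   *   /usr/sbin/logwatch --output mail --mailto root --detail Low --range yesterday
```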
36. • $ free -h
             total   used   free  shared  buffers  cached
Mem:          7.8G   7.6G   193M    42M     111M    3.3G
-/+ buffers/cache:   4.2G   3.6G
Swap:         2.0G    38M   2.0G
• # vmstat -S MB
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0     38    191    116   3395    0    0     8     6   15    1  2  0 98  0  0
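Both free and vmstat take their memory numbers from /proc/meminfo. A quick sketch of pulling the same figures out directly (field names as reported by current kernels, values in kB):

```shell
# MemTotal/MemFree/Cached in /proc/meminfo are the raw data behind
# free's "total", "free" and "cached" columns.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "total:  $((total_kb / 1024)) MB"
echo "free:   $((free_kb / 1024)) MB"
echo "cached: $((cached_kb / 1024)) MB"
```

This is handy in scripts where you want one number rather than free's full table.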
37. • $ netstat
• -n : show numeric IP addresses instead of resolving names
• -a : show all sockets, both listening and established
• -p : show the PID/program name that owns each port (requires root)
• -r : show the kernel routing table
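What `netstat -lnt` prints ultimately comes from /proc/net/tcp. A minimal sketch (assuming a Linux /proc layout) that lists the listening TCP ports:

```shell
# In /proc/net/tcp the 4th field is the socket state (0A = LISTEN) and the
# 2nd field is local_address as hex IP:PORT; decode the port to decimal.
while read -r _ local _ state _; do
  if [ "$state" = "0A" ]; then
    printf 'LISTEN on port %d\n' "0x${local##*:}"
  fi
done < /proc/net/tcp
```

Unlike netstat -p, this cannot map sockets to processes; that mapping requires walking /proc/*/fd, which is why -p needs root.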