This is the second session of the learning pathway at PASS Summit 2019; it still works as a stand-alone session that teaches you how to write proper Linux Bash scripts.
2017 OWASP SanFran March Meetup - Hacking SQL Server on Scale with PowerShell - Scott Sutherland
This presentation will provide an overview of common SQL Server discovery, privilege escalation, persistence, and data targeting techniques. Techniques will be shared for escalating privileges on SQL Server and associated Active Directory domains. Finally, I’ll show how PowerShell automation can be used to execute the SQL Server attacks at scale with PowerUpSQL. All scripts demonstrated during the presentation are available on GitHub. This should be useful to penetration testers and system administrators trying to gain a better understanding of their SQL Server attack surface and how it can be exploited.
Sections Updated for OWASP Meeting:
- SQL Server Link Crawling
- UNC path injection targets
- Command execution details
Melbourne Chef Meetup: Automating Azure Compliance with InSpec - Matt Ray
June 26, 2017 presentation. With the move to infrastructure as code and continuous integration/continuous delivery pipelines, it looked like releases would become more frequent and less problematic. Then the auditors showed up and made everyone stop what they were doing. How could this have been prevented? What if the audits were part of the process instead of a roadblock? What sort of visibility do we have into the state of our Azure infrastructure compliance? This talk will provide an overview of Chef's open-source InSpec project (https://inspec.io) and how you can build "Compliance as Code" into your Azure-based infrastructure.
Bare-metal performance for Big Data workloads on Docker containers - BlueData, Inc.
In a benchmark study, Intel® compared the performance of Big Data workloads running on a bare-metal deployment versus running in Docker* containers with the BlueData® EPIC™ software platform.
This in-depth study shows that performance ratios for container-based Hadoop workloads on BlueData EPIC are equal to — and in some cases, better than — bare-metal Hadoop. For example, benchmark tests showed that the BlueData EPIC platform demonstrated an average 2.33% performance gain over bare metal, for a configuration with 50 Hadoop compute nodes and 10 terabytes (TB) of data. These performance results were achieved without any modifications to the Hadoop software.
This is a revolutionary milestone, and the result of an ongoing collaboration between Intel and BlueData software engineering teams.
This white paper describes the software and hardware configurations for the benchmark tests, as well as details of the performance benchmark process and results.
Tuning Apache Ambari performance for Big Data at scale with 3000 agents - DataWorks Summit
Apache Ambari manages Hadoop at large scale, and it becomes increasingly difficult for cluster admins to keep the machinery running smoothly as data grows and clusters scale from 30 to 3000 agents. To test at scale, Ambari has a Performance Stack that allows a VM to host as many as 50 Ambari Agents. The simulated stack and 50 Agents per VM can stress-test Ambari Server with the same load as a 3000-node cluster. This talk will cover how to tune the performance of Ambari and MySQL, and share performance benchmarks for features like deploy times, bulk operations, installation of bits, and Rolling & Express Upgrade. Moreover, the speaker will show how to use the Ambari Metrics System and Grafana to plot performance, detect anomalies, and share tips on how to improve performance for a more responsive experience. Lastly, the talk will discuss roadmap features in Ambari 3.0 for improving performance and scale.
End-to-end Troubleshooting Checklist for Microsoft SQL Server - Kevin Kline
Learning how to detect, diagnose and resolve performance problems in SQL Server is tough. Often, years are spent learning how to use the tools and techniques that help you detect when a problem is occurring, diagnose the root-cause of the problem, and then resolve the problem.
In this session, attendees will see demonstrations of the tools and techniques which make difficult troubleshooting scenarios much faster and easier, including:
• XEvents, Profiler/Traces, and PerfMon
• Using Dynamic Management Views (DMVs)
• Advanced Diagnostics Using Wait Stats
• Reading SQL Server execution plans
Every DBA needs to know how to keep their SQL Server in tip-top condition, and you’ll need the skills covered in this session to do it.
This document discusses integrating Docker containers with YARN by introducing a Docker container runtime to the LinuxContainerExecutor in YARN. The DockerContainerRuntime allows YARN to leverage Docker for container lifecycle management and supports features like resource isolation, Linux capabilities, privileged containers, users, networking and images. It remains a work in progress to support additional features around networking, users and images fully.
The document discusses running Hadoop clusters in the cloud and the challenges that presents. It introduces CloudFarmer, a tool that allows defining roles for VMs and dynamically allocating VMs to roles. This allows building agile Hadoop clusters in the cloud that can adapt as needs change without static configurations. CloudFarmer provides a web UI to manage roles and hosts.
Extreme Availability using Oracle 12c Features: Your very last system shutdown? - Toronto-Oracle-Users-Group
This document discusses various Oracle 12c features that can be used to achieve high availability and keep systems available even during planned and unplanned outages. It compares options for handling planned changes like hardware, OS, database upgrades including RAC, RAC One Node, and Data Guard. It also discusses disaster recovery options like storage mirroring, RAC extended clusters, Data Guard, and GoldenGate replication. New features in Oracle 12c like Far Sync instances and cascading standbys are also covered. The document provides a guide to deciphering the necessary components for high availability.
SUSE, Hadoop and Big Data Update. Stephen Mogg, SUSE UK - huguk
This session will give you an update on what SUSE is up to in the Big Data arena. We will take a brief look at SUSE Linux Enterprise Server and why it makes the perfect foundation for your Hadoop Deployment.
Presenter: Dean Richards of Confio Software
If you're a developer or DBA, this presentation will outline a method for determining the best execution plan for a query every time by utilizing SQL Diagramming techniques.
Whether you're a beginner or expert, this approach will save you countless hours tuning a query.
You Will Learn:
* SQL Tuning Methodology
* Response Time Tuning Practices
* How to use SQL Diagramming techniques to tune SQL statements
* How to read execution plans
Scalable Web Architectures: Common Patterns and Approaches - adunne
The document discusses scalable web architectures and common patterns. It covers topics like what scalability means, different types of architectures, load balancing, and how components like application servers, databases, and other services can be scaled horizontally to handle increased traffic and data loads. The presentation is given in 12 parts that define scalability, discuss myths, and describe scaling strategies for application servers, databases, load balancing, and other services.
Low Latency SQL on Hadoop - What's best for your cluster - DataWorks Summit
This document compares different SQL engines for Hadoop including Impala, Hive, Shark, and Presto. It summarizes performance benchmarks showing Impala and Shark to be the fastest. It also describes the architectures of each engine and how they integrate with Hadoop components like YARN. Impala runs queries directly on the cluster while others like Hive rely on Tez to optimize query plans. The document concludes that while Shark can outperform Hive, it lacks vendor support, and Presto is still immature though easy to deploy.
Microsoft SQL Server internals & architecture - Kevin Kline
From noted SQL Server expert and author Kevin Kline - Let’s face it. You can effectively do many IT jobs related to Microsoft SQL Server without knowing the internals of how SQL Server works. Many great developers, DBAs, and designers get their day-to-day work completed on time and with reasonable quality while never really knowing what’s happening behind the scenes. But if you want to take your skills to the next level, it’s critical to know SQL Server’s internal processes and architecture. This session will answer questions like:
- What are the various areas of memory inside of SQL Server?
- How are queries handled behind the scenes?
- What does SQL Server do with procedural code, like functions, procedures, and triggers?
- What happens during checkpoints? Lazywrites?
- How are IOs handled with regard to transaction logs and databases?
- What happens when transaction logs and databases grow or shrink?
This fast-paced session will take you through many aspects of the internal operations of SQL Server and, for those topics we don’t cover, will point you to resources where you can get more information.
Making MySQL highly available using Oracle Grid Infrastructure - Ilmar Kerm
The document discusses using Oracle Grid Infrastructure (GI) to make MySQL highly available. Key points:
- GI provides infrastructure like virtual IPs, storage, and monitoring to enable high availability of databases and applications.
- Custom scripts are used to integrate MySQL instances as GI resources and control their startup, shutdown, and monitoring.
- ACFS file systems provide shared storage for MySQL data directories across nodes.
- Resources like virtual IPs and ACFS file systems have dependencies defined to control startup order.
- Monitoring and control of MySQL instances is done through the GI console and scripts.
eProseed Oracle Open World 2016 debrief - Oracle 12.2.0.1 Database - Marco Gralike
The document provides an overview of new features in Oracle Database 12.2 including multitenant improvements like application containers and proxy PDBs, in-memory database enhancements, new JSON functions and dataview, and Oracle Exadata Express. It also briefly mentions big data integrations and notes that documentation is available online for Exadata Express and new JSON and database features.
Upgrade Without the Headache: Best Practices for Upgrading Hadoop in Production - Cloudera, Inc.
This document discusses best practices for upgrading Hadoop clusters with Cloudera Manager. It describes how the Cloudera Manager upgrade wizard provides a simplified, guided process for upgrading Hadoop distributions with minimal downtime. The upgrade wizard automates many of the manual steps previously required for upgrades and allows rolling upgrades for non-major upgrades when certain conditions are met. Following best practices like testing upgrades in non-production environments and having backup policies in place can help avoid issues during upgrades.
This document discusses the challenges of implementing SQL on Hadoop. It begins by explaining why SQL is useful for Hadoop, as it provides a familiar syntax and separates querying logic from implementation. However, Hadoop's architecture presents challenges for matching the functionality of a traditional data warehouse. Key challenges discussed include random data placement in HDFS, limitations on indexing due to this random placement, difficulties performing joins without data colocation, and limitations of existing "indexing" approaches in systems like Hive. The document explores approaches some systems are taking to address these issues.
SQL Developer isn't just for...developers!
SQL Developer doubles the features available to the end user with the DBA panel, accessible from the View menu.
The document provides tips and tricks for scripting success on Linux. It begins with introducing the speaker and emphasizing that the session will focus on best practices for those already familiar with BASH scripting. It then details various tips across multiple areas: setting the shell and environment variables, adding headers and comments to scripts, validating input, implementing error handling and debugging, leveraging utilities like CRON for scheduling, and ensuring scripts continue running across sessions. The tips are meant to help authors write more readable, maintainable, and reliable scripts.
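The tips summarized above can be sketched in a few lines of Bash. This is a minimal illustration under my own assumptions, not material from the session itself; the function name and the directory checked are invented:

```shell
#!/usr/bin/env bash
# Sketch of several scripting tips: strict mode, a header comment,
# input validation, and a simple error trap.

set -euo pipefail                              # exit on errors, unset vars, pipe failures
trap 'echo "error on line $LINENO" >&2' ERR    # report where a failure happened

# Validate an argument before acting on it; returns non-zero for bad input.
check_dir() {
  local dir="${1:-}"
  [ -n "$dir" ] && [ -d "$dir" ]
}

if check_dir /tmp; then
  echo "ok: /tmp is a directory"
fi
```

Strict mode (`set -euo pipefail`) and an `ERR` trap are widely used conventions for making failures loud instead of silent, which is what most of the readability and reliability tips come down to.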
This document provides an introduction to Linux and shell scripting, outlining what Linux is, who developed it, how to get and install Linux, where it can be used, and an overview of shells and shell scripts. It describes the organization of the tutorial and what makes it different from other resources on the topic. The first chapter introduces basic concepts around Linux and shell scripting.
Gigigo Workshop - Create an iOS Framework, document it and not die trying - Alex Rupérez
The document provides steps for creating an iOS framework, including:
1) Setting up fast iterative builds and infrequent distribution builds for the framework project.
2) Ensuring headers, resources, and setup for third-party developers are easy to use.
3) Configuring the framework project to copy public headers, disable code stripping, and create a universal binary with a run script build phase.
Software Development Automation With Scripting Languages - Ionela
Scripting languages ship with many operating systems, both UNIX/Linux and Windows. These languages were developed for general-purpose process automation and web programming, but you can also apply them to the software development process in many ways. Among them, awk and Perl are well suited to automating and speeding up software development for embedded systems, because many embedded targets have only a cross toolchain, without powerful IDE support for process automation.
The document discusses Autoconf and Automake, which are tools used to automatically generate Makefiles and configure scripts from simple descriptions of a project's build requirements. Autoconf generates configure scripts that can build software on different systems by checking for features like libraries, headers, and functions. Automake generates Makefiles from simple descriptions of build targets and dependencies in Makefile.am files. Together, these tools help developers more easily build portable software projects across a variety of Unix systems.
The makefile is actually an old concept from UNIX development. A makefile encodes the compiling rules for a project and improves development efficiency. In a big project there are many files in different folders; you could write a DOS batch file to build the whole project, but make can decide which steps must be done first, which steps can be skipped, and even more complicated goals. All of this is decided by the rules in the makefile instead of being specified manually.
This document provides an overview of shell scripting in Linux. It discusses why shell scripts are used, defines what a Linux shell is, lists common shell types, and explains how to execute scripts. Basic shell script examples and applications are given. Advantages of shell scripts include quick development time and the ability to automate tasks, while disadvantages are slower execution and a more error-prone nature compared to other languages.
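As a quick illustration of the basics such an overview covers, the snippet below writes a tiny script to disk, marks it executable, and runs it. The file path and message are invented for the demo:

```shell
# Create a minimal shell script, make it executable, and execute it.
cat > /tmp/hello_demo.sh <<'EOF'
#!/bin/sh
echo "hello from a shell script: $1"
EOF

chmod +x /tmp/hello_demo.sh     # grant execute permission
/tmp/hello_demo.sh world        # run it with one argument
```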
This document provides an overview of aspect-oriented programming (AOP) in Perl using the Aspect.pm module. It defines key AOP terminology like join points, pointcuts, advice, and aspects. It describes the features of Aspect.pm like creating pointcuts with strings, regexes, or code references to select subroutines, and writing before, after, and around advice. Examples show creating reusable aspects for logging, profiling, and enforcing design patterns.
Introduction to Ruby on Rails by Rails Core alumnus Thomas Fuchs.
Originally a 3-4 hour tutorial, 150+ slides about Rails, Ruby and the ecosystem around it.
This document provides an introduction to C++ and Java programming languages. It discusses key aspects of C++ like its origins as an extension of C, support for object-oriented programming, keywords, identifiers, comments, and compiler directives. It also covers programming style best practices. For Java, it outlines its origins, characteristics, principles, examples, editions, and the authors. It provides details on Java's portability, security, simplicity, performance and object-oriented nature.
This document provides a summary of best practices for DevOps as outlined by Erik Osterman of Cloud Posse. It discusses practices across organizational structure, software development, infrastructure automation, monitoring and security. Some key best practices include: establishing a makers culture with uninterrupted focus time for developers; using containers for local development environments and tools; strict branch protection and pull requests for code changes; immutable infrastructure with infrastructure as code; actionable alerts and post-mortems for monitoring; and identity-aware access, temporary credentials, and multi-factor authentication for security. The document aims to share proven strategies that help achieve reliability, speed, ease of use and affordability of systems.
Kohana 3.2 documentation is compiled into a single page by Xavi Esteve. It describes Kohana as an open source PHP MVC framework that aims to be swift, secure, and small. The documentation covers what makes Kohana great, contributing to documentation, unofficial documentation, installing Kohana from GitHub or a stable release, and conventions for class names, coding standards, and more.
This document provides an overview of basic Linux administration skills needed to prepare for the Linux Professional Institute's 101 certification exam. It discusses regular expressions for searching files, the Filesystem Hierarchy Standard for organizing directories, and tools for finding files on a Linux system. The tutorial covers using regular expression metacharacters like ., [], *, ^, and $ to match patterns in text. It explains the FHS specifications for separating shareable, unshareable, static, and variable files into directories like /usr, /etc, /var, and /opt. Finally, it introduces finding files using the PATH, locate, find, and whereis commands.
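The file-finding tools mentioned can be sketched briefly; the demo directory and file below are made up, and `command -v` stands in for `which` as the portable way to search `$PATH`:

```shell
# Set up a small directory tree to search.
mkdir -p /tmp/fhs_demo/etc
touch /tmp/fhs_demo/etc/demo.conf

# find: match by file type and name pattern.
find /tmp/fhs_demo -type f -name '*.conf'

# command -v: locate an executable on $PATH.
command -v ls
```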
Joomla! Day Chicago 2011 Presentation - Steven Pignataro
The document provides tips and best practices for developing Joomla sites as part of a team. It discusses using version control like SVN or Git, following coding standards for naming conventions and formatting, and leveraging tools for code review and team development. Additional suggestions are given for debugging, moving sites, testing for injections, and speeding up sites through techniques like removing Mootools and using content delivery networks. The presenter encourages sharing ideas to improve Joomla development.
This presentation shows how to use CMake to probe the platform (operating system/environment) and compiler to identify required or optional language/platform features. A complete example is shown for adapting a program to discovered features.
The document provides an introduction to using the Linux command line. It discusses commands like echo and exit, environment variables, and command sequences. The summary covers setting environment variables, gathering system information using basic Linux commands, and making commands conditional using && and || operators.
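A minimal sketch of those topics, with an invented variable and directory name: `&&` runs the next command only on success, `||` only on failure.

```shell
GREETING="hello"                 # set a variable (export it to make it an env var)
echo "$GREETING world"

true  && echo "runs only if the previous command succeeded"
false || echo "runs only if the previous command failed"

# Combined form: report either outcome of a command.
mkdir -p /tmp/demo_dir && echo "created" || echo "could not create"
```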
This document is an introduction to Linux fundamentals and preparing for the Linux Professional Institute's 101 exam. It covers using the bash shell to navigate directories and view file listings, including the use of absolute and relative paths. It also discusses special directories like ., .., and ~, as well as interpreting permissions and other details from long directory listings using the ls command. The goal is to provide readers with a solid foundation in basic Linux concepts.
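The path basics covered there can be shown in a short session; the demo directory names are invented:

```shell
mkdir -p /tmp/nav_demo/sub
cd /tmp/nav_demo      # absolute path
cd sub                # relative path
cd ..                 # .. refers to the parent directory
ls -ld .              # long listing of . (the current directory entry)
```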
One Click Provisioning With Enterprise Manager 12c - Josh Turner
Enterprise Manager 12c can provision a new WebLogic environment in less than 30 minutes. The presentation included a live demo of provisioning a fully functional WebLogic instance on a clean Oracle Linux install. It covered preparing the host, adding it to Enterprise Manager, provisioning the environment using a gold image, and customizing the provisioning process to automatically install prerequisites and restart services. Behind the scenes, it uses provisioning profiles based on gold images, scripts like preinstall.sh to copy files and install packages, and directives to define the provisioning process.
Red Hat Linux Certified Professional step by step guide - Tech Arkit - Ravi Kumar
Introduction to course outline and certification
Managing files & directories
Basic Commands ls, cp, mkdir, cat, rm and rmdir
Getting help from the command line (whatis, whereis, man, help, info, --help and pinfo)
Editing and viewing of text files (nano, vi and vim)
User administration: creating, modifying and deleting users
Controlling services & daemons
Listing processes
Prioritizing processes
Analyzing & storing logs
Syslog Server & Client configuration
Compressing files & directories (tar and zip)
Copying files & directories to remote servers
Yum & RPM
Search files and directories
File & Directory links (Soft Links and Hard Links)
Managing physical storage
Logical Volume Manager
Access Control List (ACL)
Scheduling of future Linux tasks
SELinux
NFS Server and Client configuration
Firewall
Securing the NFS using kerberos
LDAP client configuration
Setting up LDAP users' home directories
Accessing the network storage using (CIFS) samba
Samba Multiuser Access
Using Virtualized systems
Creating virtual Machines
Automated installation of Redhat Linux
Automated Installation using Kickstart
Linux Booting Process
Root password Recovery
Fixing Partition Errors Using Emergency Mode
Using Regular Expressions with grep
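The grep item in the outline above can be sketched with a few common patterns; the sample file and its contents are invented for the demo:

```shell
# Build a small sample file to search.
printf 'root:x:0:0\nravi:x:1000:1000\n# comment\n' > /tmp/sample.txt

grep '^root' /tmp/sample.txt      # ^ anchors the match to the start of a line
grep ':1000:' /tmp/sample.txt     # plain substring match
grep -c '^[a-z]' /tmp/sample.txt  # count lines starting with a lowercase letter
```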
Understand and use essential tools for handling files, directories, command-line environments, and documentation
Operate running systems, including booting into different run levels, identifying processes, starting and stopping virtual machines, and controlling services
Configure local storage using partitions and logical volumes
Create and configure file systems and file system attributes, such as permissions, encryption, access control lists, and network file systems
Deploy, configure, and maintain systems, including software installation, update, and core services
Manage users and groups, including use of a centralized directory for authentication
Manage security, including basic firewall and SELinux configuration
Configuring static routes, packet filtering, and network address translation
Setting kernel runtime parameters
Configuring an Internet Small Computer System Interface (iSCSI) initiator
Producing and delivering reports on system utilization
Using shell scripting to automate system maintenance tasks
Configuring system logging, including remote logging
Configuring a system to provide networking services, including HTTP/HTTPS, File Transfer Protocol (FTP), network file system (NFS), server message block (SMB), Simple Mail Transfer Protocol (SMTP), secure shell (SSH) and Network Time Protocol (NTP)
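The shell-scripting objective above can be sketched as a small maintenance task that looks for stale files in a log directory. The directory and the 7-day threshold are invented; real systems typically drive this kind of sweep from cron or logrotate:

```shell
logdir="/tmp/demo_logs"
mkdir -p "$logdir"
touch "$logdir/app.log"

# List regular files not modified in the last 7 days (candidates to archive).
find "$logdir" -type f -mtime +7 -print

echo "maintenance sweep of $logdir complete"
```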
Similar to PASS Summit Linux Scripting for the Microsoft Professional (20)
These are my keynote slides from SQL Saturday Oregon 2023 on the intersection of AI, machine learning, and economic challenges as a technical specialist.
This document discusses migrating high IO SQL Server workloads to Azure. It begins by explaining that every company has at least one "whale" workload that requires high CPU, memory and IO. These whales can be challenging to move to the cloud. The document then provides tips on determining if a workload's issue is truly high IO or caused by another factor. It discusses various wait events that may indicate IO problems and tools for monitoring IO performance. Finally, it covers some considerations for IO in the cloud.
This document provides an overview of options for running Oracle solutions on Microsoft Azure infrastructure as a service (IaaS). It discusses architectural considerations for high availability, disaster recovery, storage, licensing, and migrating workloads from Oracle Exadata. Key points covered include using Oracle Data Guard for replication and failover, storage options like Azure NetApp Files that can support Exadata workloads, and identifying databases that are not dependent on Exadata features for lift and shift to Azure IaaS. The document aims to help customers understand how to optimize their use of Oracle solutions when deploying to Azure.
This document provides guidance and best practices for migrating database workloads to infrastructure as a service (IaaS) in Microsoft Azure. It discusses choosing the appropriate virtual machine series and storage options to meet performance needs. The document emphasizes migrating the workload, not the hardware, and using cloud services to simplify management like automated patching and backup snapshots. It also recommends bringing existing monitoring and management tools to the cloud when possible rather than replacing them. The key takeaways are to understand the workload demands, choose optimal IaaS configurations, leverage cloud-enabled tools, and involve database experts when issues arise to address the root cause rather than just adding resources.
This document discusses strategies for managing ADHD as an adult. It begins by describing the three main types of ADHD - inattentive, hyperactive-impulsive, and combined. It then lists some of the biggest challenges of ADHD like executive dysfunction, disorganization, lack of attention, procrastination, and internal preoccupation. The document provides tips and strategies for overcoming each challenge through organization, scheduling, list-making, breaking large tasks into small ones, and using technology tools. It emphasizes finding accommodations that work for the individual and their specific ADHD presentation and challenges.
This document provides guidance and best practices for using Infrastructure as a Service (IaaS) on Microsoft Azure for database workloads. It discusses key differences between IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). The document also covers Azure-specific concepts like virtual machine series, availability zones, storage accounts, and redundancy options to help architects design cloud infrastructures that meet business requirements. Specialized configurations like constrained VMs and ultra disks are also presented along with strategies for ensuring high performance and availability of database workloads on Azure IaaS.
Kellyn Gorman shares her experience living with ADHD and strategies for turning it into a positive. She discusses how ADHD impacted her childhood and how it still presents challenges as an adult. However, with the right tools and understanding of her needs, she is able to find success. She provides tips for organizing, prioritizing tasks, managing distractions, and accessing support. The key is learning about ADHD and how to structure one's environment and routine to play to one's strengths rather than fighting against the condition.
This document discusses overcoming silos when implementing DevOps for a new product at a company. The teams involved were dispersed globally and siloed in their tools and processes. Challenges included isolating workload sizes, choosing a Linux image, and team ownership issues. The solution involved aligning teams, automating deployment with Bash scripts called by Terraform and Azure DevOps, and evolving the automation. This improved communication, shrank the team from 120 people to 7, and increased deployments and profits for the successful project.
This document discusses the future of data and the Azure data ecosystem. It highlights that by 2025 there will be 175 zettabytes of data in the world and the average person will have over 5,000 digital interactions per day. It promotes Azure services like Power BI, Azure Synapse Analytics, Azure Data Factory and Azure Machine Learning for extracting value from data through analytics, visualization and machine learning. The document provides overviews of key Azure data and analytics services and how they fit together in an end-to-end data platform for business intelligence, artificial intelligence and continuous intelligence applications.
This document discusses techniques for optimizing Power BI performance. It recommends tracing queries using DAX Studio to identify slow queries and refresh times. Tracing tools like SQL Profiler and log files can provide insights into issues occurring in the data sources, Power BI layer, and across the network. Focusing on optimization by addressing wait times through a scientific process can help resolve long-term performance problems.
This document discusses connecting Oracle Analytics Cloud (OAC) Essbase data to Microsoft Power BI. It provides an overview of Power BI and OAC, describes various methods for connecting the two including using a REST API and exporting data to Excel or CSV files, and demonstrates some visualization capabilities in Power BI including trends over time. Key lessons learned are that data can be accessed across tools through various connections, analytics concepts are often similar between tools, and while partnerships exist between Microsoft and Oracle, integration between specific products like Power BI and OAC is still limited.
Mentors provide guidance and support, while sponsors use their influence to advocate for and promote a protege's career. Obtaining both mentors and sponsors is important for advancing in one's field and overcoming biases, yet women often have fewer sponsors than men. The document outlines strategies for how women can find and work with sponsors, and how men can act as allies in supporting women. Developing representation of women in technology fields through mentorship and sponsorship can help initiatives become self-sustaining over time.
Kellyn Pot'Vin-Gorman presented on GDPR compliance. Some key points include:
- GDPR went into effect in May 2018 and covers any data belonging to an EU citizen.
- Fines for non-compliance can be up to 4% of annual revenue or €20 million.
- DBAs play a role in identifying critical data, auditing processes, and reporting on compliance.
- An AI tool assessed the privacy policies of 14 major companies and found they all failed to meet GDPR requirements.
- Achieving compliance requires security frameworks, data mapping, encryption, access controls, and dedicated teams.
This document provides tips for optimizing performance in Power BI by focusing on different areas like data sources, the data model, visuals, dashboards, and using trace and log files. Some key recommendations include filtering data early, keeping the data model and queries simple, limiting visual complexity, monitoring resource usage, and leveraging log files to identify specific waits and bottlenecks. An overall approach of focusing on time-based optimization by identifying and addressing the areas contributing most to latency is advocated.
Kellyn Pot’Vin-Gorman discusses DevOps tools for winning agility. She emphasizes that while many organizations automate testing, the DevOps journey is longer and involves additional steps like orchestration between environments, security, collaboration, and establishing a culture of continuous improvement. She also stresses that organizations should not forget about managing their data as part of the DevOps process and advocates for approaches like database virtualization to help enhance DevOps initiatives.
The document discusses various Linux system monitoring utilities including SAR, SADC/SADF, MPSTAT, VMSTAT, and TOP. SAR provides CPU, memory, I/O, network, and other system activity reports. SADC collects system data which SADF can then format and output. MPSTAT reports processor-level statistics. VMSTAT provides virtual memory statistics. TOP displays active tasks and system resources usage.
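These utilities are often run interactively, but each also supports a non-interactive snapshot mode that is useful in scripts. A hedged sketch: the interval/count arguments are the common sysstat/procps conventions and may differ by distribution, so the loop guards against missing tools:

```shell
report=$(
  for tool in vmstat mpstat sar; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "== $tool =="
      "$tool" 1 1 2>/dev/null || true   # one sample over one second
    else
      echo "$tool not installed"
    fi
  done
)
echo "$report"
```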
This document provides an overview of various Linux performance monitoring and tuning tools. It discusses tools such as PIDSTAT, DSTAT, NMON, LSOF, and FUSER which can provide process-level insights into CPU, memory, disk and network usage. It also covers more advanced tracing tools like Trace-cmd, perf-tools, and eBPF which utilize capabilities in the Linux kernel for deep performance analysis and troubleshooting. The document emphasizes that these tools present options for system visibility without heavy overhead or specialized knowledge requirements.
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called openTelemetry, but before diving into the specifics, we’ll start with de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we’ll explore the openTelemetry community; its Special Interest Groups (SIGs), repositories, and how to become not only an end-user, but possibly a contributor.We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of openTelemetry, and know how to take their first steps to an open-source contribution!
Key Takeaways: Open source, vendor neutral instrumentation is an exciting new reality as the industry standardizes on openTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve and in order to achieve ubiquity, the project would benefit from growing our contributor community.
How Social Media Hackers Help You to See Your Wife's Message.pdfHackersList
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-InTrustArc
Six months into 2024, and it is clear the privacy ecosystem takes no days off!! Regulators continue to implement and enforce new regulations, businesses strive to meet requirements, and technology advances like AI have privacy professionals scratching their heads about managing risk.
What can we learn about the first six months of data privacy trends and events in 2024? How should this inform your privacy program management for the rest of the year?
Join TrustArc, Goodwin, and Snyk privacy experts as they discuss the changes we’ve seen in the first half of 2024 and gain insight into the concrete, actionable steps you can take to up-level your privacy program in the second half of the year.
This webinar will review:
- Key changes to privacy regulations in 2024
- Key themes in privacy and data governance in 2024
- How to maximize your privacy program in the second half of 2024
YOUR RELIABLE WEB DESIGN & DEVELOPMENT TEAM — FOR LASTING SUCCESS
WPRiders is a web development company specialized in WordPress and WooCommerce websites and plugins for customers around the world. The company is headquartered in Bucharest, Romania, but our team members are located all over the world. Our customers are primarily from the US and Western Europe, but we have clients from Australia, Canada and other areas as well.
Some facts about WPRiders and why we are one of the best firms around:
More than 700 five-star reviews! You can check them here.
1500 WordPress projects delivered.
We respond 80% faster than other firms! Data provided by Freshdesk.
We’ve been in business since 2015.
We are located in 7 countries and have 22 team members.
With so many projects delivered, our team knows what works and what doesn’t when it comes to WordPress and WooCommerce.
Our team members are:
- highly experienced developers (employees & contractors with 5 -10+ years of experience),
- great designers with an eye for UX/UI with 10+ years of experience
- project managers with development background who speak both tech and non-tech
- QA specialists
- Conversion Rate Optimisation - CRO experts
They are all working together to provide you with the best possible service. We are passionate about WordPress, and we love creating custom solutions that help our clients achieve their goals.
At WPRiders, we are committed to building long-term relationships with our clients. We believe in accountability, in doing the right thing, as well as in transparency and open communication. You can read more about WPRiders on the About us page.
Best Practices for Effectively Running dbt in Airflow.pdfTatiana Al-Chueyr
As a popular open-source library for analytics engineering, dbt is often used in combination with Airflow. Orchestrating and executing dbt models as DAGs ensures an additional layer of control over tasks, observability, and provides a reliable, scalable environment to run dbt models.
This webinar will cover a step-by-step guide to Cosmos, an open source package from Astronomer that helps you easily run your dbt Core projects as Airflow DAGs and Task Groups, all with just a few lines of code. We’ll walk through:
- Standard ways of running dbt (and when to utilize other methods)
- How Cosmos can be used to run and visualize your dbt projects in Airflow
- Common challenges and how to address them, including performance, dependency conflicts, and more
- How running dbt projects in Airflow helps with cost optimization
Webinar given on 9 July 2024
Transcript: Details of description part II: Describing images in practice - T...BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Implementations of Fused Deposition Modeling in real worldEmerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
RPA In Healthcare Benefits, Use Case, Trend And Challenges 2024.pptxSynapseIndia
Your comprehensive guide to RPA in healthcare for 2024. Explore the benefits, use cases, and emerging trends of robotic process automation. Understand the challenges and prepare for the future of healthcare automation
Advanced Techniques for Cyber Security Analysis and Anomaly DetectionBert Blevins
Cybersecurity is a major concern in today's connected digital world. Threats to organizations are constantly evolving and have the potential to compromise sensitive information, disrupt operations, and lead to significant financial losses. Traditional cybersecurity techniques often fall short against modern attackers. Therefore, advanced techniques for cyber security analysis and anomaly detection are essential for protecting digital assets. This blog explores these cutting-edge methods, providing a comprehensive overview of their application and importance.
Comparison Table of DiskWarrior Alternatives.pdfAndrey Yasko
To help you choose the best DiskWarrior alternative, we've compiled a comparison table summarizing the features, pros, cons, and pricing of six alternatives.
Support en anglais diffusé lors de l'événement 100% IA organisé dans les locaux parisiens d'Iguane Solutions, le mardi 2 juillet 2024 :
- Présentation de notre plateforme IA plug and play : ses fonctionnalités avancées, telles que son interface utilisateur intuitive, son copilot puissant et des outils de monitoring performants.
- REX client : Cyril Janssens, CTO d’ easybourse, partage son expérience d’utilisation de notre plateforme IA plug & play.
Best Programming Language for Civil EngineersAwais Yaseen
The integration of programming into civil engineering is transforming the industry. We can design complex infrastructure projects and analyse large datasets. Imagine revolutionizing the way we build our cities and infrastructure, all by the power of coding. Programming skills are no longer just a bonus—they’re a game changer in this era.
Technology is revolutionizing civil engineering by integrating advanced tools and techniques. Programming allows for the automation of repetitive tasks, enhancing the accuracy of designs, simulations, and analyses. With the advent of artificial intelligence and machine learning, engineers can now predict structural behaviors under various conditions, optimize material usage, and improve project planning.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
BT & Neo4j: Knowledge Graphs for Critical Enterprise Systems.pptx.pdfNeo4j
Presented at Gartner Data & Analytics, London Maty 2024. BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems to adopt self-serve and data lead principles they have substantially reduced the number of applications and complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Join this session to hear their story, the lessons they learned along the way and how their future innovation plans include the exploration of uses of EKG + Generative AI.
3. Everything PASS has to offer
• Free online webinar events
• Free 1-day local training events
• Local user groups around the world
• Online special interest user groups
• Business analytics training
• Free online resources
• Newsletters
Get involved and explore at PASS.org
4. Kellyn Gorman
Azure Data Platform Architect, Microsoft
Kellyn has been with Microsoft for over a year, working on the Analytics and AI team in Higher Education, and spends a percentage of her time migrating large Oracle environments to Azure bare metal.
Blogger, Author, Speaker
Kellyn writes two of the top 50 database blogs in the world, known for Oracle and Microsoft technical content, and has written five books, including one on diversity and inclusion. She mentors, sponsors, and speaks in both the Oracle and Microsoft communities as part of giving back to the community.
President, Denver SQL Server User Group
Kellyn has been the president for over two years, continuing to support this incredible user group while traveling the US in her RV.
@DBAKevlar
https://www.linkedin.com/in/kellyngorman/
Kellyn.Gorman@Microsoft.com
6. What This Session Is…
• Teach you the basics of Linux (aka bash) scripting.
• Play along if you want; you just need a Linux machine to log into.
• Scripts and slides will be available after the session at https://github.com/Dbakevlar/Summit2019
• This includes a VI(M) cheat sheet!
7. One Way: Azure Cloud Shell
https://docs.microsoft.com/en-us/azure/cloud-shell/overview
Supports both BASH and PowerShell
Can be used with persistent cloud storage
8. Choose Wisely How You Author Your Scripts
Scripts should be easy to:
• Read
• Edit
• Execute
9. Even if You Don't Already Know How to BASH…
1. The following tips are worth considering in any scripting language, when available.
2. They are good practice for being a good coding team member.
3. They may save your life some day (or keep you from getting killed by your team members…).
10. Writing a Script Should Be…
Like writing a paper. It should include the following:
• An Introduction
• A Body
• A Conclusion
12. Set the Shell to Use
Find out which shell is in use:
which bash
Set it on the very first line of your script:
#!/bin/bash
OR
#!/bin/csh (C Shell)
#!/bin/ksh (Korn Shell)
On many Linux machines, there may be more than one shell installed.
13. What Happens If You Don't?
Normal execution with the shell set in the script:
./<script name>.sh <arg1> <arg2>
Without the shell set in the script:
/bin/bash ./<script name>.sh <arg1> <arg2>
The caller must state which shell to use with the script EVERY TIME.
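A quick way to see the difference: write a tiny script whose first line declares the interpreter, mark it executable, and run it directly. This is a minimal sketch; shebang_demo.sh is a throwaway name invented for the example.

```shell
#!/bin/bash
# Create a two-line script whose shebang names the interpreter.
script=./shebang_demo.sh
printf '%s\n' '#!/bin/bash' 'echo "interpreter: $BASH"' > "$script"
chmod +x "$script"
out=$("$script")   # runs without naming bash, because the shebang does it
rm -f "$script"
echo "$out"
```

Without the shebang, the same direct invocation would be run by whatever shell the caller happens to be in, which is exactly the inconsistency the slide warns about.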
14. Exit When Mistakes Are Made
Add at the top of the script, under the shell declaration, either form:
set -e
set -o errexit
This saves cleanup and makes recovery easier.
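A minimal sketch of errexit: with set -e in effect, the script stops at the first failing command instead of continuing with bad state. The directory name here is invented for the example.

```shell
#!/bin/bash
set -e            # short form
set -o errexit    # long form, same effect

workdir=./errexit_demo
mkdir -p "$workdir"    # if this failed, nothing below would run
created=yes
echo "created $workdir"
rmdir "$workdir"
```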
15. Also Exit on Undeclared Variables, etc.
Require declarations to be set completely or the script exits:
set -o nounset
OR
set -u
Combine errexit, nounset, and pipefail in one line:
set -euo pipefail
Blank values for variables (arguments) can leave a script executing incorrectly, or worse.
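The combined form is often called "strict mode". A small sketch (the variable name is invented):

```shell
#!/bin/bash
set -euo pipefail   # stop on errors, unset variables, and pipeline failures

greeting="hello"
echo "${greeting}"        # fine: the variable is set
# echo "${not_set}"      # under -u this line would abort the script
```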
16. Add Debugging to Your Script
Want to know what went wrong?
#!/bin/bash
set -vnx
Arguments (any or all) to be used:
• -v: Verbose mode; shows all lines as they are parsed by the execution.
• -n: Syntax checking only; the script doesn't actually execute.
• -x: Shell tracing mode; steps through each command and reports any errors.
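The same flags can also be passed at invocation time instead of being hard-coded, e.g. bash -x script.sh for tracing or bash -n script.sh for a syntax check. A sketch of the -n check against a throwaway script (the file name is invented):

```shell
#!/bin/bash
# Write a tiny script, then syntax-check it with -n (no execution).
tmp=./syntax_demo.sh
printf '%s\n' '#!/bin/bash' 'echo hello' > "$tmp"
if bash -n "$tmp"; then result="syntax OK"; else result="syntax error"; fi
rm -f "$tmp"
echo "$result"
```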
17. Set up Aliases and Environment Variables
Create a .profile with a unique extension (.profile_sql19, .profile_net) to support unique applications.
This cuts down on significant variable setting and coding, requiring only one location to update and manage.
Update the .bashrc with global aliases and environment variables that support anything used by the login regularly.
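A few illustrative lines one might place in .bashrc or an application-specific .profile_sql19; the variable and alias names here are made up for the sketch:

```shell
# Environment variables used by every login session (names are examples):
export SCRIPT_HOME="$HOME/scripts"
export SQL19_LOG="$SCRIPT_HOME/sql19.log"
# Aliases for commands used constantly:
alias ll='ls -ltr'
alias sql19log='tail -50 "$SQL19_LOG"'
echo "$SCRIPT_HOME"
```

Scripts can then reference $SCRIPT_HOME and $SQL19_LOG instead of repeating paths, so a change in one file updates every script.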
18. Write a Header for your Script
The # sign can help create a header and signal BASH that it’s for
informational purposes only:
####################################################
# Title: summit_demo.sh #
# Purpose: Summit Demo script for Linux #
# Author: Kellyn Gorman #
# Notes: Script will need three arguments. #
####################################################
19. Four Choices in Passing Environment Variables
1. Declaration hard-coded in script
2. Passed as part of the execution command for the script
3. Interactively read as part of script execution
4. Dynamically generated from other values in the script
20. Choose Wisely
• Hard-coded. Pro: no typos. Con: static, no interaction.
• Passed during execution. Pro: more interactive and the code is more dynamic; great for automation. Con: can suffer typos; no hints of the values required.
• Interactively read as part of execution. Pro: very interactive and can be prompted with hints/options. Con: requires interaction; not made for scheduling or automation.
• Dynamically generated from other values. Pro: happens dynamically, requires little or no input from users; excellent for automation. Con: little or no control over values; dependent on values passed or existing from sources.
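The four approaches can be sketched side by side in one script; the names and the date-based log file are invented for the example:

```shell
#!/bin/bash
# 1. Hard-coded
dir_name="summitdir"
# 2. Passed during execution: ./script.sh mydir  ->  arrives as $1
dir_name="${1:-$dir_name}"   # fall back to the hard-coded value if absent
# 3. Interactively read (commented out so the sketch stays non-interactive)
# read -r -p "Directory name: " dir_name
# 4. Dynamically generated from other values
log_name="${dir_name}/run_$(date +%Y%m%d).log"
echo "$log_name"
```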
21. Start of our Script
summit_demo_int.sh:
#!/bin/bash
#############################################
# Script Name: summit_demo_int.sh #
# Author: Kellyn Gorman #
# Usage: For Linux Scripting Demo #
# Notes: Hard coded values to begin #
#############################################
# Note: bash assignments take no spaces around the =
export dir_name=summitdir
export file_name=summit.lst
export log_name=summit.log
22. How to Use Variables Once Declared in a Script
dir_name becomes $dir_name
file_name becomes $file_name
…and so on…
Any variables in the .bashrc or .profile can be used in scripts; if reused often, they should be set in those files.
23. Move from Hard Coding to Passing Variables at Execution
• Changed at the Introduction of the script
• Makes the script more robust and flexible
• Makes the script reusable
• Can be done multiple ways:
  • Set at session/logon
  • Set as part of script execution
24. Start of our Script
summit_demo_arg.sh:
#!/bin/bash
#############################################
#############################################
export dir_name=<generic dir path>
export file_name=<static file name>
export log_name=<static log name>
Export the dir_name variable and run summit_demo_arg.sh:
export dir_name=summitdir
./summit_demo_arg.sh
25. Update the Script to Interactive Values
export dir_name=<dir path>
export file_name=<generic_file_name>
export log_name=<generic_log_name>
To execute, change the execution to pass a positional parameter: ./<script name> $1
Export the environment variables as part of a profile or your .bashrc, OR add them to the script/session. This is one more way to make the script easier to use.
26. Start of our Script
summit_demo_val.sh:
#!/bin/bash
export acro=$1
#############################################
#############################################
# Directory is already set; just need to set the file/log name.
# Note: the files are unique to the script, but generic to the environment session:
export dir_name=${dir_name}/${acro}<dir_name>
export log_name=${dir_name}/${acro}<log_name>
28. Pass Dynamic Values into our Script, Step 2
# Initialize parameters specified from command line
while getopts ":f:l:" arg; do
  case "${arg}" in
    f)
      filename=${OPTARG}
      ;;
    l)
      logname=${OPTARG}
      ;;
  esac
done
shift $((OPTIND-1))
29. Pass Dynamic Values into our Script, Step 3
if [[ -z "$filename" ]]; then
  echo "Type in the name of your file:"
  read filename
  [[ "${filename:?}" ]]
fi
if [[ -z "$logname" ]]; then
  echo "Type in your logname:"
  read logname
  [[ "${logname:?}" ]]
fi
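The getopts pattern from these two slides can be sketched as one self-contained, non-interactive script; the function wrapper and default values are additions for the example, standing in for the interactive read:

```shell
#!/bin/bash
filename=""
logname=""
parse_args() {
  local OPTIND=1 arg          # keep getopts state local to the function
  while getopts ":f:l:" arg; do
    case "${arg}" in
      f) filename=${OPTARG} ;;
      l) logname=${OPTARG} ;;
    esac
  done
}
parse_args -f summit.lst -l summit.log
filename=${filename:-default.lst}   # fallback instead of an interactive read
echo "file=$filename log=$logname"
```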
30. The script has changed in how we set up our introduction, but the body stays the same…
Save your file ("Esc, :wq" in VIM/VI, which is listed in your VI(M) cheat sheet).
summit_demo_dyn.sh
31. Test Your Environment Variables
• Use the scripts you've created.
• Make sure they are executable: chmod 744 *.sh
• Run each of the scripts.
• Test the variables; are they set? echo $<variable name>
• Do this for each variable in each script.
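Two quick checks worth knowing here: echo prints an empty line if the variable is unset, while the ${var:?} form fails loudly. A sketch using the dir_name variable from the demo scripts:

```shell
#!/bin/bash
export dir_name=summitdir
echo "${dir_name}"                      # prints a blank line if unset
: "${dir_name:?dir_name is not set}"    # exits with a message if unset/empty
```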
33. The Body of the Script
This is where the code performs the work that is the main purpose of the script.
As when writing a paper, this will be the largest section of your script.
• Start simple.
• Add debugging and error-exit options as you build out the body.
*Consider building the script as functions; they are easier to manage and test.
34. Don't Leave Others in the Dark: Comments in Your Scripts
• Write in ideas for enhancements.
• Help explain the logic.
• Use the # sign to signal that a line is a comment:
# This step builds out the database logical objects.
# If the variables aren't entered, the script will exit.
35. Goal of Script
• Create a directory (mkdir)
• Create an empty file (touch)
• Confirm the creation of the directory with a list (ls) and write to a log file (>)
• Confirm the creation of the file with a list (ls) and append to the log file (>>)
36. Create the Body (After the Variables)
export dir_name=summitdir
export file_name=${dir_name}/summit.lst
export log_name=${dir_name}/summit.log
# Create new directory
mkdir ./${dir_name}
# Create empty file
touch ./${file_name}
# Verify that directory and file exist
ls ./${dir_name} > $log_name
ls ./${file_name} >> $log_name
37. Don't Make Users Guess
If there is a step that requires interaction between your script and the user, make it clear what is required to promote success.
This can be done using the echo command, placing the statement inside quotes:
echo "Please enter the name of the user:"
38. Don't Throw Away Your Other Scripts
Just as with .Net, Java, Perl, etc., you can run PowerShell scripts from BASH:
pwsh <script name>
You worked hard on those scripts, or an existing example already exists. Don't reinvent the wheel; consider reusing them by calling them from your BASH script.
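Since pwsh may not be installed everywhere, a guarded call is safer; legacy_report.ps1 is a hypothetical script name for the sketch:

```shell
#!/bin/bash
# Only attempt PowerShell if pwsh is on the PATH.
if command -v pwsh >/dev/null 2>&1; then
  runner="pwsh"
  # pwsh -File ./legacy_report.ps1   # hypothetical reused script
else
  runner="not installed"
fi
echo "PowerShell runner: $runner"
```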
39. Build Out Functions
• Functions allow you to group commands and execute them by function name.
• Write them in any order; execute them in the order you want.
• Place execution at the end of the script to make the steps in the script easy to manage, test, and understand.
40. Example of a Function
# Function Bodies:
function quit {
  exit
}
function hello {
  echo Hello!
}
# Execute Functions:
hello
quit
41. Our Functions
# First function to touch files
function touch_func {
  touch ${file}
  touch ${log_file}
}
# Second function to verify that directory and file exist
function write_log_func {
  pwd ${home}/${dir_name} > $log_file
  ls -ltr ${file} >> $log_file
  ls -ltr ${log_file} >> $log_file
}
42. Function for Last Step, Commented Out!
function clean_func {
  rm -rf $dir_name/*
  rmdir $dir_name
}
Be careful with rm -rf!!
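One extra guard makes this much safer: refuse to run if dir_name is unset or empty, which is the classic way rm -rf goes catastrophically wrong. A sketch with a throwaway directory name:

```shell
#!/bin/bash
function clean_func {
  : "${dir_name:?dir_name must be set before cleanup}"   # abort if unset/empty
  rm -rf "./${dir_name:?}"
}
dir_name="summit_tmp_demo"
mkdir -p "$dir_name"
clean_func
if [ -d "$dir_name" ]; then echo "still there"; else echo "cleaned"; fi
```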
44. If Using Functions
Execute the functions and then:
• Add any last logging steps
• Clean up any files
• Email log files or notifications
45. Complete our Script
# Conclusion
# Execute Functions and clean up
touch_func
write_log_func
#clean_func  # function is commented out to begin!
echo "Script has completed" >> $log_name
46. Always Do Clean Up and Notify Completion
• Remove any files that were created by the script.
• Parse log files for success or errors.
• Report on success or errors.
• Notify that the script has finished, successfully or even if it hasn't.
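The parse-and-report step can be sketched with grep; the log file name and contents are invented for the example:

```shell
#!/bin/bash
log_name=./summit_demo.log
printf 'step one ok\nstep two ok\n' > "$log_name"   # stand-in log contents
if grep -qi "error" "$log_name"; then
  status="FAILED"
else
  status="SUCCESS"
fi
echo "Script finished: $status"
rm -f "$log_name"      # clean up files the script created
```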
47. Executing Functions
Functions are executed as part of the conclusion, which makes it easier to test and work with sections of scripts.
• Check out the full script: summit_demo_func.sh
• Run the script with the CLEAN function commented out!
48. Test Out the Scripts!
• Note the differences.
• Note how more commands would be built in.
• Notice the logging, and how commands can be used as checks.
• More advanced utilities/commands: AWK, GREP, and SED do advanced filtering and searching.
• Use email utilities like sendmail for notifications and alerting.
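A quick taste of the filtering utilities mentioned above, run against inline sample data (the file and its contents are made up for the sketch):

```shell
#!/bin/bash
printf 'alice 42\nbob 7\ncarol 99\n' > scores.txt
found=$(grep -c 'carol' scores.txt)            # grep: count matching lines
high=$(awk '$2 > 10 {print $1}' scores.txt)    # awk: first field where score > 10
renamed=$(sed 's/bob/robert/' scores.txt)      # sed: substitute text
rm -f scores.txt
echo "$high"
```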
49. Summary
• Learn Vi/Vim, Nano, or another editor to make it easy to write scripts in the Linux terminal.
• Use best practices from the beginning so that, as scripts mature, they are already easy for others to read, manage, and use.
• Use dynamic values to make code reusable.
• Use functions to make scripts easier to manage and test.
• Use exit codes for variables and errors to keep scripts from running without the right information.
50. If You Want to Learn More:
• Blog posts: Writing Shell Scripts, Parts 1-4
• PASS Summit session: Empowering the SQL Server Professional with Linux Scripting, Thursday, Nov. 7th, 1:30pm, RM 611-614
• Web tutorials: Linux Shell Scripting
• edX class: Linux Command Line Basics
• Linux scripting class: Linux Tutorials
51. Session Evaluations
Submit by 5pm Friday, November 15th to win prizes.
Three ways to access:
• Download the GuideBook App and search: PASS Summit 2019
• Follow the QR code link on session signage
• Go to PASSsummit.com
[Moderator Part]
This 24 Hours of PASS session is presented by Kellyn Pot'Vin-Gorman. Kellyn is a member of the Oak Table Network and an Idera ACE and Oracle ACE Director alumnus. She is a Data Platform Architect in Power BI with AI in the EdTech group at Microsoft. Kellyn is known for her extensive work with multi-database platforms, DevOps, cloud migrations, virtualization, visualizations, scripting, environment optimization tuning, automation, and architecture design. Kellyn has spoken at numerous technical conferences for Oracle, Big Data, DevOps, testing, and SQL Server. Her blog (http://dbakevlar.com) and social media activity under her handle, DBAKevlar, are well respected for their insight and content.
[move to next slide]
Let the usage be set: there must be two arguments passed in the declaration, or the script exits.
Let the script know that values will be passed in for the arguments in the following section.
Ask our questions of the person executing the script.