Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see the bigger picture beyond stereotypical, last-moment load testing. Load testing has multiple dimensions: environment, load generation, testing approach, life-cycle integration, and feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Performance Testing in the Agile Lifecycle (Lee Barnes)
Traditional large scale end-of-cycle performance tests served enterprises well in the waterfall era. However, as organizations transition to agile development models, many find their tried and true approach to performance testing—and their performance testing resources—becoming somewhat irrelevant. The strict requirements and lengthy durations just don’t fit in the context of an agile cycle. Additionally, investigating system performance at the end of the development effort misses out on the early stage feedback offered by an agile approach. And it’s more important than ever that today’s agile-built systems perform. So how can agile organizations ensure optimum performance of their business critical systems? Lee Barnes discusses why agile teams need to change their thinking about performance from a narrow focus on testing to a broader focus on analysis—from a people, process and technology perspective. Take back techniques for shifting your performance testing/analysis earlier in the development cycle and extracting performance data that is immediately actionable.
Addressing Performance Testing Challenges in Agile - Impetus Webinar (Impetus Technologies)
This document discusses challenges with performance testing in Agile development and proposes solutions. It outlines traditional waterfall performance testing versus an Agile approach. Key aspects of an Agile performance testing process include continuous performance management through automation, performance testing in each sprint, and identifying bottlenecks early. The document provides examples of implementing this approach successfully for bill payment and digital mailbox projects.
Load and Performance Tests in Agile Scrum Framework, SGI 2013 (Subrahmaniam S.R.V)
Load and Performance tests in Agile Scrum framework. Presented in Scrum Gathering India 2013 at Pune on July 26th 2013.
Presented by myself and S. Ravindra
This document discusses performance testing challenges for an agile development team working on a performance critical Java application. It estimates that manually executing performance tests against 9 configurations would take 1+ man-months. To address this, it evaluates options like adding more performance engineers, limiting tests and configurations, or automating performance testing. It recommends automating testing for benefits like running tests continuously and allowing small teams to efficiently test performance. The case study details how the team automated testing using JMeter, built a process integrated with TeamCity, and upgraded infrastructure to support concurrent testing. Automation reduced the testing cycle from over 1 man-month to 4 days, allowing more time for analysis and new testing while finding 17 issues.
Agile software development focuses on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. It advocates adaptive planning, evolutionary development, early delivery, continuous improvement, and encourages rapid and flexible response to change. Key aspects include iterative delivery in short cycles, active user involvement, minimal documentation, pair programming, test-driven development, and continuous refactoring to improve design. Popular agile methods include extreme programming (XP), which emphasizes feedback and refactoring, and Scrum, which utilizes sprints for iterative delivery and emphasizes daily stand-ups and product backlogs.
Load testing involves systematically stressing a system or application to determine its behavior and stability under different load conditions. There are different types of load tests that can be run depending on the test goals. It is important to measure key metrics like response times, failures, and system resource usage during a load test to understand the system's performance limits and how it degrades as load increases. Load test results should indicate the maximum number of users the system can support while meeting performance requirements as well as insights into how the system will perform as usage grows over time.
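The metrics mentioned above can be derived from raw samples with a few lines of code. The following is a minimal sketch with invented sample data, not output from any particular tool; a real test would collect (elapsed seconds, succeeded) pairs from the load generator's log:

```python
# Sketch: deriving key load-test metrics from raw response samples.
# The sample data below is invented for illustration only.
import statistics

samples = [(0.12, True), (0.34, True), (0.28, True),
           (1.90, False), (0.41, True), (0.25, True)]

latencies = [t for t, ok in samples]
failures = sum(1 for _, ok in samples if not ok)

avg = statistics.mean(latencies)
p95 = sorted(latencies)[int(0.95 * len(latencies)) - 1]  # crude percentile
error_rate = failures / len(samples)

print(f"avg={avg:.2f}s p95~{p95:.2f}s errors={error_rate:.0%}")
```

Tracking how these numbers change as the number of virtual users grows is what reveals the point at which the system degrades.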
Incorporating Performance Testing in Agile Development Process (Michael Vax)
This presentation explains different aspects of software performance testing and gives actionable recommendations on how to integrate it into the Agile software development process.
Addressing Performance Testing Challenges in Agile: Process and Tools: Impetu... (Impetus Technologies)
Register at http://lf1.me/ocb/
Impetus webinar on ‘Addressing Performance Testing Challenges in Agile: Process and Tools‘
Date: July 3 (10 am PT / 1 pm ET)
Test automation lessons from WebSphere Application Server (Robbie Minshall)
The document discusses WebSphere testing at IBM. It provides an overview of IBM's:
- Extensive testing resources including over 200 engineers and thousands of systems.
- Daily regression testing of over 1.7 million tests.
- Transition from waterfall to agile development which reduced cycle times and resources needed for testing.
- Use of cloud resources to speed up test deployment and automation.
- Focus on creating meaningful regressions through techniques like integration acceptance tests run continuously on each build.
The document discusses the journey of implementing continuous integration (CI) practices. It describes initial frustrations with ad hoc builds and lack of standards. A council was formed including managers and developers to address threats, opportunities and plan implementation. Automation tools were adopted, including Cruisecontrol, PHPUnit, phpDocumentor, PHP_Codesniffer, and others to enable automated builds, testing, documentation and metrics. Jenkins was later adopted for its improved installation, configuration and support for multiple languages. SonarQube was also used for continuous analysis and quality management. Implementing a CI culture involved adopting development models, scaling the build process, code reviews and improving communication.
Performance is a key aspect when developing an application, but for developers, production performance usually is a black box. When production problems arise, a lack of insight into log files and performance metrics forces us to reproduce issues locally before we can start to tackle the root cause. Using real world examples, we show how a unified performance management platform helps teams across the lifecycle to monitor applications, detect problems early on, and collect data that enables developers to efficiently solve problems.
The document discusses context-driven performance testing. It advocates for early performance testing using exploratory and continuous testing approaches in agile development. Testing should be done at the component level using various environments like cloud, lab, and production. Load can be generated through record and playback, programming, or using production workloads. Defining a performance testing strategy involves determining risks, appropriate tests, timing, and processes based on the project context. The strategy is part of an overall performance engineering approach.
Load and Performance Testing for J2EE - Testing, monitoring and reporting usi... (Alexandru Ersenie)
A presentation of how load and performance testing can be done in the J2EE world using open source tools
You will find things like Performance Basics (scope, metrics, factors on performance, generating load, performance reports), Monitoring (Monitoring types, active and reactive monitoring, CPU, Garbage Collection monitoring, Heap and other monitoring) and Tools (open source tools for monitoring, reporting and analysing)
Top Ten Secret Weapons For Agile Performance Testing (Andriy Melnyk)
This document outlines top secret weapons for agile performance testing. It discusses making performance explicit, having performance testers work as part of development teams, driving performance tests with customer requirements, taking a disciplined scientific approach to analyzing test results, starting performance testing early in projects, automating performance test workflows, and getting frequent feedback to iteratively improve.
SDLC is a framework defining the tasks performed at each step of the software or system development process. It aims to produce a high-quality system that meets or exceeds customer expectations, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.
This presentation covers the different stages of software development:
Life-Cycle Phases
Engineering and Production Stages
Inception Phase
Elaboration Phase
Construction Phase
Transition Phase
Artifacts of the Process
The Artifact Sets
Management Artifacts
Engineering Artifacts
Pragmatic Artifacts
Model-based software Architectures
Architecture: A Management Perspective
Architecture: A Technical Perspective
Workflows of the Process
Software Process Workflows
Iteration Workflows
Checkpoints of the Process
Major Milestones
Minor Milestones
Periodic Status Assessments
The document provides an overview of key concepts in software testing and quality assurance, including the quality revolution, definitions of software quality factors, the roles of verification and validation, and differences between errors, faults, and defects. It also summarizes common testing objectives, the concept of a test case, issues around complete testing, different testing levels from unit to system, and activities involved in the testing process.
Shift-left shift-right performance testing for superior end-user, by Arun Dutta (Software Testing Board)
This document discusses shift-left and shift-right performance testing. It defines shift-left testing as starting early in the software development lifecycle from requirements gathering, while shift-right testing refers to testing late, including in production. Comprehensive continuous performance testing covers both approaches from requirements through post-deployment. This helps deliver higher quality software faster by getting performance feedback earlier and monitoring in production.
This document discusses software engineering processes and quality. It states that for a quality software product, both the quality of the product itself and the quality of the software process are important. It also notes that fixing problems later in the development process costs significantly more than earlier phases, so more attention should be paid early on. The document then summarizes Boehm's "Industrial Software Metrics Top 10 List" and discusses Pareto analysis and principles of software maintenance and processes.
This document provides an overview of performance testing. It defines performance testing and how it differs from other types of testing. It then describes various types of performance tests like end-to-end testing, component testing, load testing, and mobile testing. It also discusses performance test assets, the test process, tool selection, best practices, and recommended resources. The overall purpose is to introduce the topic of performance testing and provide guidance on how to approach it.
This document discusses performance assurance for packaged applications like Oracle Enterprise Performance Management. It outlines key steps for performance assurance including defining requirements, designing for best practices, verifying performance during development, testing, and monitoring production. Performance testing is recommended to mitigate risks, though it requires realistic loads and careful scripting. A top-down approach is advocated for performance troubleshooting, examining hardware, configuration, design and logs before suspecting product issues. Examples of common performance problems and their solutions are also provided.
Tools of the Trade: Load Testing - Ignite session at WebPerfDays NY 14 (Alexander Podelko)
Tools of the Trade: Load Testing - an Ignite session at WebPerfDays NY 2014. Some consideration about load testing and selecting load testing tools - as much as could be squeezed into 5 min / 20 slides.
In this presentation which was delivered to testers in Manchester, I help would-be performance testers to get started in performance testing. Drawing on my experiences as a performance tester and test manager, I explain the principles of performance testing and highlight some of the pitfalls.
- JMeter is an open source load testing tool that can test web applications and other services. It uses virtual users to simulate real user load on a system.
- JMeter tests are prepared by recording HTTP requests using a proxy server. Tests are organized into thread groups and loops to simulate different user behaviors and loads.
- Tests can be made generic by using variables and default values so the same tests can be run against different environments. Assertions are added to validate responses.
- Tests are run in non-GUI mode for load testing and can be distributed across multiple machines for high user loads. Test results are analyzed using aggregated graphs and result trees.
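The virtual-user model described above (thread groups of users looping over requests) can be sketched in plain Python. This is an illustration of the concept only, not JMeter itself; the request function here is a hypothetical stand-in that simulates server latency rather than calling a real server:

```python
# Sketch of the virtual-user model: a "thread group" of N workers,
# each looping over a scenario and recording response times.
import threading
import time
import random

results = []
results_lock = threading.Lock()

def send_request():
    """Stand-in for a real HTTP request; sleeps to simulate latency."""
    time.sleep(random.uniform(0.01, 0.03))

def virtual_user(loops):
    """One virtual user: run the scenario `loops` times, timing each pass."""
    for _ in range(loops):
        start = time.perf_counter()
        send_request()
        elapsed = time.perf_counter() - start
        with results_lock:
            results.append(elapsed)

# A "thread group" of 4 virtual users, each looping 5 times.
threads = [threading.Thread(target=virtual_user, args=(5,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} samples, worst {max(results) * 1000:.0f} ms")
```

Real tools add the pieces this sketch omits: request recording, parameterization, assertions on responses, and distribution of thread groups across multiple load-generator machines.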
The document provides an overview of software integration and testing. It discusses integration approaches like top-down and bottom-up integration. It also covers various types of testing like unit testing, integration testing, system testing, and user acceptance testing. The document discusses test planning, metrics, tools, and environments. It provides details on defect tracking, metrics, and strategies for stopping testing.
Alexander Podelko - Context-Driven Performance Testing (Neotys_Partner)
Since its beginning, the Performance Advisory Council aims to promote engagement between various experts from around the world, to create relevant, value-added content sharing between members. For Neotys, to strengthen our position as a thought leader in load & performance testing. During this event, 12 participants convened in Chamonix (France) exploring several topics on the minds of today’s performance tester such as DevOps, Shift Left/Right, Test Automation, Blockchain and Artificial Intelligence.
July webinar | How to Handle the Holiday Retail Rush with Agile Performance T... (Apica)
In this Q&A-style webinar, you'll learn:
1. How and why to load test at least three months prior to the holidays
2. How to integrate CI/CD into your holiday load testing
3. How to determine and evaluate load curves
This document discusses various aspects of software project management and testing. It covers topics like verification and validation, white box and black box testing, unit testing and integration testing. It also discusses testing for web applications, interfaces, security and usability. Parallel testing is discussed as a method to ensure consistency between new and previous software versions.
Holiday Readiness: Best Practices for Successful Holiday Readiness Testing (Apica)
Best Practices for Successful Holiday Readiness Testing: Are you already thinking of, and planning for Black Friday? Learn which load tests to use and why to load test early and often so that you are prepared for the holidays.
- Automating performance tests through continuous integration can provide direct feedback on performance changes after code releases and infrastructure changes. It allows performance issues to be detected and addressed earlier.
- Key best practices include starting with a single important test scenario, focusing on robustness over realism, visualizing trend data over time, and analyzing results to update thresholds and catch regressions.
- The goal is to continuously monitor performance through the pipeline and in production to better understand impacts of changes and flag any performance issues for further investigation. Automated tests complement but do not replace thorough acceptance testing.
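The threshold-based regression check mentioned above can be sketched as a small gate script run by the CI pipeline after each load test. The metric names and limits below are invented for illustration:

```python
# Sketch of a CI performance gate: compare a build's load-test metrics
# against stored thresholds and report any regressions.
# Metric names and threshold values are illustrative, not from any tool.

thresholds = {"p95_ms": 500, "error_rate": 0.01, "throughput_rps": 100}

def check_run(metrics, thresholds):
    """Return a list of human-readable regressions; empty if the run passes."""
    regressions = []
    if metrics["p95_ms"] > thresholds["p95_ms"]:
        regressions.append(
            f"p95 {metrics['p95_ms']} ms exceeds {thresholds['p95_ms']} ms")
    if metrics["error_rate"] > thresholds["error_rate"]:
        regressions.append(
            f"error rate {metrics['error_rate']:.1%} exceeds "
            f"{thresholds['error_rate']:.1%}")
    if metrics["throughput_rps"] < thresholds["throughput_rps"]:
        regressions.append(
            f"throughput {metrics['throughput_rps']} rps below "
            f"{thresholds['throughput_rps']} rps")
    return regressions

run = {"p95_ms": 620, "error_rate": 0.004, "throughput_rps": 110}
for problem in check_run(run, thresholds):
    print("REGRESSION:", problem)
```

In practice the thresholds themselves are updated from trend analysis, as the best-practices above suggest, rather than fixed once and forgotten.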
This document discusses Viewpoint's approach to web API performance testing. It outlines three key checkpoints: (1) ensuring performance during agile sprints through design reviews and trend monitoring, (2) integrating and testing components from different teams, and (3) performing full regression testing before release. It also defines different types of performance testing and describes the tools and processes used, including load testing with Visual Studio, tracking performance metrics, and using dashboards to socialize goals.
Arthur Hicken, Chief Evangelist of Parasoft, @ PSQT 2016 discusses:
• What the shift from automated to continuous means
• How disruption requires changes to how we test software
• Addressing gaps between Dev and Ops
• Technologies that enable Continuous
Load Testing Best Practices: Application complexity is increasing, and the requirements for web performance are growing ever more stringent. Learn more about the three major types of load testing, determine which you need, and how to conduct them.
Faced with two critical goals and low resources, the Kuali Student Test Team developed and implemented a test automation framework based on an open source tool, Tsung, and Amazon's Elastic Compute Cloud (EC2). This automation framework netted success right away, as it allowed the team to identify and regress critical performance issues throughout development.
Kuali developers and test engineers, and technical staff from implementing institutions will learn how industry best practices for performance testing were applied to Kuali Student and will walk away with guidance on how to setup and run an open source performance testing tool to support their needs (live demonstration included).
Load testing with Visual Studio and Azure (Andrew Siemer)
In this presentation we will look at what web performance testing is and the various types of testing that can be performed. We will then dig into Visual Studio 2013 Ultimate to see that the Visual Studio platform is now a real contender in performance testing automation. And we will see how the Visual Studio integration with Visual Studio Online and Azure can take your web performance tests and spin up impressive load tests in a truly useful way.
Performance testing determines how responsive and stable a system is under different workloads. It tests for speed, scalability, stability, and ensures a positive user experience. Types of performance testing include load testing to evaluate behavior under expected loads, and stress testing to find upper capacity limits. Key metrics measured are response time, throughput, and resource utilization. Performance testing involves planning test goals, methodology, implementation, validation of results, and interpreting results. Common tools used for performance testing are JMeter, LoadRunner, and Webload.
DevOps adoption can provide quantifiable returns on investment through improved productivity and quality. Implementing DevOps practices in phases allows organizations to first achieve continuous testing, then continuous delivery, reducing cycle times. Automating processes like builds, testing, and deployments across development, QA and production environments increases staff capacity. Earlier defect detection through practices like "shift left" testing also reduces repair costs. Case studies show potential annual savings of millions from these effects. A DevOps adoption roadmap and workshops can help organizations assess current capabilities and identify high-impact practices to prioritize for their needs.
The document provides an overview of performance testing, including:
- Defining performance testing and comparing it to functional testing
- Explaining why performance testing is critical to evaluate a system's scalability, stability, and ability to meet user expectations
- Describing common types of performance testing like load, stress, scalability, and endurance testing
- Identifying key performance metrics and factors that affect software performance
- Outlining the performance testing process from planning to scripting, testing, and result analysis
- Introducing common performance testing tools and methodologies
- Providing examples of performance test scenarios and best practices for performance testing
Similar to Multiple Dimensions of Load Testing
Load testing is an important part of the performance engineering process. However the industry is changing and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
This document summarizes a presentation on web performance given by Alexander Podelko at WebPerfDays New York 2013. The presentation covered performance basics, the importance of considering both front-end and back-end performance, and different approaches to performance risk mitigation. However, the presenter argued that load testing is still needed to complement these approaches, as it is the only way to verify that a system can handle expected load levels and identify potential multi-user issues. Load testing was discussed in more detail, with examples of how it can be used for performance optimization and debugging. The presentation concluded by emphasizing the importance of taking a holistic, end-to-end view of performance.
The document provides a short history of performance engineering, beginning in the 1960s with the introduction of instrumentation tools for mainframe systems and the first studies of human response times. Key developments include the establishment of the performance engineering community in the 1970s, the first commercial performance analysis tools and distributed computing in the late 1970s, and the publication of early books on software performance engineering and applying existing expertise to web performance in the 1990s. The history shows that performance has been an ongoing concern across different computing paradigms, with new challenges arising with each new technology.
Performance Requirements: CMG'11 slides with notes, pdf (Alexander Podelko)
Performance requirements should be tracked throughout a system's entire lifecycle, from inception through design, development, testing, operations, and maintenance. However, different groups involved at each stage use their own terminology and metrics, making performance requirements confusing. The document aims to provide a holistic view of performance requirements by discussing key metrics like throughput, response time, and concurrency used across the lifecycle. It also addresses issues like ensuring requirements are defined consistently regardless of changing workloads or system optimizations.
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. Still it is important to see a bigger picture beyond stereotypical last-moment load testing. There are different ways to create load; a single approach may not work in all situations. Many tools allow you to use different ways of recording/playback and programming. This session discusses pros and cons of each approach, when it can be used and what tool's features we need to support it.
Performance Requirements: the Backbone of the Performance Engineering Process (Alexander Podelko)
Performance requirements should be tracked from a system's inception through its whole lifecycle, including design, development, testing, operations, and maintenance. They are the backbone of the performance engineering process. However, different groups of people are involved at each stage, and each uses its own vision, terminology, metrics, and tools, which makes the subject confusing when you go into details. The presentation discusses existing issues and approaches in their relationship with the performance engineering process.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
An invited talk given by Mark Billinghurst on Research Directions for Cross Reality Interfaces. This was given on July 2nd 2024 as part of the 2024 Summer School on Cross Reality in Hagenberg, Austria (July 1st - 7th)
Blockchain technology is transforming industries and reshaping the way we conduct business, manage data, and secure transactions. Whether you're new to blockchain or looking to deepen your knowledge, our guidebook, "Blockchain for Dummies", is your ultimate resource.
Best Practices for Effectively Running dbt in Airflow.pdfTatiana Al-Chueyr
As a popular open-source library for analytics engineering, dbt is often used in combination with Airflow. Orchestrating and executing dbt models as DAGs ensures an additional layer of control over tasks, observability, and provides a reliable, scalable environment to run dbt models.
This webinar will cover a step-by-step guide to Cosmos, an open source package from Astronomer that helps you easily run your dbt Core projects as Airflow DAGs and Task Groups, all with just a few lines of code. We’ll walk through:
- Standard ways of running dbt (and when to utilize other methods)
- How Cosmos can be used to run and visualize your dbt projects in Airflow
- Common challenges and how to address them, including performance, dependency conflicts, and more
- How running dbt projects in Airflow helps with cost optimization
Webinar given on 9 July 2024
Best Practices for Effectively Running dbt in Airflow.pdf
Multiple Dimensions of Load Testing
1. Multiple Dimensions of Load Testing
Alexander Podelko
alex.podelko@oracle.com
alexanderpodelko.com/blog
@apodelko
Performance & Capacity 2015 by CMG
November 2, 2015
2. Agenda
• Load Testing
• Five [New] Load Testing Dimensions
- Environment
- Load Generation
- Testing Approach
- Life-Cycle Integration
- Feedback and Analysis
Disclaimer: The views expressed here are my personal views only and do not necessarily represent those of my current or previous employers. All brands and trademarks mentioned are the property of their owners.
4. Load Testing Process
Collect Requirements → Define Load → Run Tests → Analyze Results
• Goals are met → Done
• Goals are not met → Modify System → back to Run Tests
Traditional View
5. The Stereotype
• Load / Performance Testing is:
– Last moment before deployment
– Last step in the waterfall process
– Protocol level record-and-playback
– Large corporations
– Expensive tools requiring special skills
– Lab environment
– Scale-down environment
– …
6. Load Testing
• Traditional load testing is not enough anymore
• New industry trends change a lot
– Cloud
– Continuous Integration / Delivery / Deployment
– DevOps
– Agile
• Some even say that load testing is not needed anymore
– Due to other ways to mitigate performance risk
11. What Else Load Testing Adds
• Performance optimization
– Apply exactly the same load
– See if the change makes a difference
• Debugging/verification of multi-user issues
• Testing self-regulation functionality
– Such as auto-scaling or changing the level of service depending on load
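Tests of self-regulation functionality such as auto-scaling are usually driven by a stepped load profile that holds each level long enough for scaling policies to trigger. A minimal sketch; the user counts and step durations are illustrative assumptions, not recommendations for any real system:

```python
# Sketch: a stepped load profile for exercising auto-scaling.
# Step sizes and durations below are illustrative assumptions.

def stepped_profile(start_users, step, steps, step_duration_s):
    """Yield (elapsed_seconds, concurrent_users) points for a step load."""
    for i in range(steps):
        yield (i * step_duration_s, start_users + i * step)

profile = list(stepped_profile(start_users=50, step=50, steps=4, step_duration_s=300))
for t, users in profile:
    print(f"t={t:>4}s  users={users}")
# A driver would hold each level long enough for scaling policies
# (often based on several minutes of sustained metrics) to react.
```
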
12. So What Is Going On?
• I believe that load testing is here to stay, but it should fully embrace the change
– Not one-time, but dynamic
• Many things that used to be practically a given have become a hard choice from a continuum of options (a dimension instead of a point)
– Environment, Load Generation, Testing Approach, Life-Cycle Integration, Feedback and Analysis
14. Load Testing Process
Collect Requirements → Define Load → Run Tests → Analyze Results
• Goals are met → Done
• Goals are not met → Modify System → back to Run Tests
Environment?
15. Deployment
• Lab vs. Service (SaaS) vs. Cloud (IaaS)
– For both the system and the load generators
• Test vs. Production
• No best solution – it depends on your goals and system
16. Scenarios
• System validation for high load
– Outside load (service or cloud), production system
– Wider scope, lower repeatability
• Performance optimization / troubleshooting
– Isolated lab environment
– Limited scope, high repeatability
• Testing in Cloud
– Lowering costs (in case of periodic tests)
– Limited scope, low repeatability
17. Find Your Way
• If performance risk is high, it may be a combination of environments, e.g.
– Outside tests against the production environment to test for max load
– Lab for performance optimization / troubleshooting
– Limited performance environments used as part of continuous integration
18. Scaling
• Becomes critical as you get to a large number of virtual users
• The number of supported users per unit of computing power may differ drastically
– Depending on tool, protocol, scenario, system…
• If you need to deploy on a large number of machines, automation is helpful
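The "may differ drastically" point can be illustrated with back-of-the-envelope sizing arithmetic. The users-per-generator figures below are hypothetical; real numbers vary drastically by tool, protocol and scenario:

```python
import math

# Illustrative sizing arithmetic: the users-per-generator figures are
# hypothetical assumptions, not measurements of any particular tool.
def generators_needed(target_users, users_per_generator):
    """Number of load generator machines needed for a target user count."""
    return math.ceil(target_users / users_per_generator)

# Protocol-level virtual users are typically cheap...
print(generators_needed(50_000, users_per_generator=2_000))   # 25
# ...while browser-based (UI-level) users are far heavier.
print(generators_needed(50_000, users_per_generator=50))      # 1000
```
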
20. Load Testing Process
Collect Requirements → Define Load → Run Tests → Analyze Results
• Goals are met → Done
• Goals are not met → Modify System → back to Run Tests
Create Test Assets
21. Record and Playback: Protocol Level
[Diagram: the load testing tool drives virtual users on a load generator, which replays protocol-level traffic over the network to the application server]
22. Considerations
• Usually doesn't work for testing components
• Each tool supports a limited number of technologies (protocols)
• Some technologies are very time-consuming to script
• Workload validity is not guaranteed when there is sophisticated logic on the client side
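A minimal sketch of what protocol-level playback boils down to: replaying a "recorded" list of HTTP requests and timing each response. A real tool captures the request list from live traffic; here it is hand-written, and a local stand-in server plays the application under test:

```python
# Sketch of protocol-level playback against a stand-in local server.
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# A real tool records these from live traffic; these paths are made up.
recorded = ["/login", "/search?q=x", "/logout"]

timings = []
for path in recorded:
    start = time.perf_counter()
    with urllib.request.urlopen(base + path) as resp:
        resp.read()
    timings.append((path, time.perf_counter() - start))

server.shutdown()
for path, seconds in timings:
    print(f"{path}: {seconds * 1000:.1f} ms")
```
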
23. Record and Playback: UI Level
[Diagram: the load testing tool drives virtual users through real browsers on a load generator, which exercises the application server over the network at the UI level]
24. Considerations
• Scalability
– Still requires more resources
• Supported technologies
• Timing accuracy
• Playback accuracy
– For example, for HtmlUnit
26. Considerations
• Requires programming / access to APIs
• Tool support
– Extensibility
– Language support
• May require more resources
• Environment may need to be set up
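A sketch of the programming approach: driving load through an API call directly and computing latency percentiles in code. `service_call` is a hypothetical stand-in for a real client-library call; its sleep simulates server processing time:

```python
# Sketch of generating load programmatically instead of recording a
# protocol. `service_call` is a hypothetical stand-in for a real API call.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def service_call():
    time.sleep(random.uniform(0.001, 0.005))  # simulated work

def timed_call(_):
    start = time.perf_counter()
    service_call()
    return time.perf_counter() - start

# 10 concurrent "virtual users" issuing 100 calls in total.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_call, range(100)))

print(f"median: {statistics.median(latencies) * 1000:.2f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.2f} ms")
```
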
28. Load Testing Process
Collect Requirements → Define Load → Run Tests → Analyze Results
• Goals are met → Done
• Goals are not met → Modify System → back to Run Tests
Sounds like running a fixed set of tests, doesn't it?
29. Mentality Change
• Making performance everyone’s job
• Late record/playback performance testing -> Early Performance Engineering
• System-level requirements -> Component-level requirements
• Record/playback approach -> Programming to generate load / create stubs
• "Black Box" -> "Grey Box"
30. Performance Testing
• Usually is not separated from:
– Tuning
• System should be properly tuned
– Troubleshooting / Diagnostics
• Problems should be diagnosed to the point where it is clear how to handle them
– Capacity Planning / Sizing
• "Pure" performance testing is rare
– Regression testing ?
31. Exploratory Testing
• Rather alien to performance testing, but probably more relevant here than for functional testing
• We learn about the system's performance as we start to run tests
– Only guesses for new systems
• Rather a performance engineering process bringing the system to the proper state than just testing
35. Load Testing Process
Collect Requirements → Define Load → Run Tests → Analyze Results
• Goals are met → Done
• Goals are not met → Modify System → back to Run Tests
No integration points at all!
36. Agile Support
• Agile / CI support is becoming the main theme
• Integration with Continuous Integration servers
– Jenkins, Hudson, etc.
– Several tools announced integration recently
– Making load tests part of the automatic build process
• Automation support
• Ease of use
• Support of the newest technologies
37. Automation: Difficulties
• Complicated setups
• Long list of possible issues
• Complex results (no pass/fail)
• Not easy to compare two result sets
• Changing Interfaces
• Tests may be long
38. Automation: Considerations
• You need to know the system well enough to make automation meaningful
• If the system is new, overheads are too high
– So there is almost no automation in traditional environments
• If the same system is tested again and again
– It makes sense to invest in setting up automation
• Automated interfaces should be stable enough
– APIs are usually more stable in early stages
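One way to get a pass/fail out of complex results, and to compare two result sets, is checking the current run against a stored baseline with a tolerance. A sketch; the metric names and the 15% tolerance are illustrative assumptions:

```python
# Sketch of turning "complex results" into a CI pass/fail by comparing
# against a baseline. Metric names and tolerance are illustrative.

def check_against_baseline(current, baseline, tolerance=0.15):
    """Return regressions: metrics worse than baseline * (1 + tolerance)."""
    regressions = []
    for metric, base_value in baseline.items():
        value = current.get(metric)
        if value is not None and value > base_value * (1 + tolerance):
            regressions.append((metric, base_value, value))
    return regressions

baseline = {"login_p90_ms": 250, "search_p90_ms": 400}
current = {"login_p90_ms": 260, "search_p90_ms": 520}

failures = check_against_baseline(current, baseline)
for metric, base, now in failures:
    print(f"FAIL {metric}: {now} ms vs baseline {base} ms")
print("build", "FAILED" if failures else "PASSED")
```
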
39. Tool Support
• There was not much tool support until recently
• Some vendors claimed that their load testing tool better fits agile processes
– Often it just meant that the tool was a little easier to use
• It was difficult to find out what was available
– Ability to automate: command line, API, data access
– Ability to extend scripts
– Supported technologies
40. Tool Support: Recent Developments
• Recently agile support became the main theme
– A lot of new developments
• Integration with Continuous Integration Servers
– Several tools announced integration recently
• Cloud integration
• Support of newest technologies
42. Load Testing Process
Collect Requirements → Define Load → Run Tests → Analyze Results
• Goals are met → Done
• Goals are not met → Modify System → back to Run Tests
It isn't so simple anymore
43. Reporting and Analysis
• Good integrated reporting and analysis greatly increases efficiency
– Getting all data in one place, synchronized
– Integration of monitoring data is a great help
• A weak spot of many open source tools
44. Monitoring
• System level
• Application level (APM)
– AppDynamics, New Relic, Dynatrace, etc.
– Many tools have integration
• Integration allows analyzing monitoring data together with test results
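A sketch of the synchronization step: selecting the monitoring samples that fall inside a test's time window so they can be analyzed together with the test results. The timestamps and CPU values below are made up:

```python
# Sketch: align monitoring samples with a test's time window.
# Timestamps and values are made-up illustrative data.

def samples_in_window(samples, start, end):
    """Keep only (timestamp, value) samples inside [start, end]."""
    return [(t, v) for t, v in samples if start <= t <= end]

cpu_samples = [(100, 35.0), (160, 72.0), (220, 88.0), (280, 40.0)]  # (epoch s, CPU %)
test_window = (150, 250)  # test start / end on the same clock

aligned = samples_in_window(cpu_samples, *test_window)
peak = max(v for _, v in aligned)
print(f"samples during test: {len(aligned)}, peak CPU: {peak:.0f}%")
```
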
45. The Main Change in Monitoring
• Configuration becomes dynamic, changing on the fly
• Auto-scaling, auto-provisioning, etc.
– Challenge to monitor all moving parts
– Challenge to compare results across dynamic configurations
– Shift to application monitoring
46. The Main Change in Analysis
• Not only comparison with the requirements
• Many different forms of analysis depending on the tests
– Adjusting to the configuration / type of the test
• Component testing
– Automatic analysis / alerting
• Continuous Integration / Delivery / Deployment
– Input for tuning / optimization / sizing
47. Summary
• The industry is rapidly changing – performance testing should change too
– Fully embrace agile
• Five [new] dimensions introduced by the changes
– Environment, Load Generation, Testing Approach, Life-Cycle Integration, Feedback and Analysis
• Good tools help, but there is no best tool – it depends on your needs