This document provides an overview of a presentation on the economic impacts of performance testing practices. It discusses two examples: 1) Ford's Pinto, whose design flaw caused fuel tanks to explode in rear-end collisions. Weighing the options of fixing the flaw versus paying out damages, the fix would have cost $137 million but avoided far higher future costs. 2) An example project prioritizing which processes to automate for performance testing. Following the Pareto principle, automating the top 20% of processes that generate 80% of the load yields far better results than automating all processes, saving significant time and resources. The presentation emphasizes how taking a long-term view of costs and prioritizing along Pareto lines can dramatically improve the economics of performance testing.
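The Pareto prioritization in the second example is easy to make concrete; here is a minimal sketch with invented process names and load shares:

```typescript
// Pareto-style prioritization: pick the smallest set of business processes
// that together account for ~80% of total system load. Process names and
// load figures below are hypothetical.
interface Process { name: string; loadShare: number } // loadShare as a fraction of total load

function paretoSubset(processes: Process[], target = 0.8): Process[] {
  const sorted = [...processes].sort((a, b) => b.loadShare - a.loadShare);
  const picked: Process[] = [];
  let covered = 0;
  for (const p of sorted) {
    if (covered >= target) break;
    picked.push(p);
    covered += p.loadShare;
  }
  return picked;
}

const processes: Process[] = [
  { name: "search", loadShare: 0.45 },
  { name: "checkout", loadShare: 0.25 },
  { name: "login", loadShare: 0.15 },
  { name: "profile-edit", loadShare: 0.10 },
  { name: "admin-report", loadShare: 0.05 },
];

// Automating 3 of 5 processes already covers 85% of the load:
console.log(paretoSubset(processes).map(p => p.name)); // [ 'search', 'checkout', 'login' ]
```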
Product development is inherently risky. While lean and agile methods are praised for supporting rapid customer feedback through experiments and continuous iteration, teams could do far better at prioritizing by using basic modeling techniques from finance. This talk focuses on quantitative risk modeling when developing new products or services that do not yet have a well-understood product/market fit. Using modeling approaches like Monte Carlo simulation and Cost of Delay scenarios, combined with qualitative tools like the Lean Canvas and Value Dynamics, we will explore how lean innovation teams can bring scientific rigor back into their process.
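To make the quantitative side concrete, here is a minimal Monte Carlo sketch of a Cost of Delay estimate; the distributions and dollar figures are invented for illustration, not taken from the talk:

```typescript
// Monte Carlo sketch of a Cost of Delay decision: given uncertain weekly
// value and an uncertain delivery delay, estimate the distribution of value
// lost to waiting. All distributions and numbers are illustrative assumptions.
function sampleUniform(min: number, max: number): number {
  return min + Math.random() * (max - min);
}

function costOfDelaySample(): number {
  const weeklyValue = sampleUniform(10_000, 50_000); // $/week the feature could earn
  const delayWeeks = sampleUniform(2, 12);           // uncertain schedule slip
  return weeklyValue * delayWeeks;                   // value lost while we wait
}

const runs = 100_000;
const samples = Array.from({ length: runs }, costOfDelaySample).sort((a, b) => a - b);
const mean = samples.reduce((s, x) => s + x, 0) / runs;
const p90 = samples[Math.floor(runs * 0.9)];

console.log(`mean cost of delay ≈ $${Math.round(mean)}, 90th percentile ≈ $${Math.round(p90)}`);
```

Comparing these distributions across candidate features, rather than single-point guesses, is what lets a team prioritize with some rigor.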
1. Etsy moved from a waterfall process with long development cycles and infrequent deployments to a continuous deployment model in which engineers deploy small code changes frequently.
2. Continuous deployment lets Etsy experiment continuously and make small iterative improvements, reducing the risk of outages and allowing issues to be addressed quickly.
3. Etsy now deploys code changes more than 25 times per day on average, keeping deployments fast and low-risk through techniques like feature flags (sketched below) and extensive monitoring.
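A minimal feature-flag sketch, not Etsy's actual implementation, showing how a percentage rollout with an instant kill switch works; flag names and percentages are hypothetical:

```typescript
// Dark-deploy code behind a flag, ramp it up gradually, and fall back
// instantly by flipping the flag off.
const flags: Record<string, number> = {
  "new-checkout": 5, // percent of traffic that sees the new code path
};

function isEnabled(flag: string, userId: number): boolean {
  const rollout = flags[flag] ?? 0;
  return userId % 100 < rollout; // stable bucketing per user
}

function newCheckout(): string { return "new checkout flow"; }
function oldCheckout(): string { return "old checkout flow"; }

function checkout(userId: number): string {
  return isEnabled("new-checkout", userId) ? newCheckout() : oldCheckout();
}

console.log(checkout(3));  // in the 5% bucket -> new flow
console.log(checkout(42)); // outside the bucket -> old flow
```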
This document provides advice on how to introduce new engineering practices and technologies to a team or business. It discusses several examples of proposed new practices and technologies such as test automation, continuous integration, refactoring, and DevOps. For each, it advises how to demonstrate the benefits through examples and metrics, how to gain buy-in from various stakeholders, and pitfalls to avoid such as claiming a practice is necessary just because a famous person recommends it. The overall message is that new practices must provide clear value and be introduced through demonstration and collaboration rather than dictates.
Project Controls people can spend most of their time looking at the past rather than the future; this presentation challenges that and asks whether we should not be looking forward more. It shows how Earned Value can be used to predict future performance by reviewing productivity against plan. This is a powerful tool, applicable across all industries and all phases of a project, and easy to calculate from the data we already collect each month (a sketch of the standard formulas follows below), so why are we not using it to look forward and be more proactive in our control of projects? The second part of the presentation looks at bulk material forecasting and its challenges.
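For reference, the standard Earned Value formulas the presentation builds on, with illustrative monthly figures:

```typescript
// Standard Earned Value formulas, computable from data most projects already
// collect each month. The BAC/PV/EV/AC figures below are illustrative.
const BAC = 1_000_000; // Budget At Completion
const PV = 400_000;    // Planned Value of work scheduled to date
const EV = 350_000;    // Earned Value of work actually completed
const AC = 420_000;    // Actual Cost incurred to date

const CPI = EV / AC;   // Cost Performance Index (<1 means over budget)
const SPI = EV / PV;   // Schedule Performance Index (<1 means behind schedule)
const EAC = BAC / CPI; // Estimate At Completion, assuming current productivity holds
const ETC = EAC - AC;  // Estimate To Complete

console.log({ CPI: CPI.toFixed(2), SPI: SPI.toFixed(2), EAC: Math.round(EAC), ETC: Math.round(ETC) });
// { CPI: '0.83', SPI: '0.88', EAC: 1200000, ETC: 780000 }
```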
Conjoint analysis is a technique used to understand how people value different attributes or features of a product or service. It involves showing people combinations of attributes and levels and asking them to choose their preferred options. This allows researchers to model the importance of attributes and understand how people trade off different features. The document provides an overview of conjoint analysis, including what problems it can address, how it works, and key considerations in designing a conjoint study. An example study is presented to illustrate attribute selection, task design, and the type of outputs generated, including part-worth utilities and importance scores.
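As a toy illustration of the idea (not a real conjoint estimator, which would fit a choice model such as multinomial logit), one can approximate level preferences by counting how often options containing each level are chosen versus shown; the attributes and choice data below are made up:

```typescript
// Crude level-preference scores from choice tasks: for each attribute level,
// the share of times an option containing it was chosen when shown.
interface Option { brand: string; price: string }
interface Task { shown: Option[]; chosenIndex: number }

const tasks: Task[] = [
  { shown: [{ brand: "A", price: "$10" }, { brand: "B", price: "$15" }], chosenIndex: 0 },
  { shown: [{ brand: "B", price: "$10" }, { brand: "A", price: "$15" }], chosenIndex: 0 },
  { shown: [{ brand: "A", price: "$10" }, { brand: "B", price: "$10" }], chosenIndex: 0 },
];

function levelScores(attr: keyof Option): Map<string, number> {
  const shown = new Map<string, number>();
  const chosen = new Map<string, number>();
  for (const t of tasks) {
    t.shown.forEach((opt, i) => {
      const level = opt[attr];
      shown.set(level, (shown.get(level) ?? 0) + 1);
      if (i === t.chosenIndex) chosen.set(level, (chosen.get(level) ?? 0) + 1);
    });
  }
  const scores = new Map<string, number>();
  for (const [level, n] of shown) scores.set(level, (chosen.get(level) ?? 0) / n);
  return scores;
}

console.log(levelScores("brand")); // A chosen 2/3 of the times shown, B 1/3
console.log(levelScores("price")); // $10 chosen 3/4 of the times shown, $15 never
```

The spread of scores within an attribute hints at its importance, which is the intuition behind the part-worth utilities and importance scores the document describes.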
Craig Sullivan, a well-known conversion consultant from the UK. Slides from his Saturday keynote at Conversion Hotel 2014. #CH2014 #enjoy
This document summarizes the story of quintly, a social media analytics SaaS company founded in Germany in 2010 by two technical co-founders. It details their journey from a small prototype with $20k in ARR in 2010 to a team of 34 with $2.5m in ARR by 2017. Key learnings included bootstrapping the business, doing early sales themselves, focusing on revenue over investors, keeping the business family-owned, and constantly innovating the product. The co-founders learned the importance of company culture, standardized processes, and not being afraid to fail along the way.
In this session, we explain how to mine GA for broken device experiences, flows, funnel blocks and more. Using a new grid tool we've developed, you can pull multi-dimensional segmented funnel and metric data from Google Analytics; we explain how it works, why you need it, and what problems it solves. Find where your site is leaking money, using data.
In this talk, I show the key shortcuts to stop doing stupid testing and move towards innovative and transformative design & build methodologies, including innovation through split-testing exploration.
The document discusses common mistakes made in A/B testing and provides advice to avoid false or misleading results. It recommends integrating analytics to properly track and segment test results, running tests for sufficient time periods that include full business cycles to avoid false positives or negatives, and performing thorough quality assurance testing to prevent browser or device-related issues from influencing outcomes. The key is to design hypotheses based on solid customer insights and data rather than guesses, and continue testing until a representative sample is collected rather than stopping early just because a test appears significant.
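A concrete consequence of "continue testing until a representative sample is collected" is planning the sample size up front. Here is a sketch using the standard two-proportion approximation; the baseline rate and lift are illustrative:

```typescript
// Required sample size per variant for an A/B test on conversion rates,
// using the standard two-proportion formula at 95% confidence / 80% power.
function sampleSizePerVariant(p1: number, p2: number, zAlpha = 1.96, zBeta = 0.8416): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// Detecting a lift from 5% to 6% conversion:
console.log(sampleSizePerVariant(0.05, 0.06)); // ≈ 8,156 visitors per variant
```

Run the test until both this sample size and at least one full business cycle have been reached, rather than stopping the moment the result looks significant.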
How To Save $585,000,000 in 2015! Spend $2,250,000 on pavement management, using eco-friendly recycling and preservation treatments.
DevDay (http://devday.pl), 20th of September 2013, Kraków. Video at http://www.youtube.com/watch?v=L4eTOvq2WmM&feature=c4-overview-vl&list=PLBMFXMTB7U74NdDghygvBaDcp67owVUUF
This document discusses using Python for startups and minimum viable products. It explains that startups aim for minimum viable products and continuous change rather than long-term plans. The Lean Startup methodology advocates building products iteratively based on customer feedback and pivoting if needed. Python is well-suited for startups because it allows for fast development and integration with external services commonly used by startups.
- The document describes challenges faced by an organization in achieving enterprise agility, including a lack of alignment on objectives and approach between stakeholders like the product owner, delivery manager, and client.
- Meetings are held to align stakeholders on clear objectives, such as delivering a demo-able product incrementally each sprint rather than 30 features. The delivery manager proposes focused investments to improve automation and quality.
- For the next sprint, the product owner and delivery manager agree to deliver a portion of the demo-able product incrementally while automating all accepted items to prevent regressions. The lessons highlight the importance of alignment on objectives and investments over lectures on mindset.
This document discusses how pair programming can strengthen teams. It provides an overview of pair programming, highlighting benefits like higher code quality, more maintainable code, and increased job satisfaction. It also addresses potential challenges with pairing and provides tips for how to successfully implement pairing, including soliciting willing participants, coordinating schedules, and ensuring proper infrastructure.
This document provides an overview of the Toyota Production System (TPS). It discusses that TPS aims to continuously improve by removing waste, and that being "lean" is a never-ending journey of improvement. Key aspects of TPS discussed include its focus on flow, pull systems, respect for people, standardized work, visual management, and the "14 principles" that guide Toyota's long-term philosophy. The document highlights benefits of approaches like one-piece flow and how Toyota develops people and measures success holistically across factors like quality, cost and safety.
The document discusses Toyota's approach to one piece flow and problem solving. It describes how Toyota implements one piece flow throughout the entire business process improvement cycle and values delivering benefits in small, testable pieces. Toyota also takes a three-level approach to problem solving to thoroughly investigate issues, set challenging targets, and implement countermeasures to continuously improve.
The document discusses continuous performance optimization of Java applications. It proposes adding an optimization step to the continuous integration pipeline to evolve performance testing beyond just finding regressions. This would allow configurations to be adapted to new application features and releases to find performance improvements. The approach is demonstrated on a flight search microservice, where different garbage collection algorithms and configuration parameters are evaluated to optimize throughput, response times, memory usage and stability under increasing load.
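A minimal sketch of such an optimization step as a Node script, assuming a hypothetical run-benchmark.sh that starts the service, drives load, and prints throughput; the GC flags shown are standard HotSpot options:

```typescript
// Run the same load test against several JVM configurations and keep the
// best-performing one. The benchmark command and its output contract are
// placeholders for your own tooling.
import { execSync } from "node:child_process";

const gcConfigs = [
  "-XX:+UseG1GC",
  "-XX:+UseParallelGC",
  "-XX:+UseZGC",
];

let best = { config: "", throughput: -Infinity };
for (const config of gcConfigs) {
  // Assumed contract: run-benchmark.sh starts the service with JAVA_OPTS,
  // drives load against it, and prints requests/second as its only output.
  const out = execSync(`JAVA_OPTS="${config}" ./run-benchmark.sh`, { encoding: "utf8" });
  const throughput = parseFloat(out.trim());
  console.log(`${config}: ${throughput} req/s`);
  if (throughput > best.throughput) best = { config, throughput };
}
console.log(`best config: ${best.config} (${best.throughput} req/s)`);
```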
The document discusses the concept of observability in performance engineering and its importance for understanding application performance. It defines observability as watching application behavior using response metrics and resource utilization metrics to understand the digital user experience. The document provides examples of integrating load testing tools with application performance monitoring tools to actively monitor applications in production and observe performance across releases. It emphasizes the need to analyze raw metrics from multiple perspectives to gain useful insights.
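One concrete example of analyzing raw metrics from multiple perspectives: the mean of raw response-time samples can mislead while percentiles tell the real story (the sample data is invented):

```typescript
// Averages hide tail latency, so compute percentiles from the raw samples too.
function percentile(sorted: number[], p: number): number {
  return sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * p))];
}

const responseTimesMs = [120, 95, 110, 105, 2300, 130, 98, 115, 102, 2500].sort((a, b) => a - b);
const mean = responseTimesMs.reduce((s, x) => s + x, 0) / responseTimesMs.length;

console.log(`mean: ${mean} ms`);                             // 567.5 ms, dominated by two outliers
console.log(`p50:  ${percentile(responseTimesMs, 0.5)} ms`); // 115 ms, the typical user
console.log(`p90:  ${percentile(responseTimesMs, 0.9)} ms`); // 2500 ms, the tail worth alerting on
```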
This document discusses measuring and addressing CPU throttling in containerized environments. It describes how CPU limits work at the kernel level and how to measure throttling using cgroup metrics. The author proposes adding 1.3 times the maximum throttled CPUs to the container's CPU limit to eliminate throttling. A case study shows this approach reduced response times and garbage collection pauses. The document also discusses how the JVM can increase demand as CPU limits increase and the importance of tailoring limits to workloads.
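A sketch of measuring throttling from cgroup metrics and applying the proposed headroom heuristic; the paths assume cgroup v2, and the observed peak figure is illustrative:

```typescript
// Read CPU throttling counters from the cgroup v2 cpu.stat file.
// (cgroup v1 exposes similar fields under cpu,cpuacct/cpu.stat.)
import { readFileSync } from "node:fs";

function readCpuStat(path = "/sys/fs/cgroup/cpu.stat"): Record<string, number> {
  const stat: Record<string, number> = {};
  for (const line of readFileSync(path, "utf8").trim().split("\n")) {
    const [key, value] = line.split(" ");
    stat[key] = Number(value);
  }
  return stat;
}

const stat = readCpuStat();
const throttledShare = stat.nr_throttled / stat.nr_periods; // fraction of periods throttled
console.log(`throttled in ${(throttledShare * 100).toFixed(1)}% of periods`);
console.log(`total throttled time: ${stat.throttled_usec / 1e6}s`);

// Heuristic from the talk: if peak demand hits N CPUs, set the limit to ~1.3 * N
// so bursts (GC threads, JIT, traffic spikes) no longer run into throttling.
const observedPeakCpus = 2.0; // measure this from your monitoring; value is illustrative
console.log(`suggested CPU limit: ${(1.3 * observedPeakCpus).toFixed(1)} CPUs`);
```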
This document discusses using Keptn to automate service level indicator (SLI) evaluation and performance validation with service level objectives (SLOs). It describes two use cases: 1) automating SLI evaluation over a timeframe, and 2) integrating performance validation as a self-service capability. The document outlines how Keptn works under the hood, including defining SLIs and SLOs in YAML and scoring SLIs against SLO criteria. It demonstrates integrating Keptn with existing pipelines and monitoring tools. Finally, it discusses options for installing only the Keptn quality gate functionality or the full Keptn platform.
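A minimal sketch of quality-gate scoring in the spirit of what Keptn does with SLIs and SLOs; the objective names, criteria, and scoring thresholds are illustrative, not Keptn's actual API:

```typescript
// Each SLI is checked against pass and warning criteria; the total score
// decides whether the quality gate passes, warns, or fails.
interface Objective { sli: string; value: number; pass: number; warn: number } // lower is better

function score(objectives: Objective[]): { total: number; verdict: string } {
  let points = 0;
  for (const o of objectives) {
    if (o.value <= o.pass) points += 1;        // full score
    else if (o.value <= o.warn) points += 0.5; // half score inside the warning band
    console.log(`${o.sli}: ${o.value} (pass<=${o.pass}, warn<=${o.warn})`);
  }
  const total = (points / objectives.length) * 100;
  return { total, verdict: total >= 90 ? "pass" : total >= 75 ? "warn" : "fail" };
}

console.log(score([
  { sli: "response_time_p95_ms", value: 310, pass: 300, warn: 400 },
  { sli: "error_rate_percent", value: 0.4, pass: 1, warn: 2 },
])); // { total: 75, verdict: 'warn' }
```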
The document discusses how to automate performance testing in DevOps. It outlines an automated analysis workflow involving defining metrics, comparing metrics to thresholds and baselines, pattern analysis, and test results. It also discusses script automation, reducing false positives, and integrating different types of performance tests like load, stress, and spike tests. The goal is to automate performance testing to support the rapid delivery cycles of DevOps.
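A sketch of the threshold-and-baseline comparison step; metric names and tolerances are assumptions:

```typescript
// Compare the current run's metrics against fixed thresholds and against a
// baseline, failing the pipeline on regression.
interface Metrics { p95Ms: number; errorRate: number }

function evaluate(current: Metrics, baseline: Metrics, tolerance = 0.15): string[] {
  const failures: string[] = [];
  if (current.errorRate > 0.01) failures.push("error rate above absolute threshold (1%)");
  if (current.p95Ms > baseline.p95Ms * (1 + tolerance))
    failures.push(`p95 regressed more than ${tolerance * 100}% vs baseline`);
  return failures;
}

const failures = evaluate({ p95Ms: 380, errorRate: 0.004 }, { p95Ms: 300, errorRate: 0.003 });
if (failures.length > 0) {
  console.error(failures.join("\n"));
  process.exit(1); // break the build so the regression is caught in the pipeline
}
```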
This document discusses monitoring the performance of Azure data pipelines. It recommends using Azure Log Analytics to monitor pipelines near real-time, as traditional testing tools don't support log analytics. It provides steps to configure a Log Analytics workspace and enable diagnostics logging. Sample queries are shown to monitor metrics like pipeline durations, activity times and throughput. Monitoring pipelines helps assess performance, stability and confidence in data processing.
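A hedged sketch of such a query, sent to the Log Analytics query REST endpoint; the ADFPipelineRun table and its Start/End/Status/PipelineName columns come from Data Factory's resource-specific diagnostic logs and may differ in your environment:

```typescript
// Query average pipeline durations from a Log Analytics workspace.
async function queryPipelineDurations(workspaceId: string, token: string) {
  const query = `
    ADFPipelineRun
    | where Status == "Succeeded"
    | summarize avg_duration_s = avg(datetime_diff('second', End, Start)) by PipelineName`;
  const res = await fetch(`https://api.loganalytics.io/v1/workspaces/${workspaceId}/query`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({ query, timespan: "P1D" }), // last 24 hours
  });
  return res.json();
}

queryPipelineDurations("<workspace-id>", process.env.AZURE_TOKEN ?? "").then(console.log);
```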
This document discusses common performance myths and challenges related to cloud migrations, SaaS deployments, and cloud native applications. It provides approaches to assess performance for each scenario. Some key points covered include:
- Common myths like unlimited scaling without effort, cost savings from the cloud alone, and SaaS vendors handling all performance.
- Challenges for migrations include capacity planning, auto-scaling issues, and latency. For SaaS, risks include customization impacts and integration issues. Cloud native challenges are distributed architectures and configuration problems.
- The document recommends approaches like baseline benchmarking, resilience testing, and 360-degree validation across user, cloud, data center and external layers to overcome the myths and address the challenges.
Joerek van Gaalen discussed his experience conducting large-scale performance tests, including a test with 2 million virtual users. He explained how to simulate that level of load using 800 load generators across different cloud providers. He emphasized optimizing tests by tuning scripts, controllers, and agents to minimize resources. Van Gaalen also stressed the importance of making tests realistic by mimicking production usage patterns and balancing load. Specific issues like CDN performance limitations and uneven load distribution were also addressed.
Ankur Jain presented on using the User Timing API to measure user perceived performance of single-page applications without an APM tool. The User Timing API allows developers to mark milestones and measure the time between them to understand performance. While it requires code changes, it provides accurate, real-user monitoring of applications across browsers. Some limitations are that it requires knowledge of the application's user flow and code access to implement the markers.
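The mechanics are small; here is a sketch of marking and measuring a single-page-app route change with the User Timing API (the milestone names are examples):

```typescript
// Browser code: mark the start of a route change and the moment the view is
// usable, then measure the time between the two marks.
performance.mark("search-route-start");

// ... later, once the SPA has rendered the search results and they are usable:
performance.mark("search-route-rendered");
performance.measure("search-route", "search-route-start", "search-route-rendered");

// Read the measurements locally, or ship them to an analytics backend as RUM data.
for (const entry of performance.getEntriesByType("measure")) {
  console.log(`${entry.name}: ${entry.duration.toFixed(1)} ms`);
}
```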
The document summarizes a presentation given at the Performance Advisory Council in Santorini, Greece in February 2020. The presentation advocated for using cloud-based performance engineering tools for their ease of use, ability to automatically correlate data, and to scale testing on demand. It cautioned that adopting new tools requires maintaining the same performance testing culture to avoid generating misleading or inaccurate results.
The document discusses the flawed approach of "racing performance" without proper preparation. It argues that jumping straight into performance testing without checking components like tires, fuel levels, and ensuring the system is functioning properly leads to "fatal consequences" like crashes. Some reasons this approach still occurs include a lack of performance culture, confusion between performance and load testing, and viewing all problems as requiring load automation. The document promotes better approaches like verifying components, defining standards, stress testing parts individually and together before integration, using meters to monitor system functions, conducting lab tests, and test driving the system before attempting high-stakes races.
This document discusses automating performance testing pipelines. It covers value stream mapping to identify tasks that can be removed, simplified or automated. Automating testing provides benefits like reduced time, allowing specialists to focus on higher-value work, and empowering others to run tests. The document demonstrates automating a JMeter load test, providing tips like using JMeter projects and scripts. It notes that significant time savings are possible from automating not just test execution but test development as well through techniques like UI automation.
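For the execution part, a JMeter run in non-GUI mode can be wrapped in a few lines of script; the file names are placeholders:

```typescript
// Automate a JMeter run from Node using JMeter's standard non-GUI flags:
// -n non-GUI, -t test plan, -l results file, -e/-o generate an HTML report.
import { execSync } from "node:child_process";

execSync(
  "jmeter -n -t load-test.jmx -l results.jtl -e -o report/",
  { stdio: "inherit" } // stream JMeter's summariser output to the console
);
console.log("JMeter run finished; HTML report written to ./report");
```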
The document discusses adding performance verifications to continuous delivery pipelines. It notes that the typical approach to performance testing does not work well for continuous delivery, as test scripts are fragile and it is difficult to identify the cause of issues. The document recommends taking a different approach with the goal of detecting any degradations as early as possible. It suggests implementing unit performance tests to test endpoints in isolation and detect degradations immediately after they are introduced. While unit tests are important, integration and load tests are still needed periodically to test how components interact as a whole. Client-side performance also needs to be considered.
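A sketch of what a unit performance test for one endpoint might look like; the URL, latency budget, and run count are illustrative:

```typescript
// Hit one endpoint in isolation and fail if its median latency exceeds a
// budget, so degradations surface on the commit that introduced them.
async function measureMedianLatency(url: string, runs: number): Promise<number> {
  const timings: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fetch(url);
    timings.push(performance.now() - start);
  }
  timings.sort((a, b) => a - b);
  return timings[Math.floor(runs / 2)];
}

async function main() {
  const median = await measureMedianLatency("http://localhost:8080/api/search?q=test", 20);
  console.log(`median latency: ${median.toFixed(1)} ms (budget 200 ms)`);
  if (median > 200) process.exit(1); // fail the pipeline on degradation
}
main();
```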