Presentation from the October 2011 meeting of the Northern Virginia Test Automation Interest Group on testing web performance.
This document discusses how web design firms can compete with internal GIS teams by providing web-based GIS (WebGIS) applications. It notes that WebGIS requires learning new tools such as JavaScript, AJAX, and RESTful services. To avoid losing this work to outside firms, internal GIS teams need to learn these web technologies and prioritize usability over feature count in order to deliver responsive applications. The document advocates an iterative development process with a focus on performance and usability testing.
The document discusses performance testing plans for a website. It proposes using synthetic testing from 14 global locations on representative pages every 5 minutes. A new plan tests from last-mile locations on desktop and mobile with 20 daily samples. Custom timing marks, sent to an analytics service, will measure real user experience. Synthetic testing will also run in continuous integration to catch performance regressions early.
This document discusses mocking REST APIs with WireMock. It provides an overview of why and when APIs are mocked, how to install and use WireMock to mock APIs, and demos of running WireMock both standalone and integrated with test code. Key points covered include simulating unavailable or error-prone services for testing, installing WireMock via Maven or Gradle, configuring WireMock stubs and mappings, and viewing the recorded request history and mappings in the WireMock admin interface.
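To give a flavor of the stubbing API the demos walk through, here is a minimal Java sketch; the port, endpoint paths, and response bodies are illustrative assumptions, not taken from the deck:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

public class StubExample {
    public static void main(String[] args) {
        // Start WireMock embedded on a local port (standalone mode would
        // use the CLI jar instead). Port 8089 is an arbitrary choice.
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Happy-path stub: GET /api/users returns a canned JSON body.
        server.stubFor(get(urlEqualTo("/api/users"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("[{\"id\":1,\"name\":\"Ada\"}]")));

        // Failure stub: simulate an error-prone downstream service.
        server.stubFor(get(urlEqualTo("/api/flaky"))
                .willReturn(aResponse().withStatus(503)));

        // Point the system under test at http://localhost:8089, then
        // call server.stop() when the test is done.
    }
}
```

Pointing the system under test at the stub server instead of the real service is what lets the tests exercise the unavailable and error-prone cases on demand.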
This document provides an overview of installing and using ASP.NET AJAX including key controls like the ScriptManager, UpdatePanel, UpdateProgress, and Timer. It discusses the ASP.NET AJAX architecture, client life-cycle events, extending JavaScript, debugging techniques, and using web services.
This document discusses client-side performance optimizations for websites. It begins by explaining how client-side loading accounts for 80-90% of total page load time on average. It then provides an overview of tools for analyzing performance bottlenecks. The document outlines several basic optimization techniques, including reducing HTTP requests, leveraging browser caching through headers and cache busters, optimizing images, prioritizing critical resources, and improving JavaScript and CSS performance. It emphasizes the importance of measuring performance before and after making changes.
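Several of these techniques boil down to getting caching headers right. Below is a minimal sketch, assuming a Java servlet backend and a build step that embeds a content hash in asset filenames as the cache buster; the class, paths, and header values are illustrative:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Serves versioned static assets, e.g. /static/app.9f2c1.js. The hash in
// the filename acts as the cache buster: a new build changes the URL, so
// the old asset can safely be cached "forever".
public class StaticAssetServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Far-future caching is safe because content changes produce new URLs.
        resp.setHeader("Cache-Control", "public, max-age=31536000, immutable");
        resp.setContentType("application/javascript");
        resp.getWriter().write("/* bundled, minified JS would be served here */");
    }
}
```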
Quickling and PageCache are two software abstractions at Facebook that improve front-end performance. Quickling makes the site faster by using AJAX to transparently load pages without reloading common elements. PageCache caches user-visited pages in the browser to improve latency and reduce server load when pages are revisited. Both have significantly reduced Facebook's page rendering times and improved the user experience.
Understanding what happens on the client side is not easy. When a user visits your website, you need to know their location, device, connection speed, browser, and which page they are visiting. After gathering all this data, you also need to understand what happened: How long did it take for them to see the page? How long until the page was fully loaded and working? If there was a JavaScript error, what was it, and why can't you replicate it? Most users don't have powerful machines with fast connections. In this talk we analyze the tools you can use to profile the client, compare synthetic and RUM analysis, and show how you can improve performance on the client side, with basic and more advanced tips backed by real examples.
This document discusses using SpecFlow and WatiN for web automation testing with a behavior-driven development (BDD) approach. It provides an overview of BDD, demonstrates SpecFlow features like step arguments and scenario outlines, and recommends BDD patterns like page object model and driver pattern. The framework emphasizes clear specifications over tools, and supports collaboration between technical and non-technical teams through its use of plain language to describe features and scenarios.
Behavior Driven Development (BDD) focuses on defining expected application behaviors through user stories. Cucumber and Capybara are tools that support BDD. Cucumber allows writing tests in plain language and organizing them into feature files. Capybara is a framework that simulates user interactions and uses a domain-specific language to write tests. It supports drivers like Selenium to test web applications with JavaScript.
This document discusses behavior-driven development (BDD) and automation testing using Cucumber. It begins with an example of a Cucumber scenario for logging into a system. It then demonstrates an automation test case written in Java and discusses how Cucumber executes scenarios. The rest of the document outlines an agenda to discuss BDD, Cucumber automation, developing a Cucumber framework, and the pros and cons of BDD and Cucumber.
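To show the glue between a Gherkin login scenario and Java, here is a minimal step-definition sketch; the step wording and the in-memory "session" are illustrative stand-ins for real browser automation:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Glue code: Cucumber matches each Gherkin step to one of these methods.
// The boolean fields stand in for real automation (e.g. a Selenium driver).
public class LoginSteps {
    private boolean onLoginPage;
    private boolean loggedIn;

    @Given("the user is on the login page")
    public void userIsOnLoginPage() {
        onLoginPage = true; // in a real suite: driver.get(loginUrl)
    }

    @When("the user logs in as {string} with password {string}")
    public void userLogsIn(String username, String password) {
        // In a real suite: fill in the form and submit. The hard-coded
        // credentials here are purely for the self-contained example.
        loggedIn = onLoginPage && "admin".equals(username) && "secret".equals(password);
    }

    @Then("the user sees the dashboard")
    public void userSeesDashboard() {
        if (!loggedIn) {
            throw new AssertionError("Login failed, dashboard not shown");
        }
    }
}
```

The `{string}` placeholders are Cucumber expressions: quoted values in the scenario text are passed into the method as arguments, which is how one step definition serves many data variations.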
This document discusses various methods for automating front-end optimization. It describes how HTML rewriting solutions can optimize HTML through proxies or in-app plugins. It also discusses when certain optimizations are best done by machines versus humans. The document outlines different architectures for front-end optimization solutions, including cloud-based and on-premises options, and considers when each is most appropriate. It emphasizes the importance of testing solutions before deploying and of monitoring performance after deployment.
An approach to capturing web client Real User Measurements from the browser's Navigation Timing object and integrating them with server-side network and HttpServer diagnostic events.
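A minimal sketch of the server-side half of such an integration, using the JDK's built-in com.sun.net.httpserver.HttpServer to receive timing beacons; the /rum path and the payload handling are assumptions for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpServer;

public class RumBeaconServer {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // The browser posts Navigation Timing fields (for example via
        // navigator.sendBeacon) to this endpoint; here we just log them,
        // where they could be correlated with server-side diagnostic events.
        server.createContext("/rum", exchange -> {
            try (InputStream body = exchange.getRequestBody()) {
                String payload = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("RUM beacon: " + payload);
            }
            exchange.sendResponseHeaders(204, -1); // 204 No Content, empty body
            exchange.close();
        });
        server.start();
    }
}
```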
This document discusses testing web services. It defines what web services are and why they are used. It provides examples of web services and how they allow the same functionality to be accessed across different platforms and user interfaces. The document discusses REST, requests, responses, and status codes. It demonstrates how to test web services manually in a browser and with Postman, cURL, and automated testing libraries in Java and Python. The key takeaways are an understanding of web services, how to test them in multiple ways, and increased familiarity with test automation.
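As a flavor of the automated side, here is a minimal Java check using the JDK's java.net.http.HttpClient; the URL and expected status are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebServiceCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Roughly the equivalent of `curl -i https://example.com/api/health`
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Assert on the status code and body, as a real test library would.
        if (response.statusCode() != 200) {
            throw new AssertionError("Expected 200 but got " + response.statusCode());
        }
        System.out.println(response.body());
    }
}
```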
Presentation from the June 28, 2011 National Capital Area Google Technology Users Group on some of Google's efforts to make the web faster.
The document discusses using queues to improve the scalability of PHP applications. It describes how queues allow asynchronous, distributed processing of tasks to improve performance and let applications handle more traffic. Specifically, it promotes using Zend Server's job queue to offload long-running tasks such as payment processing so the frontend can scale independently of backend processing. Examples show building jobs that communicate with the queue to execute such tasks asynchronously.
This document provides an overview of ASP.NET AJAX with Visual Studio 2008, including: 1) Benefits of using ASP.NET AJAX such as asynchronous JavaScript calls that reduce page loads and improve the user experience. 2) Key concepts of ASP.NET AJAX including UpdatePanels, triggers, and client-side JavaScript libraries. 3) Differences between client-centric and server-centric programming models in ASP.NET AJAX.
Do you have a website? Do you have any tests for that site? Even if you have unit tests, integration tests can help you target workflows such as a checkout process. In this presentation I talk about testing any site with Cucumber and Selenium. I show what the tests look like and explain the different ways to run them, from running them locally and building your own Selenium grid to using Sauce Labs as your testing infrastructure.
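A minimal sketch of the Selenium half in Java (the talk layers Cucumber on top of this); the URL and element selectors are illustrative:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class CheckoutSmokeTest {
    public static void main(String[] args) {
        // Local browser; to run on a Selenium grid or Sauce Labs, a
        // RemoteWebDriver pointed at the hub URL would be used instead.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com");
            driver.findElement(By.id("add-to-cart")).click();
            driver.findElement(By.id("checkout")).click();

            String heading = driver.findElement(By.tagName("h1")).getText();
            if (!heading.contains("Checkout")) {
                throw new AssertionError("Did not reach the checkout page");
            }
        } finally {
            driver.quit(); // always release the browser session
        }
    }
}
```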
The document discusses various ways to test the quality and functionality of web applications. It describes testing content, structure, navigation, interfaces, performance, security, compatibility with different configurations, and usability. The goals of testing are to uncover errors in content, functionality, design, and user experience. A variety of techniques are proposed to thoroughly test the various components and aspects of a web application.
The document discusses techniques for performance testing web applications, including staging the testing environment, building test assets, running tests, and analyzing metrics. It describes deploying a testbed, eliminating deployment issues, analyzing client data to develop test scenarios, executing manual and automated tests, and gathering metrics on system performance, databases, and applications. The goal is to identify potential performance issues before load testing at higher user volumes expected by clients.
Are you new to performance testing? These slides are for those who want to explore and learn where and how to start testing application performance. During this web event, our performance testing experts reveal the key pieces and parts of performance testing, including the phases of a test and how HP LoadRunner supports each phase.
In this presentation which was delivered to testers in Manchester, I help would-be performance testers to get started in performance testing. Drawing on my experiences as a performance tester and test manager, I explain the principles of performance testing and highlight some of the pitfalls.
This presentation explains how to get started with performance testing. Visit www.QAInsights.com for more such articles.
The document summarizes a training session on performance testing using LoadRunner. It discusses planning load tests, the components of LoadRunner, creating scripts and scenarios, and enhancing scripts. Key points covered include the purpose of different types of tests, goals for performance testing, the workflow of a load test using LoadRunner, and developing scripts using Virtual User Generator.
The document discusses performance testing, including its goals, importance, types, prerequisites, management approaches, testing cycle, activities, common issues, typical fixes, challenges, and best practices. The key types of performance testing are load, stress, soak/endurance, volume/spike, scalability, and configuration testing. Performance testing aims to assess production readiness, compare platforms/configurations, evaluate against criteria, and discover poor performance. It is important for meeting user expectations and avoiding lost revenue.
The document provides an introduction and overview of performance testing. It discusses what performance testing, tuning, and engineering are and why they are important. It outlines the typical performance test cycle and common types of performance tests. Finally, it discusses some myths about performance testing and gives an overview of common performance testing tools and architectures.
This document provides an overview of performance and load testing basics. It defines key terms like throughput, response time, and tuning, and explains the difference between performance, load, and stress testing. Performance testing evaluates system speed, throughput, and utilization in comparison to other versions or products; load testing exercises the system under heavy load to identify problems; stress testing tries to break the system. Performance testing should occur during the design, development, and deployment phases to ensure the system meets expectations under load. Key transactions, such as high-frequency, mission-critical, read, and update transactions, should be tested. The testing process involves planning, recording test scripts, modifying scripts, executing tests, monitoring tests, and analyzing results.
The document discusses performance optimization at InfoJobs: how they use Scrum for development across six teams, how they monitor real user experience (RUX) to track performance in production, and how the QA team performs load testing to validate performance before new releases go live, generating comparison reports on metrics like page load times and slowest pages.
The document provides a short history of performance engineering, beginning in the 1960s with the introduction of instrumentation tools for mainframe systems and the first studies of human response times. Key developments include the establishment of the performance engineering community in the 1970s, the first commercial performance analysis tools and distributed computing in the late 1970s, and the publication of early books on software performance engineering and applying existing expertise to web performance in the 1990s. The history shows that performance has been an ongoing concern across different computing paradigms, with new challenges arising with each new technology.
This document provides an overview of using LoadRunner to perform load and performance testing. It covers topics such as why performance testing is important, definitions of different types of testing, benchmark design, LoadRunner components, the load testing process, building scripts using the Virtual User Generator, playing back scripts, solving common issues, preparing scripts for load testing, creating load testing scenarios in the LoadRunner Controller, running load tests, and analyzing results.
The document discusses microservice performance. It recommends measuring performance correctly by recording timestamped requests with latency and success/failure data. Latency distributions have heavy tails so percentiles are important to understand. Throughput and latency are related by Little's Law. Latency stacks across services so simulation tools are useful. Amdahl's Law and Universal Scalability Law can help identify optimization targets and forecast scalability. The key is to measure performance correctly to identify potential issues and optimize the right parts of the system.
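For reference, the laws the summary names can be written out in their standard textbook forms (these are the general formulas, not quotations from the slides):

```latex
% Little's Law: average number of in-flight requests equals
% throughput times average latency.
L = \lambda W

% Amdahl's Law: speedup from accelerating the parallelizable
% fraction p of the work by a factor s.
S(s) = \frac{1}{(1 - p) + p/s}

% Universal Scalability Law: relative capacity at N workers, with
% contention coefficient \alpha and coherency coefficient \beta.
C(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)}
```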
The ability to fully automate your results analysis is vital in today’s Continuous Integration and Continuous Deployment era. Agile practices like microservices exacerbate this need, giving you tens of services to test, ten times a day! Analysis by the human eye is impossible - you need automation. But automating the analysis is extremely difficult due to the ever-increasing complexity of load testing processes and timeline-based reports. As such, contemporary load testing tools and services offer excellent ways to present reports but fall short when it comes to the analysis. In this presentation, Andrey Pokhilko (founder of JMeter-plugins.org and Loadosophia) explores how to take automatic result analysis and decision making to a new level. Join us and discuss:
• Why it is tough to fully automate analysis and decision making on test results
• How humans analyze tests in practice - which KPIs they look at
• Which decisions can be made automatically during test execution
• Which facts can be automatically concluded post-test
• Practical results from several months of applying the method
LoadRunner is a flagship load testing product from HP that commands over 70% of the market share. It can simulate thousands of users accessing a website or application simultaneously to test performance under heavy load. LoadRunner uses a three-tier architecture: load generators that simulate users, a controller that manages the test, and monitoring and analysis tools that evaluate performance. It supports many common protocols and can test websites, applications, databases, and other systems.
Carles Roch-Cunill, Test Lead for System Performance at McKesson Medical Imaging Group, shared his expertise in Analyzing Performance Test Data.