Randy Rice presented on lessons learned from user acceptance testing (UAT) on four different projects. The first project involved a new laboratory testing system that had severe performance issues and required three redeployments. The second project with the same company was more successful due to improved testing practices. The third project involved designing many tests based on business scenarios before the system's interface was known. The last project involved a complex legal system where system testing found most defects and UAT involved a simplified walkthrough. Key lessons included not relying solely on UAT, implementing incrementally, and adjusting UAT plans as more is learned.
When Agile is a Quality Game Changer Webinar - Michael Mah, Philip Lew - XBOSoft
Accelerate your Agile success with in-depth research and smarter decisions. Michael Mah of QSM Associates shows you what it takes to find and utilize patterns of successful Agile development in this quarterly XBOSoft webinar.
Modern release management teams pride themselves on setting up a seamless workflow for continuous integration and delivery. However, continuous testing, one of the most critical components of that workflow, is often taken for granted or marginalized without clear ownership, leading to impediments in quality. With the advent of DevOps and the movement to break down silos between developers and operations, it becomes critically important that all members of an IT team, regardless of what tools they use or role they play, understand the essentials of continuous testing.
In this chapter, we will introduce you to the fundamentals of testing: why testing is needed; its limitations, objectives and purpose; the principles behind testing; the process that testers follow; and some of the psychological factors that testers must consider in their work. By reading this chapter you'll gain an understanding of the fundamentals of testing and be able to describe those fundamentals.
For a beginner, this is a good quality pictorial representation of DevOps and DevOps Center of Excellence.
Opex Software focuses on consulting, implementation, and development of DevOps tools and platforms, and has helped both small and large data centers. This presentation covers Continuous Integration and Continuous Delivery at a high level. For detailed presentations and flows, please ping us.
Thanks again, Enjoy!
This document provides an overview of DevOps concepts and adoption. It discusses adopting DevOps through a focus on people, processes, and technology. It outlines implementing continuous delivery pipelines and integrating systems of engagement with systems of record. The document proposes applying Lean principles to software delivery to create continuous feedback loops with customers.
Everybody loves a good love story. And even more so one that mixes in pop stars and the music business! If you have an interest in hearing about how the benefits of DevOps can help unblock the delivery of IT innovation in your business then you’ll want to hear this story.
DOES14 - Stephen Elliot - IDC - Delivering DevOps Business Metrics that Matter - Gene Kim
Stephen Elliot, VP of IDC
DevOps is the modern way to deploy new IT capabilities that drive and deliver business results. This session will dive into the key metrics that large companies are using to gauge the success and measure results utilizing the DevOps discipline. The session will answer the following questions:
What are some of the key technology and business metrics that large organizations are using to measure and manage DevOps projects?
What are the critical success factors required when communicating with the business on DevOps-delivered projects?
What role do the security and compliance teams play in DevOps, and related metrics?
This document discusses continuous testing in an agile environment. It defines continuous testing as testing throughout the development process to identify bugs early. It explains that continuous testing helps control side effects, avoid defects, support multiple environments, get fast results, anticipate risks, and create reliable processes. The document provides an overview of how continuous testing works, including test environments, data management, automatic deployment, and test automation. It also discusses creating a continuous testing project, the agile test process, and how to implement effective continuous testing to improve quality and business value.
Put Agile to the Test: A Case Study for Test Agility on a Large IT Project - TechWell
Agile practices, although applicable to a variety of situations, are most commonly applied to IT projects, generally for software development. Can you apply agile methods to just part of a software implementation project? Todd Jones presents this case study where agile techniques were applied to the testing phase of a multiyear, multimillion-dollar IT program that included replacing a legacy system, new software development, creation of a new enterprise data model and document management solution, and complex financial balancing. After briefly describing the challenges faced by the organization, Todd covers each guiding principle from the Agile Manifesto, describes the practical approach he used to implement agile techniques, and shows information radiators that were used to track velocity and demonstrate progress to program stakeholders. Todd outlines the key benefits this brought to the business as well as lessons the team learned along the way. Leave with a greater appreciation for the testing complexity of a large-scale software implementation, a better understanding of agile practices and lessons learned, and practical techniques and tracking tools you can apply to your current project.
The document discusses continuous delivery and identifies several common antipatterns and issues:
1) Deploying software manually without automation leads to unpredictable releases and difficulties.
2) Deploying only after development is complete misses opportunities for early testing in production-like environments.
3) Manual configuration management of production environments results in failures and differences between environments.
The document proposes automating processes, keeping everything in version control, and adopting a deployment pipeline to improve feedback and enable frequent, reliable releases. Blue-green deployments and canary releasing are also introduced to reduce risk when deploying new versions. Finally, common issues like infrequent deployments, poor application quality, poorly managed continuous integration, and poor configuration management are also examined.
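To make the canary-releasing idea concrete, here is a minimal Java sketch, assuming a hypothetical pair of backend URLs and an invented 5% starting split; in practice this routing usually lives in a load balancer or service mesh rather than in application code.

import java.util.concurrent.ThreadLocalRandom;

public class CanaryRouter {
    private final String stableBackend = "http://app-v1.internal"; // current release (hypothetical)
    private final String canaryBackend = "http://app-v2.internal"; // new release (hypothetical)
    private volatile int canaryPercent = 5; // start small; raise as confidence grows

    // Pick a backend for one incoming request.
    public String chooseBackend() {
        return ThreadLocalRandom.current().nextInt(100) < canaryPercent
                ? canaryBackend
                : stableBackend;
    }

    public void promote()  { canaryPercent = 100; } // canary healthy: shift all traffic
    public void rollBack() { canaryPercent = 0; }   // regression observed: drain the canary
}

The point of the pattern is the cheap exit: rollBack() removes the new version from service instantly, which is the risk reduction described above.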
This document discusses DevOps and continuous testing. It begins with defining DevOps as a process that increases communication between development and operations teams to automate and speed up software delivery. It then covers the benefits of DevOps like faster release cycles and time to market. Several case studies are presented showing how companies used DevOps and continuous testing to reduce testing time, increase coverage, and lower costs. The document concludes with a demo and opportunities for questions.
Defect Analysis and Prevention Methods - Deep Sharma
The document discusses defect analysis and prevention. It defines key terms like errors, defects, and failures. It describes the defect analysis procedure which includes forming a causal analysis team to identify root causes of defects so they can be prevented. The team proposes actions, while an action team implements solutions. Data on defect types and trends is analyzed to prioritize issues. Tools like fishbone diagrams may be used to sort contributing factors. The goal is to systematically eliminate common causes of defects.
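As one concrete illustration of prioritizing defect data, the sketch below applies the Pareto idea: rank defect categories by frequency so the causal analysis team can focus on the few categories that produce most of the defects. The categories and counts are invented for the example.

import java.util.Comparator;
import java.util.Map;

public class ParetoDefects {
    public static void main(String[] args) {
        // Hypothetical defect counts by root-cause category.
        Map<String, Integer> byCategory = Map.of(
                "Requirements misunderstanding", 42,
                "Coding error", 31,
                "Environment/configuration", 9,
                "Test data", 5,
                "Documentation", 3);

        int total = byCategory.values().stream().mapToInt(Integer::intValue).sum();
        double cumulative = 0;
        // Print categories largest-first with a running cumulative percentage.
        for (var e : byCategory.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                .toList()) {
            cumulative += 100.0 * e.getValue() / total;
            System.out.printf("%-32s %3d  (cumulative %.0f%%)%n",
                    e.getKey(), e.getValue(), cumulative);
        }
    }
}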
Root Cause Analysis - Methods and Best Practice - Medgate Inc.
A critical part of any safety management system comes after incidents occur. Effective incident investigation including root cause analysis can provide many answers for your organization regarding why an incident or event has occurred. Even if your safety department excels at completing investigations and undertaking corrective actions, your SMS will not be effective if you fail to identify root causes quickly and accurately.
Safety teams that make Root Cause Analysis central to their day-to-day activities will significantly improve their ability to better the safety of the workplace and ensure that incidents do not recur.
In these slides, Medgate Safety expert Shannon Crinklaw discusses Root Cause Analysis, outlining its potential impact, covering different analysis methodologies and outlining best practices.
To view the accompanying webinar, go to http://bit.ly/X518oY where you will learn:
What types of incidents are most common.
Mistakes that organizations should avoid when carrying out root cause analysis.
Different models of root cause analysis, such as Five Why and Cause-and-Effect diagrams.
The long term benefits of root cause analysis efforts.
Technical debt is a metaphor used to describe problems caused by taking shortcuts or choosing expedient solutions over proper solutions to implement features or fix bugs. It is similar to financial debt in that it incurs ongoing costs over time. Symptoms of technical debt include decreased team speed, missed deadlines, defects, and stress. Technical debt can be inadvertent from inexperience or deliberate to meet deadlines but must be managed to avoid spiraling costs. The goal is to consciously manage technical debt through short, medium, and long term action plans to continuously improve while paying down the interest so debt does not grow uncontrolled.
TestPRO is an independent testing service provider that can fulfill the majority of test delivery work that would otherwise be carried out on-site, while delivering the cost savings that only a dedicated test center can provide. We will prepare and execute the tests and report all results to you in a timely manner.
Are you continually testing software the same old way? Do you need fresh ideas? Are your hum-drum tests not finding enough defects? Are your tests too slow for today’s fast-paced lifecycles? Then this workshop will help you spice things up, improve your testing, and get things done. Rob Sabourin outlines more than 150 different ways to test your software to quickly and efficiently expose relevant problems. Each is illustrated with custom artwork and explained with real world examples. Testing is examined from several perspectives—agile and otherwise. What objectives should our testing focus on? How can we design powerful tests? When does it make sense to explore different risks? Can tests be reused, repurposed, or recycled? How does automation fit in? When does checking make sense? Which static techniques are available? What about non-functional testing? “Test evangelist” Rob provides a lively, entertaining, and informative view of software testing.
7 Practices to Expand Performance and Effective Collaboration in DevOps - Dynatrace
When apps fail, whose fault is it? In today’s DevOps world, every stakeholder in the app delivery chain is accountable for various aspects of performance, scalability, and availability.
Mark Tomlinson, performance engineering veteran and founder of the popular PerfBytes podcast, and Andreas Grabner, Dynatrace performance advocate, share seven practices to help you expand performance and effective collaboration into your DevOps team, including:
• Why DevOps means you need to check your ego at the door
• What metrics each role across teams can focus on to build quality and performance
• How to use, measure and report these metrics
• What performance means for different stakeholders and the resources required
• Examples of how increased collaboration and responsibility can improve performance
Continuous Testing & DevOps with @petemar5hall - Peter Marshall
This document discusses testing software in high frequency delivery environments using continuous testing and DevOps practices. It outlines how continuous testing is not just about test automation, but also includes automated management of environments, application feedback through monitoring, and engaging in XP practices. DevOps helps by automating building, testing, and deployment to provide consistency and tools for teams. Characteristics of high frequency delivery environments include automating infrastructure, testing, and deployment to reduce errors and allow for smaller, more frequent deliveries. This allows for a single view of quality and faster restore times when issues arise.
User Acceptance Testing: Make the User a Part of the Team - TechWell
Adding user acceptance testing (UAT) to your testing lifecycle can increase the probability of finding defects before software is released. The challenge is to fully engage users and assist them in becoming effective testers. Help achieve this goal by involving users early and setting realistic expectations. Showing how users add value and taking them through the UAT process strengthens their ability and commitment. Conducting user acceptance testing sessions as software functionality becomes available helps to build confidence and capability—and find defects earlier. Susan Bradley shares a five-step process that you can use in your organization to conduct user acceptance testing. Learn to conduct training, set up daily testing expectations, assign test cases to users, create a shared information site for both test case management and feedback documentation, conduct a review of noted issues with all interested parties, and participate in a retrospective regarding the UAT process to improve the process for next time.
Designing for Testability: Differentiator in a Competitive Market - TechWell
In today’s cost conscious marketplace, solution providers gain advantage over competitors when they deliver measurable benefits to customers and partners. Systems of even small scope often involve distributed hardware/software elements with varying execution parameters. Testing organizations often deal with a complex set of testing scenarios, increased risk for regression defects, and competing demands on limited system resources for a continuous comprehensive test program. Learn how designing a testable system architecture addresses these challenges. David Campbell offers practical guidance on the process to make testability a key discriminator from the earliest phases of product definition and design. Learn approaches that consistently deliver for high achieving organizations, and how these approaches impact schedule and architecture performance. Gain insight on how to select and customize techniques that are appropriate for your organization’s size, culture, and market.
Congruent Coaching: An Interactive Exploration - TechWell
We have opportunities to coach people all the time. Much of what we see as coaching is actually undercover training. Real coaching is richer—offering support while explaining options. In this interactive session, Johanna Rothman invites you to explore how to coach, regardless of your position in the organization. Teaching is just one option for coaching. You have many other options, depending on your coaching stance. You may select a counselor’s stance if you are managing up or a partner’s stance if you are a peer. You might even select a reflective observer’s stance or a technical advisor’s stance, depending on the situation. We will explore what to do when you see opportunities for coaching but you haven’t been asked to coach. Bring your coaching concerns, whether you are coaching onsite, or coaching at a distance, coaching one-on-one, or coaching teams. Let’s learn and build our coaching skills together.
It’s one week after your product’s launch, and everyone is happy. After all, for the first time in years, your product development exceeded expectations. Coding was completed on time with very few defects. Suddenly, the report of a major usability and security flaw destroys the euphoria and sends everything into chaos. Unfortunately, this is not uncommon in our industry. So, how can we mitigate such things from happening? As he shares stories about the complex domain of product delivery, Ray Arell introduces a framework with associated emergent practices that enable you to better guide your product to success. He presents an overview of the Cynefin model, a description of complicated and complex systems, and discusses how to use it to establish an effective testing strategy. Ray describes how to identify key patterns of product usage to establish a robust defect-prevention system that reduces product development costs. Lastly, Ray describes how to interview customers to identify key quality expectations, ensuring that your testing focuses on producing the highest value for your customers.
CAN I USE THIS?—A Mnemonic for Usability Testing - TechWell
Often, usability testing does not receive the attention it deserves. A common argument is that usability issues are merely “training issues” and can be dealt with through the product's training or user manuals. If your product is only for internal staff use, this may be a valid response. However, the market now demands easy-to-use products—whether your users are internal or external. David Greenlees shares a tool he has developed to generate test ideas for usability testing. His mnemonic—CAN I USE THIS?—provides a solid starting point for testing any product. C for Comparable Product, A for Accessibility, N for Navigation … David shares how he has used this mnemonic on past projects while the training argument took place around him, and how they realized product improvements and greater user acceptance. Learn how you can quickly and effectively use this mnemonic on any project so you can give usability testing the attention it deserves.
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
Test reporting is something few testers take time to practice. Nevertheless, it's a fundamental skill—vital for your professional credibility and your own self management. Many people think management judges testing by bugs found or test cases executed. Actually, testing is judged by the story it tells. If your story sounds good, you win. A test report is the story of your testing. It begins as the story we tell ourselves, each moment we are testing, about what we are doing and why. We use the test story within our own minds, to guide our work. James Bach explores the skill of test reporting and examines some of the many different forms a test report might take. As in other areas of testing, context drives good reporting. Sometimes we make an oral report; occasionally we need to write it down. Join James for an in-depth look at the art of the reporting.
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master. James Bach shares the specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and increase your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join James for this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
Improving the Mobile Application User Experience (UX) - TechWell
If users can’t figure out how to use your mobile applications and what’s in it for them, they’re gone. Usability and UX are key factors in keeping users satisfied, so understanding, measuring, testing, and improving these factors are critical to the success of today’s mobile applications. However, sometimes these concepts can be confusing—not only differentiating them but also defining and understanding them. Philip Lew explores the meanings of usability and UX, discusses how they are related, and then examines their importance for today’s mobile applications. After a brief discussion of how the meanings of usability and user experience depend on the context of your product, Phil defines measurements of usability and user experience that you can use right away to quantify these subjective attributes. He crystallizes abstract definitions into concepts that can be measured, with metrics to evaluate and improve your product, and provides numerous examples demonstrating how to improve your mobile app.
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.
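For readers unfamiliar with two of the metrics named above, the small sketch below uses their common textbook definitions; these are standard formulas, not necessarily the exact variants Rick presents, and the numbers are invented.

public class TestMetrics {
    // Defect removal efficiency: share of all known defects found before release.
    static double defectRemovalEfficiency(int foundBeforeRelease, int foundAfterRelease) {
        int total = foundBeforeRelease + foundAfterRelease;
        return total == 0 ? 1.0 : (double) foundBeforeRelease / total;
    }

    // Defect density: defects per thousand lines of code (KLOC).
    static double defectDensity(int defects, int linesOfCode) {
        return defects / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        // Example: 180 defects found in test, 20 escaped to production, 50,000 LOC.
        System.out.printf("DRE = %.0f%%%n", 100 * defectRemovalEfficiency(180, 20));    // 90%
        System.out.printf("Density = %.1f defects/KLOC%n", defectDensity(200, 50_000)); // 4.0
    }
}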
A Guide to Cross-Browser Functional Testing - TechWell
The term “cross-browser functional testing” usually means some variation of automated or manual testing of a web-based application on different mobile or desktop browsers. The aim of the testing might be to ensure that the application under test behaves or looks the same way on different browsers. Another meaning could be to verify that the application works with two or more browsers simultaneously. Malcolm Isaacs examines these different interpretations of cross-browser functional testing and clarifies what each means in practice. Malcolm explains some of the many challenges of writing and executing portable and maintainable automated test scripts which are at the heart of cross-browser testing. Learn some practical approaches to overcome these challenges, and take back manual and automated testing techniques to validate the consistency and accuracy of your applications—whatever browser they run in.
Testing in the Wild: Practices for Testing Beyond the Lab - TechWell
The stakes in the mobile app marketplace are very high, with thousands of apps vying for the limited space on users’ mobile devices. Organizations must ensure that their apps work as intended from day one and to do that must implement a successful mobile testing strategy leveraging in-the-wild testing. Matt Johnston describes how to create and implement a tailored in-the-wild testing strategy to boost app success and improve user experience. Matt provides strategies, tips, and real-world examples and advice on topics ranging from fragmentation issues, to the different problems inherent in web and mobile apps, to deciding what devices you must test vs. those you should test. After hearing real-world examples of how testing in the wild affects app quality, leave with an understanding of and actionable information about how to launch apps that perform as intended in the hands of end-users—from day one.
Extreme Automation: Software Quality for the Next Generation Enterprise - TechWell
Software runs the business. The modern testing organization aspires to be a change agent and an inspiration for quality throughout the entire lifecycle. To be a change agent, the testing organization must have the right people and skill sets, the right processes in place to ensure proper governance, and the right technology to aid in the delivery of software in support of the business line. Traditionally, testing organizations have focused on the people and process aspect of solving quality issues. With the ever-increasing complexity of the software needed to run the enterprise, testing professionals must adopt technology to help solve some of the most challenging quality issues ever. In short, testing organizations must make the move to extreme automation and become proficient with modern tooling and its benefits. Theresa Lanowitz focuses on new and emerging technologies—proven and successful—to add to the workbench of the test professional.
During the past decade, test engineers have become experts in browser compatibility testing. Just when we thought everything was under control, along come native mobile applications that need to run across platforms far more diverse than the desktop browser landscape has ever been. The variety of OSs, screen sizes, and hardware technology combine to create hundreds of configurations that need some testing. Manual testing across so many deployment targets will drive anyone crazy. Stu Stern looks at the biggest challenges in mobile testing: functional, platform, display, and device compatibility testing, and explores how you can use MonkeyTalk, a free open source tool, to create test suites that can be easily run across today’s menagerie of mobile devices. MonkeyTalk can help you automate functional interactive tests for native, mobile, and hybrid iOS and Android apps—everything from simple "smoke tests" to sophisticated data-driven test suites.
Today’s test organizations often have sizable investments in test automation. Unfortunately, running and maintaining these test suites represents another sizable investment. All too often this hard work is abandoned and teams revert to a more costly, but familiar, manual approach. Jared Richardson says a more practical solution is to integrate test automation suites with continuous integration (CI). A CI system monitors your source code and compiles the system after every change. Once the build is complete, test suites are automatically run. This approach of ongoing test execution provides your developers rapid feedback and keeps your tests in constant use. It also frees up your testers for more involved exploratory testing. Jared shows how to set up an open source continuous integration tool and explains the best way to introduce this technique to your developers and testers. The concepts are simple when presented properly and provide solid benefits to all areas of an organization.
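The CI cycle Jared describes is conceptually simple: watch version control, rebuild on every change, run the tests, and report. The toy sketch below illustrates that loop, assuming a local Git working copy and a Maven build; a real CI server (Jenkins, CruiseControl, and similar tools) adds queuing, history, and notifications on top of the same cycle.

import java.io.IOException;

public class MiniCiLoop {
    // Run a command in the working copy, streaming its output; return the exit code.
    static int run(String... cmd) throws IOException, InterruptedException {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    // Return the current HEAD revision of the local repository.
    static String headRevision() throws IOException, InterruptedException {
        Process p = new ProcessBuilder("git", "rev-parse", "HEAD").start();
        try (var in = p.getInputStream()) {
            String rev = new String(in.readAllBytes()).trim();
            p.waitFor();
            return rev;
        }
    }

    public static void main(String[] args) throws Exception {
        String lastBuilt = "";
        while (true) {
            run("git", "pull", "--quiet");                 // fetch the latest changes
            String head = headRevision();
            if (!head.equals(lastBuilt)) {                 // new commit: build and test
                int build = run("mvn", "-q", "compile");
                int tests = (build == 0) ? run("mvn", "-q", "test") : -1;
                System.out.printf("%s -> build %s, tests %s%n", head,
                        build == 0 ? "OK" : "FAILED",
                        tests == 0 ? "PASSED" : "FAILED");
                lastBuilt = head;
            }
            Thread.sleep(60_000);                          // poll once a minute
        }
    }
}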
This is a case study on conducting user acceptance testing (UAT) of a complex B2E software application. It involved testing several critical HR and payroll modules.
Strategy vs. Tactical Testing: Actions for Today, Plans for Tomorrow - Eggplant
In his STAREAST Virtual+ presentation, Chuck Schneider from Cerner Corporation shared his 6 pillars for strategic planning in testing and offered guidance to navigate the necessary pivot towards tactical execution when faced with a survival situation. Chuck provided a clear, 4-step guide on how to quickly develop and implement a tactical testing plan to avoid the pitfalls of a delayed response. In this presentation you will discover how to harness your strengths, achieve focus, and deliver results in times of incredible change.
The document discusses various stages of testing in the software development lifecycle according to the V-model. It describes component testing as the lowest level of testing done in isolation on individual software modules. An overview of the component testing process is provided, including planning, specification, execution, recording, and completion checking stages. Black box and white box test design techniques for specifying test cases at the component level are also outlined.
- Automating performance tests through continuous integration can provide direct feedback on performance changes after code releases and infrastructure changes. It allows performance issues to be detected and addressed earlier.
- Key best practices include starting with a single important test scenario, focusing on robustness over realism, visualizing trend data over time, and analyzing results to update thresholds and catch regressions.
- The goal is to continuously monitor performance through the pipeline and in production to better understand impacts of changes and flag any performance issues for further investigation. Automated tests complement but do not replace thorough acceptance testing.
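A minimal sketch of such a gate, assuming a hypothetical staging endpoint and an invented 800 ms threshold: time one important transaction and fail the pipeline stage on regression. Persisting the timing for trend charts and periodically revisiting the threshold would happen around this check.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PerfGate {
    static final long THRESHOLD_MS = 800; // revised as trend data accumulates

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://staging.example.com/checkout")) // hypothetical endpoint
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("status=%d elapsed=%dms threshold=%dms%n",
                response.statusCode(), elapsedMs, THRESHOLD_MS);
        if (response.statusCode() != 200 || elapsedMs > THRESHOLD_MS) {
            System.exit(1); // non-zero exit fails the CI stage
        }
    }
}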
This document discusses acceptance testing, which is formal testing conducted by end users to determine if a system meets requirements and business processes before it is accepted. The document outlines what acceptance testing is, different types including user acceptance testing and operational acceptance testing, common application areas, how it fits into software development lifecycles, challenges, and guidelines for success. It also briefly discusses outsourcing acceptance testing.
DevOpsDays Houston 2019 - Lee Barnes - Effective Test Automation in DevOps - ... - DevOpsDays Houston
High-performing DevOps teams point to effective test automation as a key to their success. This talk delivers the key automation practices required to assess the risk of moving builds through the pipeline, including balancing test scope and risk, test environment and data management, and continuous improvement.
The document discusses effective test automation in DevOps. It begins with an introduction of the speaker and an overview of the topics to be covered. These include test automation in DevOps, common obstacles to automation success, and the pillars of effective test automation regarding scope, approach, and test environment and data management. The document emphasizes that continuous testing requires reliable automated tests, stopping production when tests fail, and developing in small batches. It also outlines challenges around test environments and data availability hindering automation goals.
10-3 Clinical Informatics System Selection & Implementation - Corinn Pope
Section ten, module three of the clinical informatics course discusses the information system lifecycle. In this slide deck, we'll cover how to pick a clinical information system that works best for you. Also included are three free practice questions. If you would like more information or resources, be sure to check out our site at http://www.informaticspro.com.
The document discusses several approaches to system development including the waterfall model, prototyping model, incremental model, and spiral model. The waterfall model involves sequential phases from requirements analysis to maintenance. The prototyping model develops initial prototypes to refine user requirements, while the incremental model delivers software in iterations. The spiral model combines elements of waterfall and prototyping, with risk analysis and evaluation at each phase.
The document outlines the key steps in creating a functional testing strategy:
1. Understanding system requirements to identify business processes, data, and security needs.
2. Identifying test scenarios to describe specific business processes to test.
3. Defining test objectives to ensure the system's functionality, data accuracy, and security.
Quality Assurance in Modern Software Development - Zahra Sadeghi
This document discusses quality assurance in modern software development. It begins by providing resources on the topic and outlining the agenda. It then reviews basic concepts of software, quality, and the differences between quality assurance and quality control. It introduces several quality models including McCall's quality model and discusses important factors in software quality. Finally, it covers quality assurance methodology using PDCA, quality management tools including Ishikawa diagrams and Pareto charts, and software quality testing. The document provides a comprehensive overview of key aspects of quality assurance in software development.
User expectations have changed over the last decade. Customers today expect access to their applications and data from all devices (mobile, laptop, desktop, tablet, etc.) with similar performance from any of those devices at all times of the day. In a world of growing complexity, where architects and application designers depend on third-party providers to deliver part (or at times all) of the application, how does one ensure consistent delivery of performance? This presentation provides a view of some of the challenges involved and how to avoid costly mistakes.
The document discusses software quality assurance and defines quality as meeting customer requirements within agreed timescales and costs, and providing customer satisfaction. It discusses standard definitions of software quality, views of quality, and quality criteria. Large software projects often fail due to quality problems. Software quality engineering aims to meet quality expectations through validation and verification activities. Its main tasks include quality planning, execution of quality assurance activities like testing, and measurement and analysis. A quality engineering process manages these activities to achieve preset quality goals.
How to Migrate Drug Safety and Pharmacovigilance Data Cost-Effectively and wi... - Perficient
This document discusses challenges with data migration projects and provides an overview of a solution called Accel-Migrate. It notes that the success rate for the data migration portion of projects is typically between 16-60% due to issues like poorly defined scope and inadequate testing. Accel-Migrate aims to address these issues through an assessment, automated testing that verifies 100% of migrated data, validation inclusive of evidence for audits, and process reengineering support. The methodology employs pre-migration testing, configuration of migration software, and parallel process reengineering to allow for simultaneous technical and process work.
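Accel-Migrate's internals are not described here, but the general shape of automated migration verification can be sketched: reconcile source and target systems, at minimum table by table. The JDBC snippet below (all connection details and table names hypothetical) compares row counts as a first-pass check; verifying "100% of migrated data" would extend this to per-row content comparison, for example via checksums.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class MigrationCountCheck {
    // SELECT COUNT(*) for one table; table names come only from the fixed list below.
    static long countRows(Connection c, String table) throws SQLException {
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection source = DriverManager.getConnection(
                     "jdbc:oracle:thin:@legacy-host:1521/safety", "reader", "secret");
             Connection target = DriverManager.getConnection(
                     "jdbc:postgresql://new-host/safety", "reader", "secret")) {
            boolean ok = true;
            for (String table : List.of("cases", "products", "adverse_events")) {
                long src = countRows(source, table);
                long dst = countRows(target, table);
                System.out.printf("%-15s source=%d target=%d %s%n",
                        table, src, dst, src == dst ? "OK" : "MISMATCH");
                ok &= (src == dst);
            }
            if (!ok) System.exit(1); // fail the verification step; output doubles as audit evidence
        }
    }
}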
Learn how to establish a greater sense of confidence in your release cycle, along with the practices and processes to create a high-performing engineering culture within your team.
- The document outlines Polarion's test management software capabilities including creating and managing test cases, defects, requirements and specifications with Polarion LiveDocs. It allows defining and running test runs with the Polarion Testing Framework.
- It discusses how Polarion can help integrate requirements, testing and defect management and manage activities with all stakeholders.
- The presentation then demonstrates Polarion's abilities like requirements and test traceability, test planning and execution, impact analysis and reporting across projects.
The document discusses the Unified Process (UP) methodology for software development. It describes the key aspects of UP including iterative development with timeboxed iterations, four phases (inception, elaboration, construction, transition), architecture-centric and risk-driven approach, and nine core workflows (business modeling, requirements, design, implementation, test, deployment, project management, configuration management, environment). The document provides details on each of these aspects of UP and best practices for its implementation on a software project.
Software testing is the process of executing a software system to determine if it matches its specifications and operates as intended. Beta testing involves customers testing software for free, which provides test cases that represent real customer usage and helps determine what is most important to customers. Cutting testing costs can increase other costs like expensive customer support and loss of reliable customers.
Isabel Evans stopped drawing and painting after being told she was not very good at it, which led to a loss of confidence in her creative and professional abilities. However, she realized that attempting creative activities is important for cognitive and emotional development, and that making mistakes and learning from failures allows for growth. By reengaging with failure through art and with support from others, Isabel was able to regain confidence in her abilities and reboot her career. The document discusses different perspectives on failure and the importance of learning from mistakes.
Instill a DevOps Testing Culture in Your Team and Organization - TechWell
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to assess your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build Architecture - TechWell
This document summarizes a half-day tutorial on test design for fully automated build architectures presented by Melissa Benua of mParticle at STAREAST 2018. The tutorial covered guiding principles for test design including prioritizing important and reliable tests, structuring automated pipelines around components, packages, and releases, and monitoring test results through code coverage, flaky test handling, and logging versus counters. It also included exercises mapping test cases to functional boundaries and categories of tests to pipeline stages.
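One concrete form of the flaky test handling mentioned above is a retry rule. The JUnit 4 sketch below reruns a failing test a fixed number of times before reporting failure; because retries mask flakiness rather than cure it, each failed attempt is logged so repeat offenders can be quarantined. This is a generic pattern, not necessarily the mechanism Melissa presented.

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class RetryRule implements TestRule {
    private final int maxAttempts;

    public RetryRule(int maxAttempts) { this.maxAttempts = maxAttempts; }

    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Throwable last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        base.evaluate(); // run the test body
                        return;          // passed
                    } catch (Throwable t) {
                        last = t;        // remember the failure and log the attempt
                        System.err.println(description.getDisplayName()
                                + ": attempt " + attempt + " failed");
                    }
                }
                throw last; // all attempts failed: surface the real failure
            }
        };
    }
}

A test class opts in with a field such as: @Rule public RetryRule retry = new RetryRule(3);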
System-Level Test Automation: Ensuring a Good Start - TechWell
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
Build Your Mobile App Quality and Test Strategy - TechWell
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Testing Transformation: The Art and Science for Success - TechWell
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new technologies. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we put the tests to the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
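For readers new to the tooling, here is a minimal Cucumber-JVM sketch of the idea: the Gherkin scenario is the business-readable requirement, and Java step definitions bind each line of it to the system under test. Everything below (the feature text, AccountService, and so on) is invented for illustration.

// login.feature, written with business stakeholders:
//   Scenario: Registered user signs in
//     Given a registered user "alice" with password "pw1"
//     When "alice" signs in with password "pw1"
//     Then the sign-in succeeds

import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class SignInSteps {
    // Tiny in-memory stand-in for the real system under test.
    static class AccountService {
        private final Map<String, String> users = new HashMap<>();
        void register(String user, String pw) { users.put(user, pw); }
        boolean signIn(String user, String pw) { return pw.equals(users.get(user)); }
    }

    private final AccountService service = new AccountService();
    private boolean result;

    @Given("a registered user {string} with password {string}")
    public void aRegisteredUser(String user, String pw) {
        service.register(user, pw);
    }

    @When("{string} signs in with password {string}")
    public void signsIn(String user, String pw) {
        result = service.signIn(user, pw);
    }

    @Then("the sign-in succeeds")
    public void signInSucceeds() {
        assertTrue("expected a successful sign-in", result);
    }
}

When requirements change, the feature file changes first, and a failing step immediately shows both testers and developers which expectation moved.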
Develop WebDriver Automated Tests—and Keep Your Sanity - TechWell
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
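One stability technique the tutorial covers is coping with AJAX delays through explicit waits rather than fixed sleeps. A minimal Java/WebDriver sketch, with the page URL and locators invented for the example:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SearchTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://app.example.com/search"); // hypothetical page
            driver.findElement(By.id("query")).sendKeys("selenium");
            driver.findElement(By.id("go")).click();

            // Explicit wait: poll up to 10 seconds for the AJAX-loaded results
            // to become visible, instead of sleeping a fixed interval.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            String first = wait.until(ExpectedConditions
                    .visibilityOfElementLocated(By.cssSelector("#results li")))
                    .getText();
            System.out.println("first result: " + first);
        } finally {
            driver.quit();
        }
    }
}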
Eliminate Cloud Waste with a Holistic DevOps Strategy - TechWell
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
Transform Test Organizations for the New World of DevOps - TechWell
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, a multidimensional workforce enablement supported by infrastructure changes, redeveloped collaboration models, and more. From his real-world experience, Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves to lead quality in DevOps.
The Fourth Constraint in Project Delivery—Leadership - TechWell
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile Teams - TechWell
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is, people with unique skills. Although a team composed entirely of T-shaped people is ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile Game - TechWell
Metrics don’t have to be a necessary evil. If done right, metrics can help guide us to make better forward-looking decisions, rather than being used simply for managing or monitoring. They can help us identify trade-offs between options for what to do next, rather than serving as punitive or, worse, purely managerial measures. Steve Martin won’t be giving the Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary for you to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home concepts behind characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take back this activity to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams - TechWell
A hierarchy is an organizational network that has a top and a bottom, and where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom and where each person’s value derives from their ability rather than their position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps Implementation - TechWell
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery Process - TechWell
The document summarizes a presentation about including databases in a continuous integration/delivery process. It discusses treating database code like application code by placing it under version control and integrating databases into the DevOps software development pipeline. This allows databases to be built, tested, and released like other software through continuous integration, delivery, and deployment.
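In practice, treating database code like application code usually means versioned migration scripts stored in the repository and applied by a tool at each pipeline stage. A sketch using Flyway as one representative migration tool (connection details hypothetical):

import org.flywaydb.core.Flyway;

public class MigrateDatabase {
    public static void main(String[] args) {
        // Scripts such as V1__create_orders.sql and V2__add_status_column.sql
        // live in src/main/resources/db/migration, versioned with the app code.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost/appdb", "app", "secret")
                .load();
        flyway.migrate(); // applies any pending migrations, in version order
    }
}

Because the same scripts run in every environment, the database that tests run against matches what production will become, which is what lets databases join the continuous integration and delivery flow.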
Mobile Testing: What—and What Not—to Automate - TechWell
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dang says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for Success - TechWell
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent place with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile Transformation - TechWell
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can be tracked by the minute and packages at every stop, and customers now expect the same customer service model to exist for all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling with gaining traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage in your pursuit. Finally, he communicates how to gain buy-in from business partners who have no idea or concern about agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through the approaches to overcoming agile skepticism.
T1
Test Management
5/8/2014 9:45:00 AM
A Funny Thing Happened on the Way to User Acceptance Testing
Presented by:
Randy Rice
Rice Consulting Services, Inc.
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Randy Rice
Rice Consulting Services, Inc.
A leading author, speaker, and consultant with more than thirty years of experience in the field
of software testing and software quality, Randy Rice has worked with organizations worldwide to
improve the quality of their information systems and optimize their testing processes. He is
coauthor (with William E. Perry) of Surviving the Top Ten Challenges of Software Testing and
Testing Dirty Systems. Randy is an officer of the American Software Testing Qualifications
Board (ASTQB). Founder, principal consultant, and trainer at Rice Consulting Services, Randy
can be contacted at riceconsulting.com where he publishes articles, newsletters, and other
content about software testing and software quality. Visit Randy’s blog.
A FUNNY THING HAPPENED ON THE WAY TO THE ACCEPTANCE TEST
RANDALL W. RICE, CTAL
RICE CONSULTING SERVICES, INC.
WWW.RICECONSULTING.COM
THIS PRESENTATION
• The account of four different acceptance tests in three organizations.
• Names have been withheld and the data generalized to protect privacy.
• One project was developed in-house; the other three were vendor-developed systems, so a more traditional UAT approach was taken.
A COMMON PERCEPTION OF UAT
• UAT is often seen as that last golden moment or phase of testing, where
• Users give feedback/acceptance
• Minor problems are identified and fixed
• The project is implemented on time
• High fives, all around
IN REALITY…
• UAT is one of the riskiest and most explosive levels of testing.
• UAT is greatly needed, but happens at the worst time to find major defects – at the end of the project.
• Users may be unfriendly to the new system
• They like the current one just fine, thank you.
• Much of your UAT planning may be ignored.
• People tend to underestimate how many cycles of regression testing are needed.
THERE ARE MANY QUESTIONS ABOUT UAT
• Who plans it?
• Who performs it?
• Should it only be manual in nature?
• What is the basis for test design and evaluation?
• When should it be performed?
• Where should it be performed?
• Who leads it?
• How much weight should be given to it?
PROJECT #1
• Medical laboratory testing business that closely resembles a manufacturing environment.
• New technology for the company and for the industry.
• The previous project had failed
• The company almost went out of business because of it!
• Very high growth in terms of both business and employees.
• Company at risk of failure.
• This project was truly mission-critical.
PROJECT #1 – CONT’D.
• Few functional requirements.
• 8 pages for over 400 modules
• Test team had little knowledge of subject matter, test design, or testing at all.
• Very little unit or integration testing being done by developers.
• Some system testing was done.
• UAT was the focus.
DEFECT DISCOVERY AND BACKLOG
[Chart: defect discovery and backlog across system test (4 weeks), UAT (3 weeks), and three successive deployments]
PROJECT #1 RESULTS
• Very high defect levels in testing.
• Many were resolved before implementation.
• Severe performance problems.
• Old system could process 8,000 units/day
• New system could process 400 units/day
• Many problems due to the new technology being used
• “bleeding edge” issues
• “Deadline or else” attitude
• The business was under extreme pressure to deploy due to increased processing volumes.
• System was de-installed and re-installed 3 times before performance was acceptable to deploy.
WHAT WE LEARNED
• Requirements are important.
• Even if you have to create some form of them after the software has been written.
• Early testing is important.
• It would have caught the performance bottlenecks early (see the sketch below).
• Teamwork is critical.
• Things got so bad we had to have a “do over.”
• The deadline is a bad criterion for implementation.
• Always have a “Plan B”.
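A side note on the early-testing lesson above: the throughput gap in this project (8,000 units/day on the old system vs. 400 on the new) is exactly the kind of defect a small, automated check can surface long before UAT. The following is only a sketch in Python; process_unit() is a hypothetical stand-in for the new system's per-unit processing call, not anything from the actual project.

    import time

    UNITS_PER_DAY_BASELINE = 8000        # the old system's known throughput
    SAMPLE_SIZE = 200                    # units to push through the smoke test
    WORK_SECONDS_PER_DAY = 8 * 60 * 60   # one processing shift

    def process_unit(unit_id):
        """Hypothetical stand-in for the new system's per-unit processing."""
        ...

    def throughput_smoke_test():
        # Time a small batch and project it to a full day's volume.
        start = time.perf_counter()
        for unit_id in range(SAMPLE_SIZE):
            process_unit(unit_id)
        elapsed = time.perf_counter() - start
        projected = SAMPLE_SIZE / elapsed * WORK_SECONDS_PER_DAY
        assert projected >= UNITS_PER_DAY_BASELINE, (
            f"Projected {projected:.0f} units/day; "
            f"baseline is {UNITS_PER_DAY_BASELINE}")

Run against every build from the first one onward, a check like this would have flagged a 400 units/day bottleneck months before deployment instead of after it.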
UAT LESSONS
• Build good relationships with subject matter experts.
• They often determine acceptance
• Listen to the end-users.
• Understand what’s important
• Don’t rely on UAT for defect detection.
• Interesting factoid: a similar project with the exact same technology failed due to performance errors two years later at a city water utility. The vendor lost a $1.8 million lawsuit.
PROJECT #2
• Same company as before, but two years later
• Integration of a vendor-developed and customized accounting system
• Lots of defects in the vendor system
• Implemented two months late with practically zero defects.
WHAT MADE THE DIFFERENCE?
• Same people – testers, IT manager, developer
• Different project manager who was a big supporter of testing
• More experience with the technology
• Better understanding of testing and test design
• A repeatable process
• Less pressure to implement
• Having a contingency plan
• Having the courage to delay deployment in favor of quality.
• The financials had to be right.
PROJECT #3
• New government-sponsored entity.
• Everything was new – building, people, systems
• System was a vendor-developed workers’ compensation system.
• Some customization
• Little documentation
• Designed all tests based on business scenarios.
• We had no idea of the UI design (see the sketch below).
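Designing tests with no knowledge of the UI forces each test to be expressed purely in business terms and bound to concrete screens later. Here is a minimal sketch of that idea in Python; the scenario names, step verbs, and adapter callables are invented for illustration, not taken from the project.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Scenario:
        name: str
        steps: list[str]   # business-level actions; no keystrokes or screen names
        expected: str      # business outcome to verify at the end

    # Hypothetical scenarios in the spirit of a workers' compensation system.
    SCENARIOS = [
        Scenario("File a new injury claim",
                 ["register employer", "record injury report", "open claim"],
                 "claim exists in open status"),
        Scenario("Pay a medical invoice",
                 ["open claim", "attach medical invoice", "approve payment"],
                 "payment posted to claim ledger"),
    ]

    def run(scenario: Scenario,
            perform: Callable[[str], None],
            check: Callable[[str], bool]) -> None:
        """Drive a scenario through an interface adapter written once the UI is known."""
        for step in scenario.steps:
            perform(step)
        assert check(scenario.expected), f"{scenario.name}: {scenario.expected}"

Because the scenarios are plain data rather than keystroke scripts, the same set that drove testing can be reused as training aids, as the results below describe.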
KEY FACTORS
• No end-users in place at first to help with any UAT planning.
• In fact, we had to train the end-users in the system and the business.
• Lots of test planning was involved
• 50% or more of the effort went into planning and optimizing tests.
• This paid off big in test execution and training.
RESULTS
• Tested 700 modules with 250 business scenario tests.
• We had designed over 300 tests
• The management and test team felt confident after 250 tests that we had covered enough of the system.
• Found many defects in a system that had been in use in other companies for years.
• Reused a lot of the testware as training aids.
• Successful launch of the organization and system.
HARD LESSONS LEARNED
• “You don’t know what you don’t know” AND “You sometimes don’t know what you think you know.”
• A newly hired SME with over 30 years of workers’ comp experience provided information that was different from (and more correct than) what we had been told during test design.
• We had to assign two people for two weeks to create new tests.
• These were complex financial functions – we couldn’t make it up on the fly.
HARD LESSONS LEARNED (2)
• Real users are needed for UAT.
• Sometimes the heavy lifting of test design may be done by other testers, but users need heavy involvement.
PROJECT #4
• State government legal application
• Vendor-developed and customized
• Highly complex system purchased to replace two co-existing systems.
• Half of the counties in the state used one system; the other half used another.
• Usability factors were low on the new system
• Data conversion correctness was critical (see the sketch below).
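Given how critical data conversion correctness was, a reconciliation pass comparing legacy extracts against the converted data is a natural complement to UAT. The sketch below assumes, hypothetically, that both systems can dump comparable CSV extracts keyed by a shared case number and using the same column names; a real conversion would usually also need field-level mapping rules.

    import csv

    def load_rows(path: str, key_field: str) -> dict[str, dict]:
        """Index a CSV extract by its business key."""
        with open(path, newline="") as f:
            return {row[key_field]: row for row in csv.DictReader(f)}

    def reconcile(old_path: str, new_path: str, key_field: str = "case_number"):
        old_rows = load_rows(old_path, key_field)
        new_rows = load_rows(new_path, key_field)
        missing = old_rows.keys() - new_rows.keys()    # cases lost in conversion
        added = new_rows.keys() - old_rows.keys()      # cases that appeared from nowhere
        changed = [key for key in old_rows.keys() & new_rows.keys()
                   if old_rows[key] != new_rows[key]]  # field values that drifted
        return missing, added, changed

Counts of missing, added, and changed cases per county would give the county-at-a-time rollout described below an objective conversion gate.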
THE GOOD SIDE
• Well-defined project processes
• Highly engaged management and stakeholders
• Good project planning and tracking
• Incremental implementation strategy
• The entire system was implemented, but only one county at a time.
• Heavy system testing
• Good team attitude
THE CHALLENGES
• The system’s learning curve was very high.
• The key stakeholders set a high bar for acceptance.
• The actual users were few in number and were only able to perform a few of the planned tests.
• Very high defect levels.
LEADING UP TO VENDOR SELECTION
• Over 2 years of meeting with users and stakeholders to determine business needs.
• Included:
• JAD sessions
• Creation of “as-is” and “to-be” use cases
• A set of acceptance criteria (approximately 350 items)
THE STRATEGY
• Create test scenarios that described the trail to follow in testing a task, but not to the level of keystrokes.
• Based on use cases.
• The problem turned out to be that even the BAs and trainers had difficulty in performing the scenarios.
• System complexity was high.
• Training had not been conducted.
• Usability was low.
DEFECT DISCOVERY AND BACKLOG
[Chart: defect discovery and backlog by phase: roughly 750 defects in system test (10 weeks) and 250 in UAT (4 weeks), leading up to the first deployment]
WHAT WAS VALIDATED
• The precise “point and click” scripts provided by the vendor were long and difficult to perform.
• Each one took days.
• Plus, there were errors in the scripts and differences between what the script indicated and what the system did.
THE BIG SURPRISES
• We planned the system test to be a practice run for UAT.
• It turned out to be the most productive phase of testing in terms of finding defects.
• We planned for a 10-week UAT effort with 10 users
• It turned out to be a 2-week effort with 4 users.
• First sense of trouble: initial users were exhausted after 3 days of a pre-test effort.
THE BIG SURPRISES (2)
• We used none of the planned tests (around 350 scenarios) in UAT.
• Instead, it was a guided “happy path” walkthrough, noting problems along the way.
• Defects were found, but the majority of defects had been found in system test.
LESSONS LEARNED
• The early system test was invaluable in finding defects.
• Users must learn a new system before they are able to test it.
• The test documentation alone is not enough to provide context for how the system works.
• It took a lot of flexibility on the part of everyone (client, vendor, testers, users, stakeholders) to make it to the first implementation.
• Sometimes actual users just aren’t able to perform a rigorous test.
WHAT CAN WE LEARN FROM ALL THESE PROJECTS?
• UAT is a much-needed test, but happens at the worst possible time – just before implementation.
• You can take some of the late defect impact away with system testing and reviews.
• You can lessen the risk of deployment by implementing to a smaller, lower-risk user base first.
• Actual end-users are good for performing UAT, but much depends on what you are testing and the capabilities of the users.
• The reality is the users are going to have to use the system in real life anyway.
• However, not all users are good testers!
WHAT CAN WE LEARN? (2)
• Be careful how much time and effort you invest in planning for UAT before the capabilities are truly known.
• That is, senior management may want actual users to test for 8 weeks, but if the people aren’t available or can’t handle the load, then it probably isn’t going to happen.
• Don’t place all the weight of testing on UAT.
• In project #4 our system testing found a majority of the defects.
WHAT CAN WE LEARN? (3)
• UAT test planning isn’t bad; just expect changes.
• People, software, business, timelines – they all change.
• Try to optimize and prioritize.
• Example: If you have 500 points of acceptance criteria, can they be validated with 200 tests? (See the sketch below.)
• Which of the acceptance criteria are critical, which are needed, and which are “nice to have”?
BIO - RANDALL W. RICE
• Over 35 years of experience in building and testing information systems in a variety of industries and technical environments
• ASTQB Certified Tester – Foundation level, Advanced level (Full)
• Director, American Software Testing Qualifications Board (ASTQB)
• Chairperson, 1995–2000, QAI’s annual software testing conference
• Co-author with William E. Perry of Surviving the Top Ten Challenges of Software Testing and Testing Dirty Systems
• Principal Consultant and Trainer, Rice Consulting Services, Inc.
CONTACT INFORMATION
Randall W. Rice, CTAL
Rice Consulting Services, Inc.
P.O. Box 892003
Oklahoma City, OK 73170
Ph: 405-691-8075
Fax: 405-691-1441
Web site: www.riceconsulting.com
e-mail: rrice@riceconsulting.com