13

After regression tests are done, bugs are filed and fixed, and a new release build is made with the bug fixes. Should you re-run the regression tests on the new build, or is it enough to re-test only the parts where bugs were fixed?

  • 3
Antonow297296, if you find any answer satisfactory, I would suggest marking it as Accepted. It really motivates the contributors. Commented May 13, 2019 at 15:46

6 Answers

11

"Yes"

If you have an automated regression suite, you should definitely re-run it. If your organization is really on the ball, this will happen automatically along with all the unit tests and other tests as part of your continuous integration process.

If you don't have that luxury and you don't see a major risk, then testing only around the areas that were impacted by the bug is acceptable. In that scenario you should definitely document what you did and did not test, and why.

  • 6
    If you don't have an automated test suite, then you should reassign some of the coding team to write and maintain it. You may think you know what areas are/aren't impacted by a code change, but the very reason that bugs exist is that code doesn't always do what it is supposed to do. Commented May 13, 2019 at 18:17
  • 4
    @MontyHarder - I agree that there should be an automated test suite, and that if there isn't, the coding team should write and maintain one. That does not mean any automated tests will be written - it's sadly very common for management to prioritize getting features/fixes to production over having test automation. Then you have my sad situation, supporting a million-plus lines of classic ASP spaghetti code that can't be untangled enough for unit tests.
    – Kate Paulk
    Commented May 13, 2019 at 18:21
  • Although the rest of the answer makes it clear, you might want to consider changing "Yes" at the top for clarity. The question is not a yes/no question, it is an "A or B" question.
    – JBentley
    Commented May 14, 2019 at 11:49
  • The "Yes" means that even though it's an A or B question, the real answer is A and B. Think of it in the context of "Cream or ice cream?" "Yes."
    – Kate Paulk
    Commented May 14, 2019 at 15:38
6

It depends. What are the risks?

One risk that comes to mind is code dependencies. A fixed piece of code could be used in many locations, not just the one where the original defect occurred. In that case you should analyze the code and decide whether you also need to re-test all dependent locations.
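A minimal sketch of this kind of dependency-impact analysis, assuming a hand-maintained map of which module imports which (all module names here are hypothetical):

```python
# Sketch: given a map of module -> modules it imports, find every module
# that transitively depends on a changed module, so its tests get re-run.
# The dependency map and module names are invented for illustration.

def impacted_modules(changed, depends_on):
    """Return the set of modules that transitively depend on `changed`."""
    # Invert the edges: from "who imports what" to "who is imported by whom".
    importers = {}
    for mod, deps in depends_on.items():
        for dep in deps:
            importers.setdefault(dep, set()).add(mod)

    impacted = set()
    stack = [changed]
    while stack:
        mod = stack.pop()
        for importer in importers.get(mod, ()):
            if importer not in impacted:
                impacted.add(importer)
                stack.append(importer)
    return impacted

deps = {
    "checkout": {"payments", "cart"},
    "cart": {"catalog"},
    "payments": {"tax"},
    "reports": {"payments"},
}

# A fix in `tax` ripples up through payments to checkout and reports.
print(sorted(impacted_modules("tax", deps)))  # ['checkout', 'payments', 'reports']
```

Everything the function returns is a candidate for re-testing; everything outside it is where the "test only around the fix" shortcut is being taken.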

When the application could endanger lives, a full regression run is advisable.

Personally I would push to have everything covered by automated tests, so that full regression runs are fast, cheap, and executed on every code change. You don't want to waste time weighing what does and does not need testing.

4

Speaking only about the testing aspects:

Obviously, you should only re-test the parts affected by the bug fixes.

The problem is identifying that area. One can do it:

  • Backwards: E.g., frontend components that do not feed from a service X will be less impacted by changes in X.
  • Forwards: E.g., if you change service X, frontend components that feed from X will be more often impacted.

This is Risk-Based Analysis: Given a situation and some goals, you will brainstorm the possible risks of failure and test accordingly. The scope of this testing will probably be part of your regression suite and some new test ideas.
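The backwards/forwards selection above can be sketched as matching tests, tagged with the components they cover, against the components a change touched. The tags and names below are made up for illustration:

```python
# Sketch: select regression tests whose tagged components intersect the
# components touched by a change. Test names and tags are hypothetical.

TEST_TAGS = {
    "test_login_form": {"frontend", "auth_service"},
    "test_invoice_pdf": {"billing_service"},
    "test_profile_page": {"frontend", "profile_service"},
}

def select_tests(changed_components, test_tags):
    """Return the tests that exercise at least one changed component."""
    return sorted(
        name for name, tags in test_tags.items()
        if tags & changed_components
    )

# Forwards: a change in auth_service selects the tests that feed from it.
print(select_tests({"auth_service"}, TEST_TAGS))  # ['test_login_form']
```

Backwards reasoning is the same lookup in reverse: a test whose tags do not intersect the change is the one you can deprioritize.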

Speaking from the project management perspective:

However, if such analysis is more expensive than running your whole regression suite and you are confident that the full regression suite is sufficient* for validating the changes, running the whole suite "blindly" is a good option as well.

* - I.e., new test ideas will probably not uncover new hidden important problems.

3

This is a pretty general question. It depends on a few factors. Here are some questions to consider:

  1. Do you have any test automation?
  2. Is the feature/area of bug fixes a critical area, possibly creating more risk?
  3. What is the scope of the bug, feature, area of test? What other features/areas does this touch? Is the code being fixed a shared function or class? What other areas of the application rely on that code?
  4. What are the bugs being fixed? Something trivial or complex? Is it an edge case? How much user traffic does the feature get?
  5. How much time do you have before the release?
  6. How familiar are you with the feature containing bug fixes?
  7. Do the bugs only occur in a specific browser, OS, device?

The more specifically you can answer these questions for yourself, the better you can optimize your resources and risk/reward. The more familiar you are with product usage and feature scope, the more precisely you can target your testing.

Ultimately, the process of testing/QA is to lower the risk of failure and increase the confidence that everything works as intended. At a minimum, always check the part where the bugs were fixed.

Let’s consider some general answers:

  1. If you have automation, run the automation.
  2. If it’s a more critical area, run more tests. Consider more end-to-end testing, verify everything works as intended.
  3. If you understand the scope of the bug/feature, the more specific you can target your testing. If the bugs are part of a self-contained feature, test that feature. If it’s shared with other features, be sure to test those as well. Utilize end-to-end, smoke tests, integration tests.
  4. The more complex the bug or feature, the more testing you should lean on. If it's a common, well-used part of the software, lean on more testing. If it's an edge case, test that edge case and run some smoke tests.
  5. If you don’t have a lot of time, verify the bug is fixed, run some smoke tests.
  6. Your familiarity with the feature can help you determine how much time to spend verifying bug fixes during regression testing.
  7. If the bug occurs only in a specific browser, OS, or device, always ensure you test that implementation. Depending on time and the nature of the bug, you can consider expanding to other browsers, OSes, and devices. Keep in mind that some bugs are browser-, OS-, or device-specific, so you only need to test those options.

Also, if you are comfortable in reading/writing code, consider looking at the diff in source code to understand the fix. This can help narrow down what to test and what areas of the software to test.

If you’re not comfortable with code, and you have a good relationship with the developer, have them walk you through the code change. Ask them for input on risk and to confirm the scope of the bug fixes and the feature being tested.

Keep in mind that regression testing is meant to ensure code changes don’t adversely affect existing features. By fixing bugs, you may uncover new bugs. There is no “one size fits all” solution to verifying regression bug fixes.
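One way to make the factor-weighing above concrete is a toy scoring heuristic. The factors, weights, and thresholds here are invented for illustration; a real team would tune them to its own product and risk appetite:

```python
# Sketch: turn a few of the risk questions above into a test-scope
# recommendation. All weights and cutoffs are hypothetical.

def recommend_scope(critical_area, shared_code, edge_case_only, hours_left):
    """Recommend a regression scope from a handful of risk factors."""
    score = 0
    score += 2 if critical_area else 0   # question 2: critical feature?
    score += 2 if shared_code else 0     # question 3: shared function/class?
    score -= 1 if edge_case_only else 0  # question 4: just an edge case?

    if hours_left < 4:                   # question 5: almost no time left
        return "verify fixes + smoke tests"
    if score >= 3:
        return "full regression suite"
    if score >= 1:
        return "targeted regression around impacted features"
    return "verify fixes + smoke tests"

print(recommend_scope(critical_area=True, shared_code=True,
                      edge_case_only=False, hours_left=24))
# full regression suite
```

The point is not the specific numbers but that writing the trade-off down forces the team to agree on which factors actually drive scope.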

0

Yes, you should; at least, we do.

As part of regression this is what we follow:

Manually:

  • Test all the impacted areas as part of the task (feature/bug) itself. We talk to respective devs to find out all of the 'impacted areas' (impacted by the changes made in this task) and include it in our testing.
  • So in a way we do a quick regression as part of the testing that task itself.
  • Create a test case sheet for all of the possible cases.
  • Test cases are categorized into: Smoke, Regression, Automation not required.
  • Smoke tests - Basic Add/Edit/Delete operation.
  • Regression tests - In-depth testing, focused on breaking the application with all possible scenarios.

Automation:

  • Once first round of manual testing is completed and the build is stable, we start automating the feature.
  • First we automate smoke tests, so that once they are ready we can run them on every build to make sure the build is stable and can be used for further testing. We maintain a single smoke test file which contains all the tests marked as 'smoke' for every feature.
  • Once smoke test automation is completed, we start automating the 'regression' tests from the test case sheet.
  • Once the regression tests are automated, we include them in the regression suite which we run daily (night mode).

Long story short: For us, regression is not only about checking for side effects, but also about running tests that try to break the application with all possible combinations/scenarios.

0

This is a common scenario faced by software testing companies, where bug fixes come in after regression testing. What to do depends entirely on the bug fixes and the impact of the committed code on other areas of the application.

If the committed code impacts other areas, then regression testing is required. Moreover, if you have automated regression suites, lean on them: they save time, and in parallel you can test the parts of the application that are not automated.

If the committed code does not carry large risk, then testing only around the areas impacted by the bug is a good option. In that scenario you should document what you did and did not test.

Apart from this, if you have ample time, re-testing the whole application is beneficial.
