62

Nothing can ruin your day like a poorly-written bug report.

I've seen several sets of guidelines for bug reporting in different organizations. In your opinion, what guidelines/steps are most essential for good bug reporting?

Feel free to share complete guideline recommendations from prior/current projects/organizations.


20 Answers

50

As a developer, this is the information I need to solve a problem:

  1. Steps to reproduce.
  2. Expected result.
  3. Actual result.

In that order. Anything less than that results in difficulty in either reproducing the issue or identifying differences in how the requirements are being interpreted. Anything more has the potential to confuse the developer.
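If your team files or pre-checks bugs programmatically, the same three fields can be enforced before a report ever reaches a developer. A minimal sketch in Python (the class and field names are hypothetical, not tied to any particular tracker):

```python
from dataclasses import dataclass


@dataclass
class BugReport:
    """The minimum a developer needs: steps, expected result, actual result."""
    steps_to_reproduce: list[str]  # written, ordered steps
    expected_result: str
    actual_result: str

    def validate(self) -> list[str]:
        """Return a list of problems with the report (empty means it is filable)."""
        problems = []
        if not self.steps_to_reproduce:
            problems.append("No steps to reproduce were given.")
        if not self.expected_result.strip():
            problems.append("Expected result is empty.")
        if not self.actual_result.strip():
            problems.append("Actual result is empty.")
        return problems


report = BugReport(
    steps_to_reproduce=[
        "Open the Orders screen",
        "Select an order with back-ordered items",
        "Press F1 to move to the next screen",
    ],
    expected_result="The order detail screen opens and lists the back-ordered items.",
    actual_result="The application shows a blank screen.",
)
print(report.validate())  # [] means the report carries the minimum needed
```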


Additional information

Or why I don't like your edit

Steps to reproduce should be a written, ordered list of steps which include as much detail as possible. Be careful when including the following in your report:

  • Screenshot:

    Only include a screenshot if it adds to the explanation of the bug. Do not use a screenshot as a way of proving that a bug exists. If a developer is not able to reproduce the issue then there is something missing from the steps. Work with them to figure out what it is (we don't bite, most of us anyway).

  • Video:

    A video is often more helpful than a screenshot because it will capture details of steps that were not documented, but in general the same caution applies.

  • Notes:

    If you use a specific environment/tool to reproduce the bug, setting up this environment/tool should be included in the steps to reproduce.

  • 7
    Don't you find a need for a description of the problem? Commented Mar 26, 2012 at 20:58
  • 7
    This is actually my favorite answer so far. But it's missing two steps: 0: Search first (optional, but strongly encouraged) 0.5: provide complete system information (what device are you using? what version of relevant software [like OS and app]). Commented Mar 27, 2012 at 2:49
  • 3
    @ckenst I submit that knowing what the tester expected it to do is a more valuable addition to a test report than what it's actually supposed to do: If you don't know what it's supposed to do, knowing what the tester thought it was supposed to do can be a big help. And if you do know what it's supposed to do, knowing what the tester thought it was supposed to do can help you spot gaps in tester understanding and education. Either way, there's a gain.
    – user867
    Commented May 11, 2012 at 2:25
  • 1
    @ckenst More importantly, if there's a discrepancy between what's supposed to happen and what the tester thinks should happen, the tester is unlikely to be aware of it, and therefore can't be expected to include that information in the report.
    – user867
    Commented May 11, 2012 at 2:27
  • 2
    @dzieciou When I'm debugging a problem I expect there to be some back and forth. If I'm unable to reproduce a problem then I will care about environment, versions, etc. For the initial report, anything more is unnecessary unless I cannot reproduce the problem.
    – pgraham
    Commented Jun 25, 2012 at 18:01
46

Issue Reports (Bug Reports) are one of the main communication methods that QAers use. You are creating a statement to your stakeholders - "I have found what I think is a problem, and here's my clear explanation of what it is and how you can see it too. Please look into this".

Understand the Audience for the Report

It's important to know who is going to read your Issue Reports, and what they are expected to do with them.

For some shops, the only readers will be yourself and one or more developers. If that's true, you can use all sorts of jargon and abbreviations that you each understand.

But in many shops, there will be lots of stakeholders who need to read what you write - other QAers, Developers, Support, Product Management, Documentation, Management, etc. It may become more important to use less jargon, and add more details.

There are a few special cases that must be kept in mind as well.

For example, if offshore testers or developers must read your Issue Reports, you'll need to pay special attention not to use confusing jargon or colloquialisms in your writing.

If customers will eventually read your Issue Reports, you'll need to be very, very careful in choosing your words. (By the way, this is not something I'd recommend without first sending the Issue Reports to a skilled writer for "sanitizing".) You may even be better off having two different descriptions of the bug - one for internal consumption, and one for customers.

Choose a Good Summary/Title

Since the one-line Summary or Title is often what prompts someone to decide to read your Issue Report further, and is often the only piece of information about the bug showing up on summary reports, it's important to put some thought into this field.

The title should be short (because it may become truncated) and to-the-point. It's tempting to cut corners and write just a few words here, like "Program x crashed" - but clearly that's not useful.

Also, you don't need to include the text here that is already included in other fields of the bug tracking system. For most systems, items such as severity/priority, product, component, etc are tracked in their own fields. No use wasting valuable space in the Summary/Title on these.

Describe the problem concisely and effectively

In a paragraph or two, describe the problem. Here you can use more words than the Summary/Title will allow, but still avoid wordiness.

Include Steps to Reproduce the Problem you are Seeing

Not every bug is fully reproducible. But it's important to try to find a relatively minimal set of steps to reproduce the problem and to note them in the Issue Report. This gives the developer a fighting chance of seeing what you are seeing and actually getting it fixed quickly.

Avoid steps that don't matter - those which have nothing to do with reproducing the bug. Including too many steps can waste time and lead to confusion.

Include all steps which actually seem to matter. Write them in a clear style, in a manner which avoids guesswork.

It takes a bit longer to narrow down the steps, but it's usually time well spent.

If the bug is not fully reproducible, indicate that clearly in the issue report.

Include the Results you Expected

Often, QAers know what is expected out of a sequence of steps better than Developers.

Sometimes, their expectations are right, sometimes they are wrong. Either way, include what you expected to see.

Include the Results you Actually Observed

Since this is an Issue Report, we assume that something unexpected occurred. Note what actually occurred.

If you observed an error message, include that.

If you observed something significant in a log file, include that portion of the log.
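If the relevant lines are buried in a large log, a small script can pull out just the portion around the error instead of attaching the whole file. A hedged sketch (the log path, error marker and context size are illustrative assumptions):

```python
from pathlib import Path


def extract_log_context(log_path: str, marker: str, context: int = 5) -> list[str]:
    """Return every line containing `marker`, plus `context` lines before and after each hit."""
    lines = Path(log_path).read_text(errors="replace").splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if marker in line:
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return [lines[i] for i in sorted(keep)]


# Hypothetical usage: paste the result into the "actual result" section of the report.
# snippet = extract_log_context("app.log", "ERROR", context=10)
# print("\n".join(snippet))
```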

Include Enough Details for Searching

Try to think like a Support person, or a Manager, or a new QAer who wasn't familiar with this Issue. What would you search for if you saw these same symptoms happen? If you are lucky enough to have a defect tracking system that includes full-text searching, then make sure to include these important keywords somewhere in the text of your Issue Reports.

If your defect tracking system has a "keywords" field, put them in there.

Explain the Effects on the Customer

Somewhere along the line, your Issue Report will be analyzed for Priority and/or Severity.

If you explain the effects of this problem on the customer, you'll have a better chance that the Issue is properly analyzed. And if the problem makes further testing impossible, make sure to indicate that fact (in this context, you are a customer, too).

Attach Anything Else that Could Help

Attach anything that could help clarify and/or debug the problem.

As they say "a picture is worth 1000 words". So, often attaching a picture of the problem can be very helpful. Log files, test files, etc, can also be attached if they will help reproduce or debug the problem.

Think about whether something is better off being attached (in order to avoid an overly-wordy Issue Report), or included within the body of the Report itself (in order to be useful during a search).

Avoid speculation

Report the facts - what you saw, what you expected to see, and any other symptoms. But in general, avoid speculation as to the root cause of the problem, unless you have sufficient facts to back up your speculation.

Speculation can send the reviewer on a wild goose chase and waste their time. It can also make this bug report appear as a search result for cases where it's not relevant.

Be careful of the tone of your report

Don't use a negative tone in your writing. Remember, your job is to describe the bug so that it can be fixed effectively and efficiently. There's no benefit in criticizing the developer or the designer here - you are their partners, you are both on the same side. Be objective and respectful.

Avoid duplication - search first

You don't want to waste people's time reading issue reports that have already been covered by someone else. And in some shops, you may be penalized for writing such bug reports. So, search first, using the kinds of keywords that you would write into your report if needed.

If the problem has already been written into an issue report, it's sometimes useful to add additional facts if you have them.
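Where the tracker's own search is weak, a rough similarity check over existing summaries can flag likely duplicates before you file. A standard-library sketch (the summaries and the 0.6 threshold are made-up examples, not a recommendation):

```python
from difflib import SequenceMatcher


def likely_duplicates(new_summary: str, existing_summaries: list[str],
                      threshold: float = 0.6) -> list[tuple[float, str]]:
    """Return (similarity, summary) pairs that look close enough to be duplicates."""
    hits = []
    for summary in existing_summaries:
        score = SequenceMatcher(None, new_summary.lower(), summary.lower()).ratio()
        if score >= threshold:
            hits.append((round(score, 2), summary))
    return sorted(hits, reverse=True)


existing = [
    "Crash when exporting a report to PDF with a non-ASCII filename",
    "Login page shows a blank screen after session timeout",
]
# Similar wording should surface the second summary as a likely duplicate.
print(likely_duplicates("Login page blank after session timeout", existing))
```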

A few more bits that may help:

  1. Writing issue reports that work
  2. Picture is not worth a thousand words
  3. Non-reproducible Bugs
  4. Issue Tracking Template
  • 6
    I really like "be careful of the tone" and "understand your audience" (which of course are good practices in ALL communication). Coming across as "you owe me these fixes because you're lowly development" is an easy way to get your fix sent to the bottom of the pile.
    – corsiKa
    Commented Oct 10, 2011 at 16:43
  • 2
    It's really hard to add to this list; anything I would have done, Joe has listed. Nice work! :-)
    – MichaelF
    Commented Oct 11, 2011 at 20:45
  • 2
    This can be the answer rather than first answer! Commented May 2, 2014 at 9:45
  • 1
    A good summary / description is so key! When sorting through numerous defects, a poor summary can be the difference between my recognizing someone already logged it or my logging a duplicate.
    – CKlein
    Commented Jun 2, 2015 at 20:35
  • 1
    Coming back to read this answer 4+(!) years later, I have to say that this is a really good answer. My answer is only from the point of view of a developer receiving a defect report. This answer also gives a lot of good information for more complicated situations, such as when the reported issue challenges accepted requirements. Also agree with @CKlein that a good (i.e. searchable) summary is really important to avoiding duplicates.
    – pgraham
    Commented Sep 9, 2016 at 4:21
17

Only one thing I would like to add apart from Joe's contribution.

Don't point out two or more issues in the same bug report. If a different issue appears when you follow the same steps, raise it separately; otherwise there's a chance it will get missed.

  • 1
    +1 as n-in-1 is The Ultimate Mess Generator. Testing for almost 5 years in teams from 1 to 50 testers, I have learned that very little worse can happen to a bug than being mixed together with another one. Commented May 9, 2012 at 8:37
16

To expand on the link Phil K mentioned: Cem Kaner published a paper entitled "Bug Advocacy", which you can read in a 100-page PDF at http://www.kaner.com/pdfs/bugadvoc.pdf. It also forms the basis for the second BBST course.

Kaner outlines 4 major points of Bug Advocacy (quoted directly from page 10):

  1. The point of testing is to find bugs
  2. Bug reports are your primary work product. This is what people outside of the testing group will most notice and most remember of your work.
  3. The best tester isn't the one who finds the most bugs or embarrasses the most programmers. The best tester is the one who gets the most bugs fixed.
  4. Programmers operate under time constraints and competing priorities. For example, outside of the 8-hour workday, some programmers prefer sleeping and watching Star Wars to fixing bugs.

A bug report is a tool that you use to sell the programmer on the idea of spending her time and energy fixing a bug.

I never really thought of it that way - bug reports as our primary work product. Something we can point to and say "this is what I did".

I haven't finished reading it and I'm sure a lot of it applies to the things previously mentioned.

I don't think Kaner mentions this, but I think it's important to remember there is a difference between bugs and issues. Bugs are anything that threatens the value of the product, while issues are anything that threatens the value of testing (or the value of the project, and in particular the value of testing). Rapid Software Testing teaches us that.

11

Here are a couple things that I look for in a bug report.

  1. Exact steps to reproduce. You might be able to get away with some slang; for example, in our app you can almost always press F1 to move to the next screen, so you might see someone say "F1 through until <>". But you can't just say "Go to this function, this order number". Unless it has a problem parsing order numbers that end in 3, giving me order 123 doesn't help. Telling me "an order with multiple back-ordered items", now that's something I can deal with.
  2. Tell me exactly what it was supposed to do. It's great if I can reproduce it. But if you don't know what it was supposed to do, how do you know it's not doing it? This is especially difficult when the analysts have no formal definition of what it's supposed to do. All too often we hear "Well, this report says XYZ, while that report says ABC. Please fix." Hmm, well do I make the first be ABC? Or the second be XYZ? Perhaps they're both wrong? Perhaps they're not even supposed to be reporting the same data even! So if the report can't tell me what it's supposed to do, you can't expect the development to fix anything.
  3. Don't disguise a feature request as a bug. You can't submit a bug report that says "This screen doesn't have a delete button". That's not a bug. "Well, it's not a correct screen. It doesn't have a delete button!" Okay, it's not a perfect screen. Why wasn't there a delete button in the original spec? Now if there used to be a delete button and it disappeared, that's a bug. But development only knows what the project was supposed to do back when it was supposed to do it. You can't wish something had been done then and call it a bug.
  4. Others that saw it (/not a duplicate report). Now, this isn't the responsibility of the originator of the report, because he has a responsibility to report bugs as soon as he sees them. But if a report exists, don't create a new report; rather, comment on the old one. This does two things: first, it reduces the number of reports support and/or development has to sift through. Second, it shows that this is affecting multiple people, and potentially multiple systems/modules. This is an awesome clue for development to help nail down where the bug is. "Hmm, well, there are only two mutually dependent modules between Alice's behaviour and Bob's follow-up comment. Let's look there." It also helps to show the person reporting the bug is not a flake. Which leads into...
  5. It's actually a bug. I've seen lots of bugs that were simply people not knowing how to use the system. If your microwave cooks your food too hot, it's not a bug. If your microwave cooks your meat instead of defrosting it despite the fact you told it the type of meat and the correct size, that's a bug. You just thinking it needs 4 minutes instead of 2? Not a bug. You typing 4 on the keypad instead of 2? Not a bug. Yes, your microwave allows you to cook while it's empty. This is annoying when you meant to push timer instead of cook, but let's face it: you told it to cook! (I do this all the time, actually. Yes, it's annoying. Yes, it could use a better interface. But that's a feature request (see #3), not a bug!)
  6. It's important. Now, this one is a bit iffy, because I feel non-important bugs should be reported. (See my epic example of a non-important bug report.) But at the same time, you have to be willing to admit that it's not important. Some places will flat out reject non-important bugs. For example, some layout is 1 pixel off. Sorry, this is an internal application - we will never fix that. There will always be something better to spend our time on than that. But we might report it anyway so that at least it can be noted that we have problems, and if we ever do a graphic overhaul we might look into it. But bugs have to be categorized. At my firm, which oversees almost a hundred manufacturing plants, many of whom have friends in VPs and guys on "the board", they will go ape-$#!^ crazy over the fact that they have to press down arrow once more in this screen than that screen. I get it, that's annoying. But let's make sure our discount logic and pricing bugs are fixed first, eh? And when we finally iron out all of those, we can make your down arrow work a little better.

Some of these might be disguised rants. Well, that's because they are. That's what I, as a developer with no QA team, get for reports. If people were to boil these down to a simple checklist of 6 items (reproduction steps, desired correct outcome, not actually a feature request, not a duplicate report, actually incorrect behaviour, and properly categorized) I wouldn't mind the reports so much.

Also as a developer, in my organization, I get "specs" from the analysts. They're supposed to go through and define what the correct behaviour is so the reporter doesn't have to. Often times they just send in the report and leave everything to the developer. So if you're putting in guidelines, make sure that the responsibilities are properly defined. Don't make development do everything, don't make the analysts do everything, don't make the QA team do everything, don't make the reporters do everything. Make it clear who is supposed to do what, and hold them accountable. There are dozens of configurations of responsibility you can use, and most of them work as long as you're consistent about who does what.

P.S. Hey analyst guys at my firm. I hope you're reading this! It's not a declaration of war, it's a white flag! :-)

  • if the application doesn't meet requirements or standards in some way, that is a bug/defect/issue.
    – Malachi
    Commented Mar 15, 2019 at 16:15
    I think 5. and 6. add very much to pgraham’s answer. The actual impact - why it’s important - is often overlooked. Software usually changes constantly, and while we add to one area, another one can lose quality without much impact. Developers and QAs wasting time looking at non-critical bug reports and regression tests might just miss that rather important customer scenario when they actually use the product. A feature can have 13 bugs and none of it matters because a customer wouldn’t hit them. Whereas a collection of 13 trivial bugs can add up to very poor CX in a specific customer scenario.
    – user1130
    Commented Sep 7, 2021 at 18:20
8

In my company, a good bug report:

  1. Describes the symptoms, including screenshots or stack traces if necessary
  2. Specifies exactly how to reproduce the problem
  3. Specifies why the author believes the bug is important, if it is not already clear

Some developers may prefer that the tester attempt to narrow down the problem as much as possible. My developers leave it up to me to decide how much diagnosis to provide.

In my little company, we usually understand each other's abbreviations and assumptions. In larger companies, it may be necessary to use more formal, explicit language.

Finally, I do not recommend blindly following any individual's bug reporting practices. Their circumstances are different from yours. Listen to their advice, then consider how those practices mesh with your own circumstances.

6

A couple of guidelines in addition to the above-listed items:

  1. Test Steps
  2. Snapshots for each step if possible (in case you are working in remote teams, to avoid to-and-fro email communication)
  3. Expected result vs Actual Result
  4. Environment Details - OS, Hardware, Software, Build version (see the sketch after this list)
  5. Logfile entries/values
  6. Nice to have - Preliminary investigation/analysis with supporting queries/assumptions to provide a couple of leads for a developer to check further
  7. Provide access to the test environment - URLs, machines for the developer to check if needed
  8. Reference to BRD, FS, Design Document where implementation conflicts with design/requirements
  9. Nice to have - a Triage Meeting / Issue Review meeting to run through the bugs once with the development team to provide a quick overview of issues before they start looking into it. F2F conversations are sometimes better than email/chat conversations.
  10. Be descriptive: do not use abbreviations and do not rely on implicit assumptions. Call out your understanding of the functionality and how it conflicts with the implementation.
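For item 4, much of the environment detail can be collected automatically rather than typed by hand. A minimal sketch using the Python standard library (the build number is a placeholder, since that part is product-specific):

```python
import platform
import sys


def environment_details(app_build: str = "unknown") -> dict[str, str]:
    """Collect basic environment details to paste into a bug report."""
    return {
        "os": f"{platform.system()} {platform.release()} ({platform.version()})",
        "machine": platform.machine(),
        "python": sys.version.split()[0],  # swap for your application's runtime if relevant
        "app_build": app_build,            # placeholder: fill in from your build system
    }


for key, value in environment_details(app_build="2024.1.103").items():
    print(f"{key}: {value}")
```
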
5

1. Short and meaningful title (which gives the exact problem statement)

2. Repro steps

3. What is the actual result

4. What is the expected result

5. If applicable, a screenshot/video showing the repro

6. What is the severity of the problem

7. What was the testing environment

8. What was the build number

9. What is the source of the bug (Test/PM/Dev/Design etc.), i.e. who is logging this bug

10. How it was found (functional test/performance test/security test/design review etc.)

11. Assigned to (Triage/Dev/PM)

12. Area/Iteration in source control (where to keep this bug in source control)

4
  1. Good and precise title
  2. Brief description of the problem
  3. Actual and expected results
  4. Exact steps to reproduce (give as much information as possible, e.g. browser version etc.)
  5. Screenshots wherever possible.
  6. Don't put multiple bugs in one bug report.
3

Bug reports should be...

...Clear

Bug reports should have:

  • Precise, descriptive summaries.
  • Informative, concise descriptions.
  • A neutral tone, avoiding complaints or conjecture.

...Reproducible

Bug reports should contain:

  • The simplest steps to reproduce the issue, or...
  • A failing test fixture for the bug.
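A failing test fixture can simply be a small automated test that encodes the expected result and currently fails because of the bug. A hedged sketch in pytest style; `apply_discount` here is a made-up, deliberately buggy stand-in for the code under test so the example is self-contained and actually fails:

```python
# test_discount_bug.py -- attach this to the bug report; it should fail until the bug is fixed.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the real function under test; it truncates instead of rounding half-up."""
    return int((price * (1 - percent / 100)) * 100) / 100  # buggy: 8.5085 -> 8.50


def test_discount_rounds_half_up():
    # Expected result: a 15% discount on 10.01 gives 8.5085, which should round half-up to 8.51.
    # Actual result today: 8.50 is returned -- this is the bug being reported.
    assert apply_discount(price=10.01, percent=15) == pytest.approx(8.51, abs=0.001)
```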

...Specific

Only publish one bug per report accompanied by:

  • A detailed description of the issue focusing on the facts.
  • Expected and actual results.
  • Versions of software, platform and operating system used.
  • Crash data, including stack traces, log files, screen-shots and other relevant information.
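When the crash surfaces as an exception, the stack trace can be captured verbatim instead of retyped. A minimal Python sketch (the failing dictionary lookup stands in for whatever reproduction step actually crashes):

```python
import traceback


def capture_crash_data() -> str:
    """Run the failing scenario and return the full stack trace as report-ready text."""
    try:
        # Stand-in for the real reproduction step that crashes:
        return {}["missing-key"]
    except Exception:
        return traceback.format_exc()


print(capture_crash_data())  # paste this output into the crash-data section of the report
```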

...Unique

Please search for duplicates before reporting a bug, and ensure summaries include relevant keywords to make it easier for other users to find duplicates.

2

Although I am usually against hard coded templates, bug reports are an exception. Depending on the system being used for reporting, I show an empty template when a new bug is created, or add mandatory fields.
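Where the tracking system cannot enforce mandatory fields itself, the same idea can be approximated with a small pre-submission check. A sketch with illustrative field names (adjust to whatever your template actually requires):

```python
REQUIRED_FIELDS = [
    "summary",
    "steps_to_reproduce",
    "expected_result",
    "actual_result",
    "environment",
]


def missing_fields(report: dict[str, str]) -> list[str]:
    """Return the required fields that are absent or left blank."""
    return [name for name in REQUIRED_FIELDS if not report.get(name, "").strip()]


draft = {
    "summary": "Order screen crashes when an order has only back-ordered items",
    "steps_to_reproduce": "1. Open Orders  2. Select an order with only back-ordered items  3. Press F1",
    "expected_result": "The order detail screen opens.",
    "actual_result": "",
    "environment": "Windows 11, build 2024.1.103",
}
print(missing_fields(draft))  # ['actual_result'] -> prompt the reporter before submitting
```

As the comment below points out, a missing field works better as a prompt to the reporter than as a reason for developers to reject the report outright.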

  • I have found severe problems in the past with this. The prompts are very good and the people finding the problems tend to fill them in logically... but developers can get into a mindset of rejecting a bug report because a field wasn't filled in (even if the contents are obviously irrelevant, or covered explicitly by other details)
    – itj
    Commented Jul 9, 2012 at 18:56
2

Checklist for publishing a good bug report:

  1. Look for duplicates
  2. Check for accuracy in all elements of the bug report
  3. Roleplay as a different recipient (for example, a Test Manager) and ask if the bug report is really useful to them
  4. Make a list of past bug-reporting mistakes and check the report against it
  5. Check for attachments, their sizes and relevancy to the context of this bug report
  6. Save a copy of your bug report in MS Word or another editor since, as everyone knows, bug tracking systems crash too
  7. Ensure that you spell-check all of the text in the bug report, especially the bug title
  8. Ensure that the bug report uses good grammar and that your word choice is clear to everyone who will read it

More here.

1

Read Bug Advocacy from Cem Kaner

http://www.kaner.com/pdfs/BugAdvocacy.pdf

  • 3
    +1 for an excellent link, but don't suppose you could sweeten it with a few lines about what people might learn from following it? Hate to think of folks overlooking this one...
    – testerab
    Commented Oct 10, 2011 at 22:01
  • I expanded on Phil K's post since this was a really good link. Commented Apr 14, 2012 at 16:53
1
  • Specs: Software Version, Database Version etc.
  • Description: The behavior expected vs the behavior met.
  • Reproduction: The exact input needed to run into the bug.
  • Routing: The "address" where this problem will be solved or passed further.
1

The BBST Bug Advocacy course uses the RIMGEA acronym, which helps you write and sell an effective bug report.

I explained the RIMGEA acronym in this blog series.

R stands for Replicate - you need to be able to list a set of steps that will reproduce the bug every time.

I stands for Isolate - eliminate steps that don't impact the bug, so that all you are left with is the shortest possible set of steps to reproduce.

M stands for Maximize - find a way to make the failure happen in the most spectacular way possible.

G stands for Generalize - find a way to take the failure from a corner case to a general problem.

E stands for Externalize - look for consequences to users. Ask yourself who will care about this failure and why?

A stands for Advocate - you need to write a good report that summarizes the results of the previous steps.

The more experienced you are, the less you will need to refer to these steps. They will show in your bug reports.

  • 2
    It would be better if you explain your answer here and use the link as a reference. Otherwise your answer might get flagged. Commented Feb 18, 2016 at 1:41
  • 1
    This did indeed get flagged for spam. I don't believe it is spam, necessarily, as I truly believe Karlo has great intentions and wants to share knowledge. However, the content will need to be summarized here, linking back to the main blog for further reading. Think we could do that?
    – corsiKa
    Commented Apr 5, 2016 at 22:11
1

When you said guidelines, I immediately thought of what we have as a template of necessary information:

  • Environment
  • URL
  • Parameters
  • Issue/Defect/Bug (whatever you call it)
  • Description if more details are needed
  • Steps to reproduce
  • Actual Result
  • Expected Result
  • Requirements/Acceptance Criteria that pertains to the issue
  • Is there a workaround?
  • Impact of the issue

Most of the answers that I read were very clear about making the report about the issue and not about the people. I totally agree with this; it should be about the facts.

  • what is the severity of the issue if we don't fix it?
  • is the workaround something that we can let our clients/customers use until we fix it or does it need to be fixed immediately?

Not a long answer, but I think keeping the report to the necessary information is key.

1

I think my previous answer also applies here.

In my opinion, the best possible bug report (assuming that the person that sends the error report does not know the causes of the observed failure) is a bug report with reproducibility information, so that the developers are capable of reproducing the problem.

In several bug tracking sites, for example MySQL, we find literally thousands of bug reports labelled as "can't reproduce" or "can't repeat". Either because the user does not know how a bug should be reported or because the user does not know how to reproduce the error, the issue is the same: reports are sent with little or no historical information on how the error was reached (e.g. a report containing solely the stack trace and/or a memory snapshot and/or a textual description of the event that is both superficial and cumbersome). Sometimes little information is sufficient, but many times it is not, especially in errors that are induced by very specific conditions that take the execution through a very specific path.

Therefore, in short (and in my opinion), a good bug report is one that has sufficient information for the developers to reproduce the observed problem. This is often hard to provide, especially if no logging was performed during the failing execution.

There has been a lot of research in this area in the past decades, but unfortunately, there is no solution that solves all problems. In my case, I believe that record&replay systems (a.k.a. fault-replication systems) are more promising, as they automatically log sources of non-determinism during the user's execution and if/when a failure occurs they create bug-reports capable of deterministic replay. Still, they have problems regarding performance overhead, log size and privacy.
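As a very small illustration of the record-and-replay idea, one common source of non-determinism (the random seed) can be logged at the start of a run so that a failure can be repeated exactly. This is only a sketch of the principle, not a fault-replication system:

```python
import logging
import random
from typing import Optional

logging.basicConfig(level=logging.INFO)


def seeded_run(seed: Optional[int] = None) -> int:
    """Run with a recorded seed; pass the logged seed back in to replay a failing run."""
    if seed is None:
        seed = random.SystemRandom().getrandbits(32)
    logging.info("Random seed for this run: %d (include it in the bug report)", seed)
    random.seed(seed)
    # ... application logic that depends on random choices would go here ...
    return random.randint(0, 100)


first = seeded_run()              # normal run: the seed is logged
replayed = seeded_run(seed=1234)  # replay: repeats whatever the run with seed 1234 did
```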


Regarding what makes a good bug report, you may find the following research papers to be interesting:


["What makes a good bug report?", IEEE Transactions on Software Engineering, T. Zimmermann, R. Premraj, N. Bettenburg, J. Sascha, A. Schroter, and C. Weiss]

Presents: i) a survey on how bug reports are used among 2,226 developers and reporters, out of which 466 responded; ii) empirical evidence for a mismatch between what developers expect and what reporters provide; iii) the CUEZILLA tool that measures the quality of bug reports and suggests how reporters could enhance their reports so that their problems get fixed sooner. Publicly available at: http://thomas-zimmermann.com/publications/files/bettenburg-fse-2008.pdf


["Debugging in the (very) large: ten years of implementation and experience", in Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles, K. Glerum, K. Kinshumann, S. Greenberg, G. Aul, V. Orgovan, G. Nichols, D. Grant, G. Loihle, and G. Hunt]

This is a paper from Microsoft regarding ten years of bug report analysis of the Windows Error Reporting System. You can find it publicly available at: http://research.microsoft.com/pubs/81176/sosp153-glerum-web.pdf


["Information needs in bug reports: improving cooperation between developers and users", in Proceedings of the 2010 ACM conference on Computer supported cooperative work, S. Breu, R. Premraj, J. Sillito, and T. Zimmermann]

A paper that analyses the questions asked in a sample of 600 bug reports from the Mozilla and Eclipse projects and also provides some suggestions to improve bug trackers. Link: http://people.ucalgary.ca/~sillito/work/cscw2010.pdf


["Who tested my software? testing as an organizationally cross-cutting activity", 2011 Software Quality Journal, M. Mantyl a, J. Iivonen, and J. Itkonen]

Examines testing activities in different industrial case companies, conducted not only by the specialized testers but also by multiple stakeholders. Link: http://lib.tkk.fi/Diss/2011/isbn9789526043395/article6.pdf


["Survey Reproduction of Defect Reporting in Industrial Software Development", 2011 International Symposium on Empirical Software Engineering and Measurement, Eero I. Laukkanen, Mika V. Mantyla]

Presents a survey of six industrial software development organizations about the defect report information (Seventy-four developers out of 142 completed the survey), from three viewpoints: concerning quality, usefulness and automation possibilities of the information. Link: https://courses.cs.ut.ee/MTAT.03.159/2015_spring/uploads/Main/SWT_bugrep2.pdf

0

Hope the answer below helps.

A bug report provided by QA testing services should be precise so that the developer can understand the issue at once. Therefore, the defect description should contain the following information:

a) Summary: The summary should be short and descriptive. It should be created in such a way that the bug base searching can point to the bug efficiently.

b) Pre-requisites: The pre-conditions/ requirements to reproduce the bug should be provided.

c) Steps to Reproduce: Step-wise description to reproduce the defect.

d) Actual Result: A short summary of what is actually happening. It should not be too detailed but should give a clear picture of the defect. It could be extended to 2-3 sentences to describe the observations.

e) Expected Result: The desired behaviour should be described in this section.

f) What's working: If any workaround is available for the defect, or the same functionality is working fine elsewhere in the application, it should be noted in this section to help the developer rectify the issue on the backend.

g) Screen-shot or Video: Attach/Upload the screenshots or video to reproduce and understand the defect.

h) Environment: OS, Browser, Test Application Version details where the defect is occurring

Below is the sample bug report:

Summary: TV is not getting powered on after pressing the 'Power' button from remote.

Pre-requisites: 1) Ensure that TV and remote are available.

Steps to Reproduce: 1) Switch on the TV. 2) Hold the remote. 3) Press 'Power (green)' button from remote. 4) Observe the behaviour.

Actual Result: Power screen flashes for some time and the black screen is displayed.

Expected Result: TV should get started and Home screen should be displayed after clicking the 'Power' button from remote.

What's working: TV starts when the 'Power' button on the TV itself is pressed (the button available on the border panel under the TV screen).

Environment: TV: Brand/Model

Source: Can you give me sample of bug report you have written?

0

I think you must have already got your answer, but if you use a project management tracking tool like JIRA or TestGear, you already have all of these fields ready to fill in. Beyond that, you need excellent communication skills to convey your bug to the team with minimum writing effort.

Good Luck

-1

Some great answers already, so just to add a little one:

In a team with any number of testers greater than one, develop a style guide for bug reporting and stick to it. This will reduce by at least 50% the chance of a coder trying to strangle one of your teammates.
