Defective Bug Reports

The ‘quality’ of defect reports is something testers and managers talk about quite frequently. But quality is usually vaguely defined and is interpreted differently depending on the context and the perceptions of the people discussing it. We all have our own perspectives on what constitutes a high-quality defect report, but do we really understand the quality our customers (the developers we work with) expect in a bug report?

It’s funny that many testers talk about being advocates for the customer, yet seem to fail to consider their internal teammates as customers of specific testing deliverables such as defect reports. So, in order to give ‘quality’ some context, I asked 600 developers to identify their top 10 pain points regarding defect reports. The results are aggregated in the top 10 list below. By understanding these pain points we can reframe the discussion around specific customer issues, put the quality of defect reports in the correct context, and improve our customers’ satisfaction.

1. Incomplete repro steps

Over 95% of the respondents identified incomplete or inaccurate steps to reproduce as the most significant problem with defect reports. Defect reports are also submitted by people other than testers, but since this was the number one complaint among developers it indicates a serious problem. If developers cannot reproduce a defect accurately, the cost of that defect increases because the report bounces back and forth between the developer and the tester, wasting valuable time. Also, a defect that can’t be immediately reproduced will usually go to the bottom of the pile and may get overlooked (especially if testers aren't diligent in following up on the defects they report), or worse, become hidden or much harder to reproduce.
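
To make this concrete, here is a hypothetical example (the product, file, and numbers are all invented) of repro steps written the way developers are asking for them: numbered, one action per step, with the observed result and the repro rate stated:

    1. Launch the application and open File > Import.
    2. Select a CSV file larger than 2 MB (sample file attached to this report).
    3. Click Import, then click Cancel while the progress bar is still visible.
    Result: the application stops responding and must be ended from Task Manager.
    Repro rate: 5 of 5 attempts on the configuration listed in this report.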

2. Email-style discussions in the report

If a defect report is too verbose, the critical details the developer is looking for are lost in all the words! Defect reports are technical documents. They should contain enough factual information to get an issue resolved and briefly describe the customer impact. I would recommend that every tester (or anyone who works in software development) at least read a book or take a class on technical writing. A defect report should contain a brief (objective) description of the problem, the specific steps to reproduce the problem, the actual results, the expected results (see #7 below), and a customer impact statement. Some defect reports also include additional notes from troubleshooting, debug information, and so on.
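
Put together, a minimal skeleton for such a report might look something like this (the field names are just one possible convention, not a prescribed standard):

    Title: <one-line, objective summary of the problem>
    Steps to reproduce: <numbered steps, one action per step>
    Actual result: <what happened>
    Expected result: <what should have happened, and why (see #7 below)>
    Customer impact: <who hits this and how badly>
    Notes: <troubleshooting observations, debug information, attachments>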

3. Lack of detail and investigation

Defects manifest themselves in many ways. When a tester uncovers a defect it is their responsibility to troubleshoot the problem and ascertain other possible paths or ways a customer might encounter the defect. For example, if a defect is encountered when testing a web application running in Internet Explorer 7.0, does the same problem occur on Internet Explorer 6.0, or on Firefox? Troubleshooting helps to isolate (or at least narrow down) the probable cause, which in turn can effect a quicker fix by the development team.
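
As a hypothetical illustration, the investigation notes appended to such a report might read:

    Investigated on:
      Internet Explorer 7.0 / Windows XP SP2: reproduces
      Internet Explorer 6.0 / Windows XP SP2: reproduces
      Firefox 2.0 / Windows XP SP2: does not reproduce
    Observation: the problem appears specific to Internet Explorer, which
    suggests a browser-specific code path rather than a server-side fault.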

4. Bug morphing

Bug morphing occurs when the original defect is fixed, but during regression testing of the original defect the tester discovers another problem that the tester assumes is either associated with that bug or caused by the fix for the original problem. So, instead of creating a new bug report, the tester amends the existing report to describe a different problem. A defect regression only occurs if the exact same steps cause the exact same problem, or if the tester can prove the root causes of the two defects are identical at the code level.

5. Missing data files

This was a surprising issue. In some cases files or test data are mentioned in the report but not provided, and in other instances the files are linked to a share that is inaccessible. If you have a specific test file used to reproduce the defect, then provide that file to the development team.

6. Environment/configuration information

Occasionally, defects occur only under certain conditions or configurations. When testers encounter a defect they should not only attempt to reproduce the problem, but also attempt to reproduce it on a different machine running a different configuration. If the problem is reproducible on two or three different machines, the probability of the developer not being able to reproduce the defect is greatly reduced. In cases where the defect can only be reproduced on one machine, the tester must troubleshoot the problem to determine the differences between the systems. Veritest Analyzer 2.0, and the Filemon and RegMon tools from SysInternals.com, are great tools to help testers analyze differences between test systems.
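
For example, the environment section of a report might capture details like these (all values invented for illustration):

    OS: Windows XP Professional SP2 (32-bit)
    Browser: Internet Explorer 7.0.5730.13
    Application build: 3.2.0617.1 (daily build)
    Hardware: 1 GB RAM, single processor
    Reproduces on: 3 of 3 lab machines with this configuration;
    does not reproduce on a clean Windows Server 2003 image.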

7. Expected Results

Software testing is similar to scientific experimentation. Experimentation typically doesn't involve mixing chemicals willy-nilly, or doing something just to see what happens. Experiments are most often performed in controlled environments, use pre-formulated data or materials, and have predicted outcomes. Similarly, software testing is conducted in various controlled environments (unit, component, integration, system) using pre-formulated data (both valid and invalid), and one of the most important aspects of testing is to measure the actual outcome of a test against the expected results (and of course to report those results). Even in situations where documentation is lacking, testers should write a statement of the expected results and provide justification for those expectations. This can help the developer better understand the issue and possibly make a more informed decision on how to resolve the problem.
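
For example (a hypothetical defect, with the justification stated explicitly):

    Actual result: entering 2/29/2009 in the date field is accepted and saved.
    Expected result: the field should reject 2/29/2009 because 2009 is not a
    leap year; the spec is silent here, but every other date field in the
    product rejects calendar-invalid dates, so consistency argues for rejection.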

8. Duplicates

This is a difficult issue to address, and there is no way to eliminate it altogether in large software projects. However, it can perhaps be minimized by addressing some of the issues above. I have occasionally seen testers literally throw temper tantrums when their defect report was resolved as a duplicate of a report submitted after the original. There are several reasons why this might occur. Remember, defects manifest themselves in different ways. Developers are concerned with the root cause of the problem, not necessarily all the potential ways or paths to hit the same problem. If a tester discovers multiple paths to the same problem and records each as a separate defect, all but one will be resolved as duplicates. Also, if several defect reports are submitted on the same issue, it is likely the developer will use the report that is easiest to interpret and provides the most useful information. As a tester, I wouldn’t spend a great deal of time worrying about trivial issues such as this. (If you have a lot of dups, it is probably not a dev conspiracy against you.)

9. No Debug Information

It amazes me how many testers execute tests on a test platform that is incapable of capturing critical failures with a simple post-mortem debugger such as WinDbg. Obviously not all defects result in an access violation, but when an access violation or other type of exception occurs it is important that testers capture information that will help developers troubleshoot the problem. This is especially important for random or intermittent exceptions that are hard to reproduce. A debugger can be used to create a dump file, which is essentially a snapshot of the system’s memory at the time of the failure. Developers can use a dump file to help isolate the problem, even if the exception is hard to reproduce. I highly recommend that testers talk with their development team to determine exactly what types of information to include in defect reports when reporting access violations.
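
As one possible setup (assuming the Debugging Tools for Windows are installed; the path below is only an example), a tester can register WinDbg as the post-mortem debugger and then save a full memory dump when an exception fires:

    windbg -I                           (register WinDbg as the default post-mortem debugger)
    .dump /ma c:\dumps\app_crash.dmp    (debugger command, issued after the crash breaks in:
                                         write a complete dump file for the developer)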

10. Multiple bugs in one report

I was also a bit surprised by this complaint. I would think that testers would not group a lot of bugs together (because we all want higher bug counts), but I guess the developers are saying otherwise. I have usually seen this when testers perform ad hoc or exploratory testing and, through a series of steps, come across different bugs, but then report all the symptoms they encountered in one defect report. I think (hope) this problem is limited to less experienced testers.

These issues may or may not apply to your testing organization. But when I showed this list to several test managers at Microsoft, they were quite surprised. Those test managers were interested in giving their teams direction on writing higher-quality defect reports, but had never considered consulting the development team, and their proposals missed some of the pain points developers in their group were complaining about. While many consider testers the advocates of our “external” customers, perhaps it is our product team members (developers, managers, etc.) who are really the key customers of the tangible artifacts produced by the testing team.

Comments

  • Anonymous
    May 20, 2009
When we report a bug our hope is that the bug is fixed. But, of course, we know that isn’t always the case.

  • Anonymous
    June 02, 2009
    I agree that the product life cycle and organizational demands (or values) dictate how much time a person spends investigating certain bugs. But, I respectfully disagree with your statement that "Good testers write consistenly good bugs, excellent testers write the best bugs for the cicumstances [sic]." In fact, one of the hallmarks of 'excellent' professional testers is consistency!