Using Instrumented Builds to Analyze Testing and Find Dead Code

As part of my weekly duties since joining the Quality Tools team, I have been providing code coverage analysis to the team, ensuring that our automated tests cover a required percentage of our shipping code. This has been a fun exercise for me, allowing me both to gain a better understanding of what code coverage means from a quality standpoint and to flex my knowledge of PivotTables and charts in Excel to provide user-friendly results. Basic code coverage of automated tests (such as unit tests) can be collected by following the instructions on the Quality Tools blog at https://blogs.msdn.com/vstsqualitytools/archive/2005/06/08/426979.aspx. A deeper dive into the details of code coverage can be found at https://blogs.msdn.com/ms_joc/articles/406608.aspx.

For this particular exercise, rather than analyzing the coverage of automated tests, we opted for a different approach: instrument a build and collect coverage information as the testers perform ad-hoc/manual testing. The process, in a nutshell, looks like this (a short sketch of the commands involved follows the list):

· Build the binaries for the Application Under Test (AUT).

· Instrument the binaries that you wish to collect coverage on (using vsinstr -coverage).

· Create a Setup Package containing the instrumented assemblies.

· Each Tester installs the Setup Package and turns on data collection (using vsperfcmd).

· Each Tester performs a pass through the AUT.

· When vsperfcmd is shut down, a .coverage file is generated.

· Merge the separate .coverage files into a single result.

· Analyze the individual and merged .coverage files.
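
To make the middle steps concrete, here is a minimal sketch of a helper script built around those commands. In the process above, the vsinstr step happens once on the build machine and the vsperfcmd steps happen on each tester's machine; they are shown together here only for brevity. The binary name (MyApp.exe) and output file name are placeholders, and although /coverage, /start:coverage, /output, and /shutdown are the documented VSInstr/VSPerfCmd switches, double-check them against your own Visual Studio version:

    import subprocess

    # Placeholders: instrument whichever binaries you want coverage on, and have
    # each tester pick their own .coverage file name.
    binaries = ["MyApp.exe"]
    output = "tester1.coverage"

    # Instrument the binaries for coverage collection (vsinstr /coverage <binary>).
    for binary in binaries:
        subprocess.run(["vsinstr", "/coverage", binary], check=True)

    # Start the coverage monitor before launching the application
    # (vsperfcmd /start:coverage /output:<file>).
    subprocess.run(["vsperfcmd", "/start:coverage", "/output:" + output], check=True)

    input("Perform the manual pass through the AUT, then press Enter...")

    # Shutting the monitor down writes the .coverage file to disk.
    subprocess.run(["vsperfcmd", "/shutdown"], check=True)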

Now, once you’ve accomplished the work above, you’ve got yourself two important artifacts:

· One coverage file for each tester’s work.

· One coverage file that merges all testers' work.

These two can be used to answer some interesting questions about both the testing process and the application under test.

· What classes/methods were never covered?

o How can you tell? Look for classes/methods with 0% coverage in the merged result.

o What does this mean? This generally indicates either scenarios that aren’t being tested, or “dead code” that, while covered by the automated tests that were created when it was checked in, is no longer used by the application.

o What else can I tell from this? Look at what kinds of classes are getting 0% coverage. If many exception classes are uncovered, testers may not be exercising error conditions (for example, opening a file with invalid data from the application). You may also find that certain code paths require a particular application state (such as docked, minimized, or placed in the system tray) that no one put the application into.

· Is each tester testing the same things?

o How can you tell? Compare each tester's coverage of a class both with the other testers' coverage and with the merged result.

o What does this mean? If the merged result is noticeably higher than the best individual tester's coverage, the testers exercised different code. On the other hand, if every tester covers ~70% of a class and the merged result is also ~70%, then all of the testers were covering the same scenarios and code paths. (A small analysis sketch follows this list.)
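
To answer the two questions above without eyeballing every class, a small script helps. The sketch below assumes you have exported per-class block counts for each run to CSV files (one per tester plus one for the merged result); the file names and the Class/BlocksCovered/BlocksNotCovered column layout are illustrative assumptions, not something the coverage tools emit for you automatically:

    import csv

    def load_coverage(path):
        """Return {class name: percent of blocks covered} for one exported run."""
        percent = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                covered = int(row["BlocksCovered"])
                total = covered + int(row["BlocksNotCovered"])
                percent[row["Class"]] = 100.0 * covered / total if total else 0.0
        return percent

    testers = {name: load_coverage(name + ".csv") for name in ("tester1", "tester2", "tester3")}
    merged = load_coverage("merged.csv")

    # Question 1: classes never covered in the merged result
    # (candidate dead code or untested scenarios).
    for cls in sorted(c for c, pct in merged.items() if pct == 0.0):
        print("Never covered:", cls)

    # Question 2: did the testers exercise the same code? If the merged number is
    # noticeably higher than the best individual number, the testers covered
    # different code paths; if the numbers are about equal, the testers overlapped.
    for cls, merged_pct in sorted(merged.items()):
        best_individual = max(t.get(cls, 0.0) for t in testers.values())
        if merged_pct - best_individual > 10:  # the 10-point threshold is arbitrary
            print("%s: testers exercised different code (best individual %.0f%%, merged %.0f%%)"
                  % (cls, best_individual, merged_pct))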

Performing this exercise on our own group has already helped us identify over 700 blocks of dead code and several scenarios that weren't being fully tested, and has given us some insight into the similarities and differences between each tester's approach. These insights were useful on their own, but making this analysis a regular part of our process going forward will help us continuously improve how we test.
