Visual Studio Team System “off-road” code coverage analysis and reporting experience
John Cunningham, Developer Lead, Diagnostics team, VSTS
What again?
This article is actually a follow-on to an article I wrote about collecting coverage data using some of the Team System coverage infrastructure pieces. You can read that article here. One of the common requests after reading that article is “I want to do some fun stuff with the data”. Although I’m particularly proud of the work our sibling team did to visualize coverage in the IDE (you guys know who you are), I understand the need to twist, pivot and make sense of the data for your specific scenarios. While we are considering these sorts of advanced features for next time around, this article talks about ways in which you can roll your own reporting and visualization from the infrastructure components today.
First things first!
I’m assuming you have collected some coverage data, and have the .coverage file and the instrumented executables and pdbs that were used to collect the data. That’s crucial for the analysis to work. If you don’t know how to do that, go back to the first article, linked above.
If we’re clear on that, we can now begin. I’m going to talk about a couple of ways in which we can manipulate and report on the data: using XML and XSLT, and programmatically using the coverage analysis assembly.
Pretty and shiny as fast as you can! - XML and XSLT
To get an XML representation of the coverage data in the .coverage file, the simplest way is to load that file into Visual Studio (File->Open->File works), and then use the export button on the coverage window toolbar to export the XML. If you open the XML in IE, one of the first things you’ll notice is that we emit the XSD with each of these files, but I’ll go over the schema here at a high level.
Coverage file
    Collection of assembly data
        Collection of namespace data
            Collection of class data
                Collection of method data
                    Collection of line data
    Collection of source filenames
So you can see the hierarchical data that we use to populate the coverage window, and the list of source file data which we use to index the files referred to in the lines section of each method, which is ultimately used to color in the source files in the editor. Each of the entities under assemblies has statistics for lines and blocks covered and not covered, as well as the percentages. The line data basically gives a “covered/not/partially” status, and a delimiting region defined by start line, start column, end line and end column, as well as a source file id.
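To make that concrete, here is a heavily trimmed sketch of what the exported XML looks like. The element names are the ones used by the stylesheet and dataset code later in this article; the nesting, the elements I have elided, and the sample values (taken from or derived from the report further down, apart from the hypothetical line numbers and file path) are my own reconstruction, so treat this as illustrative rather than authoritative.

<Module>
  <ModuleName>ClassLibrary1.dll</ModuleName>
  <BlocksCovered>420</BlocksCovered>
  <BlocksNotCovered>817</BlocksNotCovered>
  <!-- namespace, class and method entries nest below, each carrying the same statistics -->
    ...
    <Lines>
      <LnStart>24</LnStart>
      <ColStart>9</ColStart>
      <LnEnd>26</LnEnd>
      <ColEnd>10</ColEnd>
      <Coverage>1</Coverage>              <!-- the covered/not/partially status -->
      <SourceFileID>1</SourceFileID>
    </Lines>
    ...
</Module>
<SourceFileNames>
  <SourceFileID>1</SourceFileID>
  <SourceFileName>c:\work\ClassLibrary1\Class1.cs</SourceFileName>   <!-- hypothetical path -->
</SourceFileNames>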
I’m not going to do a deep XSLT class here. I already had to ask colleagues for some help, as it turns out I suck at XML/XSLT, but I’ll do a little to show a common task. Imagine I wanted to provide a quick report that could be used to illustrate the coverage generated from some unit tests after a checkin. The report would list the percentage of uncovered blocks for each assembly and color each entry based on some tolerance levels:
<30% Green
<60% Yellow
>60% Red
These are arbitrary for the sake of this article. You may want to consult some of the externally available papers on the correlation of block coverage to test quality.
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="low_coverage" select="60"/>
  <xsl:param name="ok_coverage" select="30"/>
  <xsl:template match="/">
    <html>
      <body>
        <h1>This is the coverage report</h1>
        <h2>Assembly coverage</h2>
        <table style="width:640;border:2px solid black;">
          <tr>
            <td style="font-weight:bold">Assembly</td>
            <td style="text-align:right;font-weight:bold">Blocks not covered</td>
            <td style="text-align:right;font-weight:bold">% blocks not covered</td>
          </tr>
          <xsl:for-each select="//Module">
            <tr>
              <td>
                <xsl:value-of select="ModuleName"/>
              </td>
              <td style="text-align:right">
                <xsl:value-of select="BlocksNotCovered"/>
              </td>
              <td>
                <xsl:variable name="pct" select="(BlocksNotCovered div (BlocksNotCovered + BlocksCovered))*100"/>
                <xsl:attribute name="style">
                  text-align:right;
                  <xsl:choose>
                    <xsl:when test="number($pct >= $low_coverage)">background-color:red;</xsl:when>
                    <xsl:when test="number($pct >= $ok_coverage)">background-color:yellow;</xsl:when>
                    <xsl:otherwise>background-color:green;</xsl:otherwise>
                  </xsl:choose>
                </xsl:attribute>
                <xsl:value-of select="$pct"/>
              </td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
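The article doesn’t prescribe how to apply the stylesheet; one straightforward way is a few lines of C# using System.Xml.Xsl (a minimal sketch, with the file names passed in as placeholders):

using System.Xml.Xsl;

class ApplyCoverageReport
{
    static void Main(string[] args)
    {
        // args[0] = exported coverage XML, args[1] = the stylesheet above, args[2] = output HTML
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load(args[1]);
        xslt.Transform(args[0], args[2]);
    }
}

Alternatively, adding an <?xml-stylesheet type="text/xsl" href="coveragereport.xslt"?> processing instruction to the exported XML should let a browser apply the transform directly.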
For a small coverage sample, combining the exported XML with this stylesheet results in the following output:
This is the coverage report
Assembly coverage
Assembly             Blocks not covered    % blocks not covered
ClassLibrary1.dll                   817       66.04688763136622
ClassLibrary2.dll                   327       43.77510040160642
MainDriver.exe                      220      26.442307692307693
So you can see that very little XSLT code can go a long way in extracting some interesting data from the large XML file. Other folks I know have created scripts that color the source code according to the XML data and present it in HTML for those members of the IT org who do not install Visual Studio. So go on, create some masterful ones; I’d love to hear what folks are doing with the data.
Down to the metal again - Using the analysis assembly
The analysis assembly in question is Microsoft.VisualStudio.Coverage.Analysis.dll, and it lives in PrivateAssemblies in your VS Team Suite/Developer/Test edition installation. The following section talks about the classes and data schema as they shipped in VS Team System 2005, and we can’t guarantee that they won’t change in future versions, or even in service packs.
The assembly has some key classes, and then an ability to create a strongly typed dataset object that is the analog of the XSD schema. The two key classes are CoverageInfoManager and CoverageInfo (both in the Microsoft.VisualStudio.CodeCoverage namespace). CoverageInfoManager is a singleton that is used to hold some state variables such as symbol and binary paths, and has a method to create a CoverageInfo object based on either a .coverage file, an existing XML file, or the join of two CoverageInfo objects. Let me show you a snippet that creates data from an existing .coverage file:
// Initialize global path vars
String CoveragePath = System.IO.Path.GetDirectoryName(args[0]);
CoverageInfoManager.ExePath = CoveragePath;
CoverageInfoManager.SymPath = CoveragePath;

CoverageInfo myCovInfo = CoverageInfoManager.CreateInfoFromFile(args[0]);
That call to CreateInfoFromFile throws a CoverageCreateException if you have not set the ExePath and SymPath. Now that we have a CoverageInfo object, the way we get meaningful data about the coverage is to build a dataset.
// The parameter needs to be set to null
CoverageDS myCovDS = myCovInfo.BuildDataSet(null);
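Putting those pieces together, a minimal sketch of the whole sequence, with a guard for the missing-path case, might look like this (the error handling is my own addition, and I’m assuming CoverageDS and CoverageCreateException live in the same Microsoft.VisualStudio.CodeCoverage namespace as the two classes above):

using System;
using Microsoft.VisualStudio.CodeCoverage;

class CoverageLoader
{
    static void Main(string[] args)
    {
        // args[0] is the path to a .coverage file; the instrumented binaries
        // and pdbs are assumed to sit in the same directory.
        String coveragePath = System.IO.Path.GetDirectoryName(args[0]);
        CoverageInfoManager.ExePath = coveragePath;
        CoverageInfoManager.SymPath = coveragePath;

        try
        {
            CoverageInfo covInfo = CoverageInfoManager.CreateInfoFromFile(args[0]);
            CoverageDS covDS = covInfo.BuildDataSet(null);
            Console.WriteLine("Loaded coverage data for {0} methods", covDS.Method.Rows.Count);
        }
        catch (CoverageCreateException ex)
        {
            // Typically means the binaries/symbols could not be found or did not match
            Console.Error.WriteLine("Could not analyze coverage: {0}", ex.Message);
        }
    }
}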
The dataset for a code coverage file maps directly to the XML produced above. It is strongly typed, so you can expect instances of typed row objects rather than opaque dataset rows, and its tables mirror the hierarchy described earlier: module, namespace, class, method, lines and source file names.
“That’s kinda cool JoC, but how do I use and manipulate the data in some useful way?”, you ask. Let’s take another abstract problem and use this dataset to mine some interesting data. Say I want to write a utility that focuses on a few methods which I know are complex and whose coverage I want to monitor closely. I may want to write a small text reporter that reports on those top 3 interesting methods.
We’d use the snippet below to get the method row entries corresponding to my methods, whether they were in different assemblies or not.
String[] InterestingMethods = { "ConstantlyBeingRegressed(int32)",
                                "ReallyComplexMethod(string)",
                                "OtherHardThing(int32)" };

// Build a filter list of the form ('method1','method2','method3')
StringBuilder sb = new StringBuilder("(");
foreach (String s in InterestingMethods)
{
    sb.Append("'");
    sb.Append(s);
    sb.Append("',");
}
sb.Replace(",", ")", sb.Length - 1, 1);   // swap the trailing comma for the closing paren
String searchmethods = sb.ToString();

CoverageDS.MethodRow[] mymethods = (CoverageDS.MethodRow[])
    myCovDS.Method.Select("MethodName IN " + searchmethods);
If there are methods with similar names in other assemblies, I’d need to filter this list by following the class key to the class entry, then following that class’s namespace key to the namespace entry, and possibly checking the module name in that entry until I find the correct differentiator.
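A rough sketch of that walk is below. Note that the table and column names I use here (Class, NamespaceTable, ClassKeyName, NamespaceKeyName, ModuleName) and the target assembly are my own guesses based on the XML schema, so check them against the XSD that ships with the exported XML.

foreach (CoverageDS.MethodRow mrow in mymethods)
{
    // Walk method -> class -> namespace to find the owning module.
    CoverageDS.ClassRow crow = (CoverageDS.ClassRow)
        myCovDS.Class.Select("ClassKeyName = '" + mrow.ClassKeyName + "'")[0];
    CoverageDS.NamespaceTableRow nrow = (CoverageDS.NamespaceTableRow)
        myCovDS.NamespaceTable.Select("NamespaceKeyName = '" + crow.NamespaceKeyName + "'")[0];

    if (nrow.ModuleName != "ClassLibrary1.dll")   // hypothetical target assembly
    {
        continue;   // a same-named method from an assembly we don't care about
    }
    // mrow belongs to the assembly we care about
}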
Now that I have that list, I want to push the data out, along with a simple calculation based on it.
foreach (CoverageDS.MethodRow mrow in mymethods)
{
    uint totalblocks = mrow.BlocksNotCovered + mrow.BlocksCovered;
    float covratio = (float)mrow.BlocksNotCovered / (float)totalblocks;
    Console.WriteLine("{0,-40} {1,8} {2,8:P}",
                      mrow.MethodFullName, mrow.BlocksNotCovered, covratio);
}
Finally I may want to actually dump the snippets of code that are not covered for each method if they fail some tolerance level. For this, we need to grab all the line entries that fit the “non-covered” and “part of this method” filter. With that list, we now need to grab the filenames so that we can read from those files, and spit out the chunks that we want to highlight as not covered.
foreach (CoverageDS.MethodRow mrow in mymethods)
{
    CoverageDS.LinesRow[] mylines = (CoverageDS.LinesRow[])
        myCovDS.Lines.Select("MethodKeyName = '" + mrow.MethodKeyName + "'");

    foreach (CoverageDS.LinesRow lrow in mylines)
    {
        if (lrow.Coverage == CoverageDS.NotCovered ||
            lrow.Coverage == CoverageDS.PartiallyCovered)
        {
            CoverageDS.SourceFileNamesRow sfn =
                myCovDS.SourceFileNames.FindBySourceFileID(lrow.SourceFileID);
            Console.WriteLine("{0,-40} {1,4} {2,4} {3,4} {4,4}",
                              sfn.SourceFileName, lrow.LnStart, lrow.ColStart,
                              lrow.LnEnd, lrow.ColEnd);
        }
    }
}
I’m going to leave actually reading the chunks out of the source files and spitting them out as an exercise for the reader, but you get the general idea.
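If you want a head start on that exercise, a minimal sketch that slots into the if block above (ignoring the column information and assuming the source files are still at the paths recorded in the coverage data) might be:

// Print the not-covered source lines themselves (line granularity only;
// ColStart/ColEnd are ignored here for simplicity).
string[] sourceLines = System.IO.File.ReadAllLines(sfn.SourceFileName);
for (int line = (int)lrow.LnStart; line <= (int)lrow.LnEnd; line++)
{
    // LnStart/LnEnd are 1-based, the array is 0-based
    Console.WriteLine("    {0,4}: {1}", line, sourceLines[line - 1]);
}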
We’d like to welcome you to CoverageTown
Hopefully this has illustrated enough of the surface area of code coverage data for folks to be able to mine and integrate the info with whatever process they happen to be using. Some code coverage reports are part of the final shipping version of Team Foundation, and those make it very easy to get a high-level picture of your test coverage. Driving off-road is very much about being able to tailor that to your specific need or scenario. Hopefully you’ve had some fun on this ride too.