How to make sense of Code Coverage metrics

(co-authored with Brad Wilson)

Ever since the release of Visual Studio Team System 2005, with its easy-to-use code coverage tools, people inside (and outside) of Microsoft have been talking about what the numbers mean. The most common approach we've seen is for a team leader to mandate a minimum code coverage number (like "all code must have at least 80% code coverage before being checked in"). As agile advocates, you'd expect that our team's TDD'd code would have 100% code coverage. And you'd be wrong.

We can all agree that test-covered code is better than untested code, so we probably also agree that a high coverage number is better than a low one. Why, then, can't we come up with a hard and fast number that means "good"? The answer is that it's different for every project, and even for a single project that number may change over time. Some code will inevitably be written that isn't covered by a unit test. A few examples of acceptable code without tests might include: web service wrappers generated by Visual Studio, views in a Model-View-Presenter system, and code that can only fail if the underlying platform fails (like helper methods that pass default values into more complex .NET CLR methods).
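
To make that last example concrete, here is a hypothetical helper of the kind we mean; the class and method names are ours, invented purely for illustration, not taken from any real project.

    // Hypothetical example: a thin wrapper whose only job is to supply a
    // default argument to an existing .NET framework method. If this code
    // breaks, the platform underneath it has broken; a dedicated unit test
    // would add little beyond what the framework already guarantees.
    public static class FileHelper
    {
        public static string[] ReadAllLinesUtf8(string path)
        {
            // System.IO.File.ReadAllLines carries the real behavior;
            // the wrapper merely pins the encoding the application uses everywhere.
            return System.IO.File.ReadAllLines(path, System.Text.Encoding.UTF8);
        }
    }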

So when you run that code coverage tool for the first time and it pops up "73%", what next? Well, on that first run you're probably going to check out the other 27% and see how you categorize it. Does that code need coverage or not? If it does, use the lack of coverage to educate yourself about taking smaller steps in TDD. But if it doesn't, what then? Is 73% a magical number?

The answer lies in a metric that is used to predict the weather: barometric pressure. Today, weather.com says the barometric pressure in Seattle is 29.66 inHg and falling. More important than the absolute measurement is the trend: it tells you that the overcast sky and drizzling rain outside are going to get worse, not better. The two pieces of information - value and trend - are used together to predict what will happen next. The same can be said for code coverage. You can judge the relative health of your tests by using the value and the trend together to decide on the appropriate action. That's the important bit: code coverage gives you relative measurements against itself, not an absolute measurement against a target value. If your coverage fell from 73% to 72%, would you be worried? What if it fell from 73% to 67%?
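
As a sketch of that idea, here is what acting on the trend rather than the absolute value might look like. The method name and the thresholds below are ours, chosen only to illustrate the point, not a recommendation from any tool.

    // Minimal sketch, assuming coverage is reported as a fraction (0.73 = 73%).
    // The thresholds are illustrative, not magic numbers to adopt.
    public static class CoverageTrend
    {
        public static string Assess(double previousCoverage, double currentCoverage)
        {
            double delta = currentCoverage - previousCoverage;

            if (delta < -0.05)   // fell more than five percentage points
                return "Sharp drop - stop and find out which new code went in untested.";
            if (delta < 0.0)
                return "Slight drop - worth a look at the newly uncovered code.";
            return "Holding steady or rising - carry on.";
        }
    }

With the numbers from the paragraph above, Assess(0.73, 0.72) suggests a quick look, while Assess(0.73, 0.67) says to stop and investigate.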

Rather than holding your team to an absolute goal, use the trend to determine when you should invest time to figure out where you're covered and where you're not, and what to do about it.

Comments

  • Anonymous
    February 26, 2007
    Jim Newkirk and Brad Wilson have an interesting blog post on code coverage and what it means. They have

  • Anonymous
    February 26, 2007
    These are good examples of my fundamental rule that metrics must be interpreted within context. You can read more at http://blog.panopticode.org/articles/2007/02/26/metrics-must-be-interpreted-in-context

  • Anonymous
    February 27, 2007
    That's why I like mock frameworks (http://msmvps.com/blogs/paulomorgado/archive/2007/02/17/unit-testing-and-mock-frameworks.aspx). I can get near 100% code coverage (I want an absolute 100%, but Visual Studio says 98% and can't paint any uncovered code). I usually mock my own internal members to test my public members.

  • Anonymous
    March 02, 2007
    Very true, code coverage only reveals what is executed, not what is actually verified, http://flickr.com/photos/niallkennedy/330227455/. What we really want are unit test analysis tools: tools which analyze unit tests to determine how effective they are at detecting changes to the code under test. Jester is an example (http://jester.sourceforge.net/), which makes changes to the code under test and then runs the unit tests to determine if the change is detected. This area is still fairly green, but certainly has a lot of potential.

  • Anonymous
    March 06, 2007
    James Newkirk has an interesting post on code coverage metrics. It's impossible to use a hard code coverage

  • Anonymous
    December 03, 2008
    Still real tired from my Oklahoma trip, partying with Raymond sure is exhausting :-). Agile/Development Tools On my short list for some time now, is to switch from NUnit to the definitely superior MbUnit. My friend Andrew has done some great work with

  • Anonymous
    February 05, 2009
    [Nacsa Sándor, January 19 – February 5, 2009] This Team System edition offers an advanced set of tools

  • Anonymous
    February 05, 2009
    [Nacsa Sándor, February 6, 2009] This Team System edition, for the testing of web applications and services