All the wonderful colors of code coverage
I'd like to share my colors with you:
                                  R     G     B
Coverage Not Touched Area        230   128   165
Coverage Partially Touched Area  191   210   249
Coverage Touched Area            180   228   180
These are the colors I prefer for code coverage in Visual Studio, and if you've used the first Beta of Visual Studio 2005 they'll probably seem very familiar. Now, let's talk about what each of these colors does (and does not) mean. I have to admit that my choice for Coverage Touched Area is a bit "dangerous" since people often associate green with "everything okay".
Coverage Not Touched Area
This looks straightforward at first: there is code that hasn't been executed, as simple as that. So all we need to do is add test code that executes the uncovered code paths, right? Not necessarily. Some people will tell you that you need 100% code coverage, and that simply doesn't make sense. If you happen to reach 100% without any major obstacles, great. But in many projects you will hit scenarios that are not easily automatable, to the point where automation would be much more expensive than manual testing (so you hopefully won't do it). And it's even more likely that you'll hit an issue like this:
public void Foo()
{
    switch (_someInternalState)
    {
        case SomeInternalState.State1:
            // Do the right thing for state 1
            break;
        case SomeInternalState.State2:
            // Do the right thing for state 2
            break;
        case SomeInternalState.State3:
            // Do the right thing for state 3
            break;
        default:
            throw new Exception("The internal state is invalid.");
    }
}
Using the default case for raising an exception is a common pattern when there is no meaningful default behavior. If the component containing this code works as intended, there is no way for the caller to get the component into an invalid state in the first place. In the case of managed code you may be able to force the component into an invalid state, but it's not always that easy (no private reflection because of trust level, unmanaged code, the code in question is actually invoked by an external process, etc.). And even when it is possible, you need to ask yourself whether it's worth it. The alternative is a code review that verifies the default case does exactly what it's supposed to do in the rare event that it ever gets executed, and calling it a day.
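If you do decide to force the invalid state in managed code, a sketch along the following lines could do it. Note that Component is a made-up name for the class containing Foo(), the field name is taken from the snippet above, and private reflection requires sufficient trust (System.Reflection):

[TestMethod]
[ExpectedException(typeof(Exception))]
public void FooInvalidStateTest()
{
    Component component = new Component();

    // Force a state the public API would never allow (private reflection).
    typeof(Component)
        .GetField("_someInternalState", BindingFlags.Instance | BindingFlags.NonPublic)
        .SetValue(component, (SomeInternalState)999);

    // Should hit the default case and throw.
    component.Foo();
}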
Long story short, uncovered paths require your attention. They must be covered by additional automated or manual tests or through code review/inspection. If you do not create automated tests, be sure to document the results of manual test passes or code reviews. Finally, if it turns out that the uncovered code is dead code, remove it (thanks to source control there is no reason to keep code around that is not needed).
Coverage Partially Touched Area
This tends to cause some confusion. How can a single statement be only partially executed? Let's consider the following example:
[TestMethod]
public void FooTest()
{
    Assert.AreEqual("At least one parameter is not true.",
        Class1.Foo(true, false));
}
public static string Foo(bool b1, bool b2)
{
    if (b1 && b2)
        return "Both parameters are true.";
    return "At least one parameter is not true.";
}
While this might be somewhat intuitive since the && operator uses short-circuit evaluation, the real and only reason for that line of code being marked as partially touched is that not all of its corresponding IL instructions were executed. This can happen whenever the compiled code contains branch instructions, such as the IL equivalent of the if statement above. All of this implies an important fact if you want to understand code coverage with Visual Studio: code coverage is measured on the IL in the compiled assemblies, which is also the reason why it is possible to end up with uncovered blocks even though every line of code is covered according to the source editor.
The code coverage feature, however, doesn't know anything about the high-level language you are using. It could be C# but it could just as well be [insert favorite language here]. This causes an interesting side effect. The code coverage shown above and the IL shown below are from a debug build without optimization, giving us a coverage of 5 out of 7 blocks.
.method public hidebysig static string Foo(bool b1, bool b2) cil managed
{
    .maxstack 2
    .locals init (
        [0] string CS$1$0000,
        [1] bool CS$4$0001)
    L_0000: nop
    L_0001: ldarg.0
    L_0002: brfalse.s L_000a
    L_0004: ldarg.1
    L_0005: ldc.i4.0
    L_0006: ceq
    L_0008: br.s L_000b
    L_000a: ldc.i4.1
    L_000b: stloc.1
    L_000c: ldloc.1
    L_000d: brtrue.s L_0017
    L_000f: ldstr "Both parameters are true."
    L_0014: stloc.0
    L_0015: br.s L_001f
    L_0017: ldstr "At least one parameter is not true."
    L_001c: stloc.0
    L_001d: br.s L_001f
    L_001f: ldloc.0
    L_0020: ret
}
Now, people probably think of performance when they hear "optimization". But in this context you have to remember that optimization usually means generating code that is faster to execute rather than code that preserves the exact structure of the original source. That said, let's see what happens when we take the previous example and rerun it on a build with optimization turned on.
public static string Foo(bool b1, bool b2)
{
    if (b1 && b2)
        return "Both parameters are true.";
    return "At least one parameter is not true.";
}
All of a sudden the code coverage result is 3 out of 4 blocks covered, although we haven't changed either the product code or the test code. It turns out the compiler has rewritten the code in order to optimize it. Even though the generated code is completely equivalent to the original, it messes with our results and defies our expectations:
.method public hidebysig static string Foo(bool b1, bool b2) cil managed
{
    .maxstack 8
    L_0000: ldarg.0
    L_0001: brfalse.s L_000c
    L_0003: ldarg.1
    L_0004: brfalse.s L_000c
    L_0006: ldstr "Both parameters are true."
    L_000b: ret
    L_000c: ldstr "At least one parameter is not true."
    L_0011: ret
}
So, unoptimized builds may be more useful for getting code coverage data since their IL is closer to the original source than that of optimized builds. In other words, the same reasons that make unoptimized builds easier to debug also make their code coverage numbers easier to understand. In any case, if you have any partially touched areas, identify which parts exactly are not covered and then treat them like any other untouched area.
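For the Foo example above, for instance, the untouched part is the branch taken when both parameters are true, so a second test case along these lines would cover it:

[TestMethod]
public void FooBothTrueTest()
{
    Assert.AreEqual("Both parameters are true.",
        Class1.Foo(true, true));
}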
Coverage Touched Area
[TestMethod]
public void FooTest()
{
    Class1.Foo();
}
public static string Foo()
{
    return "This is a hardcoded string.";
}
Looking at the code above it probably seems painfully obvious what's going on. Method Foo() is fully covered, but since the test method doesn't do any verification, the only thing this tells us is that Foo() doesn't cause an unhandled exception. While that is important, it is probably not what was intended. The problem with real tests is that there are usually a couple of calls into the product and a couple of verification steps to check state and return values. Whether or not we've hit a certain code block is something code coverage can tell us. But making sure that we have verified everything that needs to be verified is something the machine can't do for us, which should be encouragement to double-check test code (ideally through code review) before trusting it.
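In the trivial example above the fix is straightforward: make the test verify the return value, for instance like this (the expected string is simply the one Foo() returns):

[TestMethod]
public void FooTest()
{
    // Same coverage as before, but now the result is actually verified.
    Assert.AreEqual("This is a hardcoded string.", Class1.Foo());
}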
This posting is provided "AS IS" with no warranties, and confers no rights.