KPIs and Team Foundation Server

I joined Microsoft in 1998 as a software design engineer in test, otherwise known as a tester. I worked on the Visual J++, Visual C++, and what eventually became the Visual C# teams. In all of those teams, we did a lot of scenario-based testing. For a time, I was responsible for a scenario that involved building a web application with Visual C++.

Because the testing was scenario-based, it was hard to assign a pure pass/fail result to it. Instead, we used something that we called an OGF (overall goodness factor). This was a rating that a tester would assign to the scenario based on a number of factors. When the scenario didn’t work at all, we obviously gave it a ‘poor’, but sometimes, when 80% was working, or when 100% was working albeit with poor performance, we might give it a ‘fair’ or a ‘good’. In any case, we all used a fair bit of our own judgement when evaluating these scenarios.

The report that we would produce highlighted each scenario in green if it was good, yellow if it was fair and red if it was poor.  This gave everyone a very clear idea of where we stood – without having to understand all of the factors each scenario owner considered before making their judgement.

Something that is analogous to an OGF in the business world is a KPI, or key performance indicator. KPIs are really useful for presenting complex data in an easy-to-read way. They also usually provide the logic for implementing the kind of ‘gut feel’ that we used to apply in testing. The canonical example of using KPIs is a summary of sales activity, something like the following:

Sales Summary

A shape – in this case a green circle – is used to indicate the status of a given metric; another shape – in this case an arrow – is used to indicate the trend of the metric (increasing, decreasing, etc.).

As it turns out, because Team Foundation Server uses SQL Server 2005 as its data store, it’s pretty easy to create a KPI based on the data you collect about your software development.  I have a video below that illustrates how one might do that.
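
For those who would rather read than watch: in SQL Server 2005 Analysis Services, a KPI is essentially four MDX expressions defined against a cube: a Value, a Goal, a Status (conventionally a number between -1 and 1), and a Trend (same range). The sketch below shows the general shape of what goes into the KPI designer in Business Intelligence Development Studio. The measure name, goal, and thresholds here are placeholders for illustration, not the actual names from the TFS warehouse.

```
-- The four expressions that make up an Analysis Services 2005 KPI.
-- (Placeholder names; substitute the measures from your own cube.)

-- Value: the number the KPI reports.
[Measures].[Some Measure]

-- Goal: the target you would like the value to hit.
0.5

-- Status: how the value compares to the goal, scaled to -1..1.
Case
    When KpiValue( "My KPI" ) <= KpiGoal( "My KPI" ) Then 1
    Else -1
End

-- Trend: how the value is moving over time, also scaled to -1..1
-- (a real trend expression for this example appears later in the post).
0
```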

The example in my video was inspired by a Forrester study from 2004 titled “Successful Change Management Requires Focus on Key Metrics”. In that study (which I can’t post here for copyright reasons), a number of metrics are suggested. One of them measures how many change requests fail – for example, a requirement that is rejected by the customer, a bug that wasn’t properly fixed, and so on.

In the sample video below, I am trying to create a KPI that is similar to that metric. What I am looking at is the average number of state changes per work item. The idea is that if a work item changes state too many times (e.g. active -> fixed -> rejected -> fixed -> rejected, and so on), then you can reasonably say that something is wrong.
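
As a rough sketch of the Value expression: assuming the TFS cube exposes a count of state changes and a count of work items as measures (the names below are placeholders I made up, not necessarily what the warehouse actually calls them), the average is just a ratio, with a guard for the empty case:

```
-- KPI Value expression: average number of state changes per work item.
-- [State Change Count] and [Work Item Count] are placeholder measure names.
IIF(
    [Measures].[Work Item Count] = 0,
    NULL,
    [Measures].[State Change Count] / [Measures].[Work Item Count]
)
```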

My KPI calculates the average number of state changes per work item and compares it to a goal to determine the status of the KPI. It also compares the current average against the value from the previous week to determine a trend.
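
The status half of that is just a comparison between the KPI’s value and its goal (the trend half appears a bit further down). Something along these lines would work, where the goal of three state changes per work item and the 1.5x threshold are numbers I picked purely for illustration, and "State Changes Per Work Item" is whatever you named the KPI:

```
-- KPI Goal expression: an illustrative target of 3 state changes per work item.
3

-- KPI Status expression: bucket the value against the goal into 1, 0, or -1.
Case
    When KpiValue( "State Changes Per Work Item" ) <=
         KpiGoal( "State Changes Per Work Item" )
        Then 1      -- at or under goal
    When KpiValue( "State Changes Per Work Item" ) <=
         KpiGoal( "State Changes Per Work Item" ) * 1.5
        Then 0      -- a little over goal
    Else -1         -- well over goal
End
```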

As you’ll see from the video, there is very little logic required to implement something like this – SQL Server 2005 and the TFS Warehouse take care of almost everything for you. If you have a chance, please check the video out and let me know what you think.

What I particularly liked about doing this was the built-in support SQL Server 2005 has for calculating what happened last week (or last day, year, fiscal year, etc.) in order to determine a trend.
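
Specifically, MDX’s ParallelPeriod function will walk back one week (or day, month, year, and so on) along the time dimension for you. Assuming the cube has a date dimension with a Week level (the [Date].[Calendar] names below are placeholders for whatever the TFS cube actually uses), a Trend expression that compares this week’s average against last week’s might look roughly like this:

```
-- KPI Trend expression: compare this week's value against last week's.
-- [Date].[Calendar].[Week] is a placeholder hierarchy/level name.
Case
    When IsEmpty( ParallelPeriod( [Date].[Calendar].[Week], 1,
                                  [Date].[Calendar].CurrentMember ) )
        Then 0      -- no prior week to compare against
    When KpiValue( "State Changes Per Work Item" ) <
         ( KpiValue( "State Changes Per Work Item" ),
           ParallelPeriod( [Date].[Calendar].[Week], 1,
                           [Date].[Calendar].CurrentMember ) )
        Then 1      -- fewer state changes per work item than last week
    When KpiValue( "State Changes Per Work Item" ) =
         ( KpiValue( "State Changes Per Work Item" ),
           ParallelPeriod( [Date].[Calendar].[Week], 1,
                           [Date].[Calendar].CurrentMember ) )
        Then 0      -- flat
    Else -1         -- more state changes per work item than last week
End
```

Once the KPI is defined, the KpiValue, KpiGoal, KpiStatus, and KpiTrend functions let you pull the same numbers back out in any MDX query or report.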

The trend of a KPI would also be useful if you are trying to determine whether a test is running faster than before, or whether a load test performance counter has a ‘better’ value now than it did previously. I’ll probably follow this post up with some examples of implementing KPIs like that – if anyone has any other suggestions, please let me know.

Thanks!

Eric
