Software metrics primarily useful as negative indicators?

Dear Readers –

I was thinking about metrics, and it occurred to me that most of the metrics we commonly use in the industry are really good negative indicators of quality, efficiency, testing, etc., but lousy positive indicators.  That is, most software metrics are good at telling you when something is wrong with your project, but they give you little assurance that the project is actually on the right track.  (I’m sure someone else has already thought of this, but I figured I’d pass on my random thought nonetheless.)

Code coverage is a classic example.  Code coverage is really a measure of what you’re not testing: if I have 60% code coverage, then I know that 40% of my code is not being tested at all.  However, that 60% gives me no assurance that the testing of the covered code is actually any good, because any piece of complex code of course contains a vast number of distinct paths through it.
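
To make this concrete, here is a minimal Python sketch (the function and its numbers are invented for illustration): a single test achieves 100% line coverage while exercising only one of the function’s four branch combinations.

```python
# A hypothetical function with two independent branches, i.e. four
# possible paths through the code.
def shipping_cost(weight_kg, express):
    cost = 5.0
    if weight_kg > 10:   # branch A: heavy-item surcharge
        cost += 2.0
    if express:          # branch B: express doubles the price
        cost *= 2
    return cost

# This one test runs every line, so a line-coverage tool reports 100%...
assert shipping_cost(weight_kg=12, express=True) == 14.0

# ...yet the combinations (A only), (B only), and (neither) were never
# tested, so the coverage number says nothing about test quality.
```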

Bug numbers are another example.  A high rate of reported bugs is clearly a sign that something is amiss.  However, a low rate of reported bugs doesn’t necessarily mean you’re on track.  What if your QA team is off writing documents instead of testing the product?  In that case a low rate of reported bugs simply reflects a lack of testing activity.  Similarly, a very low rate of bug fixing is often (though not always) a sign that something is amiss.  However, a high rate of bug fixing is not comforting either; it may simply mean that devs are rushing their work in order to make the numbers look good.
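
As a rough sketch of treating bug counts as gauges rather than targets (all the numbers and thresholds below are made up), the idea is to flag anomalous activity instead of rewarding high counts:

```python
# Flag weeks where bug-report or bug-fix activity looks suspicious,
# rather than treating either count as a goal in itself.
weekly_stats = [
    # (bugs_reported, bugs_fixed)
    (42, 30),
    (38, 35),
    (2, 40),   # almost nothing reported, yet many "fixes" -- look closer
]

for week, (reported, fixed) in enumerate(weekly_stats, start=1):
    if reported < 5:
        print(f"week {week}: very few bugs reported -- is QA actually testing?")
    if fixed > 3 * max(reported, 1):
        print(f"week {week}: fixes far outpace reports -- rushed work?")
```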

Code complexity is another example.  High code complexity is generally a bad sign for your code’s maintainability (with the exception of a few specific patterns, such as a parsing or message-handling function).  However, low code complexity doesn’t mean your code is “good” in any other respect.  It could be utter trash that just happens to be broken up into small functions.
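
To illustrate, here is a rough simplification of McCabe’s cyclomatic complexity metric (counting branch points in a Python AST) applied to exactly the kind of message-handling function mentioned above; the high score says nothing bad about the code.

```python
import ast

# A crude approximation of cyclomatic complexity: one plus the number of
# branch points found in the source's AST.
def cyclomatic_complexity(source):
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, branch_nodes)
                   for node in ast.walk(ast.parse(source)))

# A message dispatcher: the metric calls it complex, yet its structure is
# about as easy to maintain as code gets.
dispatcher = """
def handle(msg):
    if msg == "OPEN": return open_doc()
    elif msg == "CLOSE": return close_doc()
    elif msg == "SAVE": return save_doc()
    elif msg == "PRINT": return print_doc()
"""
print(cyclomatic_complexity(dispatcher))  # 5 -- a high score for simple code
```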

Conformance of work estimates to actual time spent is another one.  If the actual time spent on a task is far longer (or shorter) than the estimate, that tells you your estimation process is not very accurate.  However, if someone’s actual time spent always jibes closely with their estimates, that’s not very comforting either, unless you’re absolutely, positively sure that no other aspect of their work has been compromised as a result (i.e. there’s no “distortion,” as Robert D. Austin calls it in his excellent book Measuring and Managing Performance in Organizations, which I highly recommend to anyone interested in metrics).
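
A tiny sketch of using estimate conformance the same way (the task names and thresholds are invented): large misses in either direction flag an estimation problem, while a perfect ratio is deliberately not read as proof that all is well.

```python
tasks = [
    # (task, estimated_hours, actual_hours)
    ("login page", 8, 20),
    ("db schema", 16, 15),
    ("reporting", 40, 40),   # exactly on estimate -- padded? rushed? unknown
]

# Flag big estimation misses; note that "reporting" passes silently even
# though a suspiciously exact match proves nothing by itself.
for name, estimated, actual in tasks:
    ratio = actual / estimated
    if ratio > 1.5 or ratio < 0.67:
        print(f"{name}: actual is {ratio:.1f}x the estimate -- revisit estimation")
```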

By these comments I don’t mean to bash these metrics – they are very useful as a way of identifying potential problems and fixing them.  But they have to be viewed as specialized indicators, not numbers to be mindlessly met.  Most software metrics make great gauges but lousy controls.

Over & out!

Chris

Comments

  • Anonymous
    February 14, 2006
    It's better to find out as early as possible that you're off track than to hear it from the customer once you've gone off it.

    I see metrics as predictive tools. Customers will hopefully start making measurable quality requirements one day, and when that day comes we have to be able to predict quality throughout development. The earlier we get a measurement program going, the better we can tell a customer how much their required quality will cost.

    (Btw... my blog is about software quality & metrics, but unfortunately it's in Finnish.)