

Are software metrics and measurements really important?

On a recent flight back from Boston to Seattle I decided to read Measuring the Software Process: A Practical Guide to Functional Measurements by David Garmus and David Herron. The book does a really good job of explaining function points: what they are, how to identify them, and how to use them for scheduling estimates, benchmarking, and process improvement. The use of function points hasn’t exactly proliferated throughout the industry, so the overall value of the book in terms of practicality must be judged on a case-by-case basis. Microsoft doesn’t use function points (at least not that I am aware of), so I read the book mainly for new insights and perspectives on measures in general, and to really understand the whole concept of function points. In all honesty, it was quite a dull read, but that is not to say I didn’t learn anything.
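For readers who haven’t worked with function points, here is a rough sketch of how an IFPUG-style count comes together. The complexity weights are the commonly published IFPUG values, but the component counts and the general system characteristics total below are invented purely for illustration.

```python
# Sketch of an IFPUG-style function point count.
# Weights per component type and complexity are the standard published values;
# all counts below are hypothetical.

WEIGHTS = {
    "external_input":     {"low": 3, "average": 4, "high": 6},
    "external_output":    {"low": 4, "average": 5, "high": 7},
    "external_inquiry":   {"low": 3, "average": 4, "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_fp(counts):
    """counts: {component_type: {complexity: number_of_components}}"""
    return sum(
        WEIGHTS[ctype][cplx] * n
        for ctype, per_cplx in counts.items()
        for cplx, n in per_cplx.items()
    )

# Hypothetical component counts for a small application.
counts = {
    "external_input":     {"low": 4, "average": 2, "high": 1},
    "external_output":    {"average": 3},
    "external_inquiry":   {"low": 2},
    "internal_file":      {"average": 2},
    "external_interface": {"low": 1},
}

ufp = unadjusted_fp(counts)

# The value adjustment factor comes from 14 general system characteristics,
# each rated 0-5; the total of 35 used here is assumed for the example.
vaf = 0.65 + 0.01 * 35
print(f"Unadjusted FP: {ufp}, Adjusted FP: {ufp * vaf:.1f}")
```

Once you have a count like this, productivity and quality measures (function points per staff-month, defects per function point) fall out fairly naturally, which is where the book spends most of its energy.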

In fact, I would say that the first two chapters of the book are excellent and perhaps well ahead of their time. The book was published in 1996, yet the first chapter discusses software as a business and the second discusses performance measurements. Below are some key points I took from the book (primarily those first two chapters).

The business case for metrics and measurements

“Developing software is more of a business or a process than an art form,… a business or process needs to be managed through the use of various control functions.” “The key to successful risk management is in the ability to measure.” “In order to be successful a rigorous and well-thought path to managing these [risk] issues must be continuously developed. At the heart of it all is the notion of having key business measures. Industry gurus have told us for years that we need to measure what we manage.” These excerpts from the book succinctly illustrate the importance of a measurement program from a business perspective. If our only measures are bug counts and “smiley face type” customer satisfaction surveys, how do we know if we are improving in critical areas of success that are important for the business or the customer? The metrics and measures that are critical for the success of a software project vary, so I won’t be so pretentious as to suggest one over another. However, I will say that without identifying critical “pain-points,” developing a formula to establish a baseline, and continuing with a long-term (at least five-year) plan to assess strategic changes using (reasonably) consistent measures, we will never know whether our processes are improving.
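To make that concrete, here is one possible way to establish a baseline for a single “pain-point” measure. I am assuming defect density (defects per KLOC) as the measure, and the release history is invented, so treat this strictly as a sketch rather than a prescription.

```python
# Sketch of establishing a baseline for one measure (defect density) and
# comparing a new release against it. All release data is hypothetical.
from statistics import mean, stdev

# Historical releases: (defects reported post-release, size in KLOC).
history = [(120, 85), (95, 70), (140, 110), (100, 90)]

densities = [defects / kloc for defects, kloc in history]
baseline, spread = mean(densities), stdev(densities)

# A new release is judged against the baseline, not in isolation.
new_density = 130 / 95
delta = (new_density - baseline) / baseline * 100
print(f"Baseline: {baseline:.2f} +/- {spread:.2f} defects/KLOC")
print(f"New release: {new_density:.2f} ({delta:+.1f}% vs. baseline)")
```

The point is less the arithmetic than the discipline: the same formula, applied consistently release after release, is what makes a strategic change visible.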

Why is measuring the software process so hard?

“As software professionals, we have not fully acknowledged the fact that developing software is a business unto itself that requires unique measures and monitors.” “…the ability to consistently and accurately quantify return on investment for software technologies does not exist at the present time.” So how do we really know if we are becoming more effective or efficient? Some companies use CMMI as a measure. But let me say this about the CMM. I spoke to Watts Humphrey a few years ago when he introduced TSP/PSP and asked him about the CMM with regard to measuring maturity levels, and he stated that the CMM was not created as a tool to measure an organization’s capabilities or abilities. The book Measuring the Software Process reiterates this, stating, “There are no quantitative measures directly associated with the SEI maturity model. In other words, there is no opportunity, based on the results of the assessment, to determine the quantitative value of moving from a Level 1 organization to a Level 2 organization.” Watts did say the primary driving force behind level measures for an organization was the military (which is understandable given the bureaucratic policies and political oversight that force falsification and artificial inflation of facts). We also need to learn that narrowly focused, short-term measures change behavior and produce biased results. The key to a successful measurement program is for an organization to identify the key performance indicators (KPIs) that are critical to the success of the business and to find a way to measure them effectively over the long term with minimal impact on the organization (measures should not be a distraction from the day-to-day work). Most importantly, we all agree the role of testing is to provide information, but we need to start providing quantifiable information rather than “best-guess” or “feel-good” estimations.
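As one example of what quantifiable information from a test team might look like, a measure such as defect removal efficiency (the share of all known defects found before release) can be computed from data most teams already track. The numbers below are made up; this is only a sketch of the idea.

```python
# Sketch of defect removal efficiency as a quantifiable test measure.
# Both counts are hypothetical; "field" defects are those reported within
# some fixed window after release.

found_in_test = 180
found_in_field = 20

dre = found_in_test / (found_in_test + found_in_field)
print(f"Defect removal efficiency: {dre:.0%}")  # prints 90%
```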

Are measures really important?

The conclusion from a case study in the book summarized the importance of measures and metrics very clearly: “… project productivity is increased as quality increases. In order to increase quality and productivity, weaknesses must be identified in the methods currently used and steps taken to strengthen these areas of our software development process. To accomplish this, factors must first be measured – the ones that influence productivity.” I think this speaks to some of the recent trends in the industry such as test-driven development (TDD). We all know that thoughtful design and unit testing are generally best practices (especially compared to simply writing a bunch of code and throwing it over the wall for testers to bang on in hopes of finding all the defects). But I doubt we can actually quantify that value in terms of cost or resource allocation. So, if we really want to know whether or not we are improving our effectiveness and efficiency, then we should spend some time understanding why measures are important and define critical metrics (from both a business and a customer standpoint).
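If we did want to quantify it, the bookkeeping the case study implies is not complicated. Here is a sketch that tracks productivity (function points delivered per staff-month) alongside delivered defect density per release; the release data is fabricated purely for illustration.

```python
# Sketch of tracking productivity and quality side by side across releases.
# All figures are hypothetical.

releases = [
    # (name, function_points_delivered, staff_months, delivered_defects)
    ("R1", 320, 40, 96),
    ("R2", 350, 38, 70),
    ("R3", 400, 39, 52),
]

for name, fp, effort, defects in releases:
    productivity = fp / effort       # FP delivered per staff-month
    defect_density = defects / fp    # delivered defects per FP
    print(f"{name}: {productivity:.1f} FP/staff-month, "
          f"{defect_density:.3f} defects/FP")
```

Tracked over enough releases, a series like this is what would let us say whether a change such as adopting TDD actually moved productivity and quality together, rather than relying on a feel-good assertion that it did.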

The true value of a testing effort is in its ability to accurately assess risk and product ‘quality’ (however you define quality). I wouldn’t pay a vendor to test a product if they couldn’t provide me with concrete evidence and empirical results on what was tested and how it was tested. Measuring software quality, productivity, and effectiveness are really hard problems. But I suspect that as long as we ignore these issues, remain unwilling to understand the value of metrics, or simply base decisions on short-term (biased) measures, the role of testing will continue to be viewed with skepticism and as little more than glorified bug hunting.
