Another metrics post
I’m in the middle of preparing a talk on effective use of metrics for an internal audience. I love to talk about metrics (this is where a “good” blogger would link to previous posts on the subject). I think my passion comes from the fact that I see so many huge mistakes made with metrics. Someone remind me to talk about bug metrics in a future post for a few prime examples.
In this presentation, I’m going to spend some time talking about metrics that are relevant and valuable in the early, middle, and late stages of product development. Early in the product cycle, you may want to measure things that help you determine whether the schedule is valid or whether customer needs are being met. In the middle of the project, things like code metrics, customer feedback from betas, or performance metrics may be important. In the late stages, teams tend to watch bug trends much more closely, and may look at reliability data or additional customer data to determine if it’s “ok to ship”.
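To make that a little more concrete, here’s a rough sketch of what I mean by phase-appropriate metrics. The phase names and the metrics themselves are made-up examples, not anything pulled from a real project:

# A rough illustration of phase-appropriate metrics (all names here are made up).
PHASE_METRICS = {
    "early":  ["schedule confidence", "customer requirements coverage"],
    "middle": ["code complexity", "beta feedback", "performance benchmarks"],
    "late":   ["bug trend", "reliability", "ship-readiness criteria"],
}

def metrics_for(phase):
    """Return the metrics worth watching in a given phase of the product cycle."""
    return PHASE_METRICS.get(phase, [])

for phase in ("early", "middle", "late"):
    print(phase + ": " + ", ".join(metrics_for(phase)))

The point isn’t the code, of course; it’s that the set of things you watch should change as the project moves along.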
While this all makes complete sense to me, what I see in practice is that teams generally determine their “ship criteria” metrics and then measure those same things throughout the entire product cycle. This, of course, usually means that none of their metrics are valid early on, and only some of them are valid halfway through the product cycle. Then they wonder why their product slips and ships with lower quality than expected.
Does it make sense to anyone else that you should be measuring different things at different times? Of course there will be some criteria you measure throughout the product cycle, but I think metrics have a lifecycle of their own, with some phasing in or out as appropriate. Am I out to lunch on this, or is this just another case of me stating the obvious?