Take One Down, Try It On, Ninety-nine More Possible Metrics On The Wall
Anutthara comments on my "Fast Today Or Fast Tomorrow" post:
How am I to convince myself or others that expending this extra effort initially will lead to benefits later esp when there is no documented result that proves so? This often presents itself as a difficult choice to make and the temptation is always to follow the faster route unless you are 120% sure that the other route is going to be better.
This is one question to which I have not found a good answer. It takes a willingness to believe that spending extra time now will save at least as much time later. When I first learned about unit testing, it seemed obvious to me that writing unit tests would in fact save more time than they cost - I could see how they would catch bugs I was then spending time chasing in the debugger. Similarly, when I first learned about TDD, I immediately saw how taking the tests I was already writing and writing them before I wrote any code wouldn't take any longer, might result in better tests, and likely would result in better code. And of course, in both cases, the first time they caught a stupid mistake I had made, any remaining questions disappeared like quicksilver...
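To make that test-first rhythm concrete, here is a minimal sketch using Python's built-in unittest module. The word_count function and its behavior are my own invented example, not anything from the post - the point is only the order of operations: the test exists before the code it exercises.

```python
import unittest

# Step 1: write the test first. When only this class exists, running it
# fails because word_count is not defined yet - that failing test is the
# starting point of the rhythm described above.
class WordCountTests(unittest.TestCase):
    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("fast today or fast tomorrow"), 5)

    def test_empty_string_has_zero_words(self):
        self.assertEqual(word_count(""), 0)

# Step 2: write just enough code to make both tests pass.
def word_count(text):
    return len(text.split())

if __name__ == "__main__":
    unittest.main()
```

Break word_count later - say, make it miscount the empty string - and the second test flags it the moment it runs, which is exactly the "stupid mistake" payoff described above, no debugger required.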
But it's not so obvious to everyone. I know one developer who simply prefers to test his code manually. When I started pair programming with Mario and Adrian they were skeptical about the value of unit tests and TDD but were willing to give them a try. Now that they've done both a bunch they're convinced. But I know other people who just aren't willing to give either a try at all - they don't see enough potential value to bother.
So my answer is "Give it an honest try for a while and see what happens for you". Some people will, some people won't.
Out of all the metrics that we have - CC [code coverage], BVTs, bug numbers, OGFs [Overall Good Feeling]...how do you give one solid quantifier for quality?
You don't! Or rather, you can, but I can't tell you what it is.
There is no one metric that will be best for everyone. Every team is in a different context. At Microsoft we work hard to ship good products, but minimizing the cost of subsequent service releases is just as important as getting a quality product out the door in the first place. It's hard to make forward progress on new features when your entire team is sidelined testing a hotfix! On the other hand, if you release a new version of your software every month, bug fixes can be rolled in alongside the new features and post-release servicing isn't much of a concern.
The way to determine which metric(s) to use is to ask the people who will be consuming the data what questions they are looking to answer - to find out what it is they want to know. Don't be surprised if they don't really know! The core question, I think, is "How do we know when our product is good enough to ship?" That's where I would start, anyway. Be prepared for a bit of discussion!
The set of metrics you gather is easily changed, so try one set for an iteration, learn what does and doesn't work, then adjust, react, and repeat. It won't take long to close in on the ones that are right for you.
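The bookkeeping for "try a set, then adjust" can be as small as a snapshot per iteration. This is purely an illustration - the metric names and numbers below are made up, not anything the post prescribes:

```python
# Hypothetical per-iteration snapshots of whichever metrics the team is trying.
iterations = [
    {"code_coverage": 0.62, "open_bugs": 41, "overall_good_feeling": 2},
    {"code_coverage": 0.68, "open_bugs": 35, "overall_good_feeling": 3},
]

def compare(previous, current):
    """Show how each metric moved so the team can decide what to keep or drop."""
    for name in current:
        delta = current[name] - previous.get(name, 0)
        print(f"{name}: {previous.get(name)} -> {current[name]} ({delta:+g})")

compare(iterations[0], iterations[1])
```

If a number never changes anyone's decision after a couple of iterations, that's a good sign it can be dropped from the set.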
*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great coding skills required.
Comments
- Anonymous
July 27, 2006
The best solution I've seen to that is to do a simple red/yellow/green chart of whatever metrics you are measuring. You can do the same for drill-down details. For example, for every chunk you are testing, red might mean "haven't started", yellow might mean "in progress" and/or "found issues", and green might mean "ready to go". Then when the chart is all green you can ship!
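Purely as an illustration of that traffic-light rollup (the area names and statuses below are invented, not from the comment), the "ship when it's all green" check is one line of code:

```python
# Hypothetical status board: one entry per chunk under test.
status_by_area = {
    "install": "green",
    "editor": "yellow",   # in progress / issues found
    "printing": "red",    # not started
}

def ready_to_ship(board):
    """The chart is 'all green' only when no area is still red or yellow."""
    return all(status == "green" for status in board.values())

print(ready_to_ship(status_by_area))  # False until every area turns green
```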