Fast Today Or Fast Tomorrow - Your Choice
My trainer has me timing my morning workouts as one way to track my progress. One day I realized that I had rushed through my workout, paying more attention to beating my previous time than to my form. Needless to say, that workout wasn't very effective.
It wasn't until I was talking about it with my trainer that I realized that experience was just like many others I have had and have watched others have. What's the point of rushing to beat time if quality suffers as a result?
Back in the day when I was first learning Windows programming, I specifically hunted down books that explained how to use the wizards, because all those nitty-gritty details the wizards glossed over would only slow me down. Until of course I needed to do something the wizards didn't handle (which never seemed to take very long), at which point I was completely lost, mired in ropes of wizard-generated code I didn't understand.
I find that unit tests save me enormous amounts of time debugging my code and fixing regressions, even factoring in the cost of modifying the tests to keep pace with product changes. I'm not always good at explaining their benefits, however, and so I have seen people skip unit testing in order to get their code into production faster, only to spend double or triple the time they "saved" debugging that code once it's in production. And I have seen people spew out huge numbers of test cases which they then proceed to completely ignore, so that the number of test cases failing due to "test issues" (i.e., test case bugs) grows and grows and grows.
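To make the point concrete, here is a minimal sketch of the kind of regression-catching unit test the paragraph has in mind, using Python's standard `unittest` module. The `apply_discount` function and its rules are entirely hypothetical, invented for illustration; the point is that if a later change quietly drops the validation check, the test fails immediately rather than in production.

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical production function: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_rejects_invalid_percent(self):
        # If a regression removes the range check, this fails at test
        # time instead of surfacing as a production bug.
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    unittest.main()
```

The cost is a few minutes writing the tests and keeping them current; the payoff is that the "double or triple the time" spent debugging in production mostly never happens.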
One afternoon I was talking with a colleague who was rewriting some code he had written at four in the morning. "That's why I don't write code at four in the morning", I said. "But I have to get my work done", he replied. "So you don't have time to do it right the first time but you have time to do it over?" I asked. "Yes."
I may as well have not done my workout that morning, for all the good it didn't do me. The wizards got me to a skeleton app quickly but slowed me down soon thereafter. The people who skipped unit testing spent double the time fighting product bugs later, and the team that ignored their failing test cases spent that time debugging the tests themselves. My friend may as well have not written that code, seeing as how he had to write it all a second time.
Slow and steady wins the race. Doing it right the first time helps. Keeping it right thereafter helps too. All this "slowness" actually helps you go faster. At least that's what I've found. How about you?
*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great coding skills required.
Comments
- Anonymous
July 05, 2006
Michael,
I enjoyed your (two) posts - this one and "Speed Trap". My thoughts on the matter were a bit lengthy for a comment so I blogged about it.
Thanks for keeping things in perspective regarding pace, maintainability, design and testing...
Harris
http://hrboyceiii.blogspot.com/2006/07/hurry-upwait.html
- Anonymous
July 06, 2006
Hi - that was a neat perspective on "faster or better?" In a tester's life, we are often faced with such choices when building the test fx - do you just want to wrap it up or give it the TLC it deserves and take the extra EE pain. And I must confess, I have moved from the "faster" to the "better" group gradually.
I agree with the unit testing part and I suppose it is kinda universally agreed upon (at least in our div) that writing unit tests is really faster and better in the long run.
But I am faced with these choices in other matters...like say TDD. All the case studies that I have seen so far confess a 15-50% increase in development time, but also claim that their quality is 150-300% better. But the metrics used to quantify quality are so wild! It is then that the problem enters a difficult domain. How am I to convince myself or others that expending this extra effort initially will lead to benefits later, especially when there is no documented result that proves so? This often presents itself as a difficult choice to make, and the temptation is always to follow the faster route unless you are 120% sure that the other route is going to be better.
On a tangential note, I guess this is where most testers face the greatest amount of confusion. Out of all the metrics that we have - CC, BVTs, bug numbers, OGFs...how do you give one solid quantifier for quality? I would love to hear more about this from you.