Let's Go Bust Some Silos!
Who plans the tests for your product? Who writes them? Who executes them?
On some product teams, developers write product code. That's it. If you're lucky, they even compile the code they write. Actually *launching* the application - let alone determining whether things work the way they should - these developers see as Someone Else's Responsibility.
On many product teams, developers write product code and then they put that code through whatever paces they think necessary in order to be reasonably certain that it does what they think it should do. Generally these paces are much less comprehensive than their testers would like.
On a few product teams, developers write product code and they also execute bunches of tests. Many of these tests are likely automated unit tests. Some of these tests are manual exploratory sessions. Regardless of what form any particular test takes, these developers aim to eliminate every last could-find-it-simply-by-following-a-checklist, don't-want-to-waste-my-tester's-time-finding-this defect.
I've heard about the first type of developer; I've never actually seen one. (Thank goodness!) I've worked with many developers of the second type. I haven't yet found a developer of the third type, although I've worked with a few who come close.
On some product teams, testers test. That's it. If you're lucky, they ask a buddy to review their tests before they execute them. Their tests are based on specifications which bear little resemblance to the actual product. Their days are largely spent complaining about their developers, who, although rarely seen, are "obviously complete idiots since the daily builds are always junk of the highest order!"
On many product teams, testers design their tests long before there is any code to run them against. They review their tests with other testers and also with their developers. Once they start executing their tests, they find that some of the tests are no longer relevant, other tests require rework, and multitudes of new tests are necessary.
On a few product teams, testers spend time with their developers building a model of how the code works. They plan classes of tests and areas of focus rather than delineating multitudes of individual test cases. They work in a tight spin loop of plan-execute-review, continuously feeding what they learned during the current loop into the next one. These testers look for checklist bugs as part of their larger focus on integration and system-level defects. Many of their tests are likely automated. Many others are likely manual exploratory sessions. Regardless of what form any particular test takes, these testers aim to find the most important issues first.
I've known testers of the first type. Much of my experience has been with testers of the second type. I know a few testers of the third type; they are incredibly effective and much in demand.
I characterize the first type of developers and testers as Doers. They are constantly Doing and always seem busy. Their efficacy, however, is not nearly so high as their busyness might seem to indicate.
I characterize the second type of developers and testers as Thinkers. They have discovered that whatever time they spend thinking will be more than paid back by greater efficiency and efficacy once they move on to doing. Unless of course they never make the transition and instead become mired in Analysis Paralysis!
I characterize the third type of developers and testers as Learners. They spend lots of time thinking, and they spend lots of time doing. They want to always be learning. The moment they stop learning - information about the product, information about writing or testing code, information about working with their counterparts in other disciplines - they stop and make adjustments before continuing. Losing their focus on learning information that adds value to their team and product is the main bugaboo for which they must keep watch.
One habit all of these types share is a tendency to think in silos. Developers write product code, and possibly some quantity of tests. Testers write tests, possibly test tools, and never product code. Have you ever considered whether another arrangement might work better?
What would happen if your feature team sat down together and planned all of the work for the milestone: the product code that needs to be written and the test missions that need to be executed? And then you divvied out the work in whatever way makes sense? Maybe you have a tester who can write GUI code, a task all of your developers despise. Maybe some of your tests could easily be automated at the unit level. Maybe some of your unit tests require specialized knowledge which one of your testers happens to have.
What would happen if we stopped putting people in silos and instead thought of our feature teams as groups of people, each of whom has a set of skills? One person knows a lot about writing code which is highly scalable. Another person enjoys writing user interface glue code. Another person designs award-winning GUIs. Another person is expert at testing security. Another person is highly skilled at finding every case the developer forgot to handle. Maybe this is all the same person. Maybe it's five people. Maybe it's fifty.
This is chock-full of unknowns, I know. I'm not saying any of this would actually work. I'm asking you to consider it, think about it. If you try it - in full or just one part - please let me know how it goes!
Comments
Anonymous
April 04, 2007
This is a great post and something that I've proposed at my company. We are a small team where silos make even less sense than on bigger teams. So far it hasn't had any traction. People have their comfort zones. The devs have started to write more tests, but only in certain parts of the release cycle when they feel they have time. I would love to hear of others who have made this happen.
Anonymous
April 05, 2007
Going against the grain is scary but sometimes vital in our industry. It takes courage, and always involves risk. But calculated risk isn't always a bad thing. I'm not a fan of blindly adhering to "established best practices," "the Next Big Methodology (TM)," or established team models. Just because something is a best practice at Acme Corporation doesn't mean it's a best practice at Real World Inc. The business model is different, the staffing patterns are different, the problem domains are different, everything is different. You have to be willing, at some point, to accept the idea that a best practice for them isn't necessarily a best practice for you. Be flexible. Do something different. Experiment. Find out what actually works, and makes you more effective. Make the leap. At the end of it all, if you find out that it didn't work, you'll at least come out of the experiment knowing something that you didn't know before. And the acquisition of knowledge is never a wasted effort.
April 06, 2007
The comment has been removedAnonymous
April 07, 2007
What's more important, creating features or testing features? Different people may give you different answers, and the answer you would probably get if you asked lies somewhere around "it depends". The answer you want is "neither - it's quality that's important". Sorry - it's Saturday and I don't know if that makes sense. The shorter and more direct version of my comment is that a non-silo'd approach can work as long as everyone gets what quality is. In fact, this sort of org, more than any other, would benefit from a dedicated QA person ("real" QA - not testers that we call QA). Such a person, reporting in parallel to the engineering manager, could do a lot to make sure the right things were being done by the entire team. Time to get more coffee and stop babbling.