Testing In The Wild
Recently I read Edwin Hutchins's Cognition in the Wild. Hutchins researches human cognition. Rather than setting individuals a purpose-built task in a research lab, he observed navigation teams on U.S. Navy ships. He was interested in how groups think and in how the environment affects the way a person thinks. He believes that human cognition does not take place only within a person's brain; rather, the tools and other items a person uses are an integral part of their thinking process.
Navigation is an excellent example of this. The process of plotting a ship's position requires multiple steps, starting with taking bearings on multiple landmarks and ending with a triangle describing the ship's location plotted on a chart. Multiple tools are involved in this process, each of which embodies computations of varying complexity. Taking a bearing, for example, means using a device called a pelorus to sight the landmark and reading the corresponding compass point off the pelorus's gyrocompass repeater. The gyrocompass is itself a tool, one that adjusts the ship's relative bearing for magnetic and other variances. Each subsequent step makes use of its own set of custom tools.
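To see just how much computation such tools embody, consider the final step: turning bearings into a position. On a flat chart the ship sits at the intersection of the lines of position running from each sighted landmark along the reciprocal of its bearing. Here is a minimal sketch of that calculation (my own illustration, not Hutchins's; the flat-chart geometry and all the names are assumptions):

```python
import math

def plot_fix(landmark1, bearing1, landmark2, bearing2):
    """Intersect two lines of position on a flat chart.

    Each landmark is an (x, y) chart position; each bearing is the true
    bearing (degrees clockwise from north) taken on that landmark from
    the ship. The ship lies along the reciprocal of each bearing, so the
    fix is where the two lines of position cross.
    """
    # Unit vectors pointing from the ship toward each landmark.
    u1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    u2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))

    # Solve landmark1 - t1*u1 == landmark2 - t2*u2 for t1 (a 2x2 system).
    dx = landmark1[0] - landmark2[0]
    dy = landmark1[1] - landmark2[1]
    det = -u1[0] * u2[1] + u1[1] * u2[0]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no usable fix")
    t1 = (-dx * u2[1] + dy * u2[0]) / det
    return (landmark1[0] - t1 * u1[0], landmark1[1] - t1 * u1[1])

# A lighthouse sighted due north and a tower sighted due east
# place the ship at the origin of the chart.
print(plot_fix((0.0, 5.0), 0.0, (5.0, 0.0), 90.0))
```

In practice a third bearing is taken and the three lines form the small triangle mentioned above; its size is an immediate, visible check on the quality of the fix - none of which the pelorus operator needs to know.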
Many of these tools are the latest in a long line of increasingly complex tools, each embodying ever more of the calculations. The development of these tools goes back centuries. All of them share at least one quality: the person using them need not know any of the theory behind them.
When the ship is on the open sea the navigation process is typically done by a single person. In tighter quarters, however, a team of navigators works together. This team is composed of people with varying amounts of experience in navigation, from the pelorus operator, who tends to have the least experience, up through the fix plotter. This aligns with the growing complexity of the operations involved: the pelorus operator need only know how to sight a landmark, whereas plotting a fix requires more knowledge. This alignment means that each member of the team knows exactly how the data from the preceding steps was created, which helps them both notice when the data is suspect and identify what the problem might be.
The organization of the team directly supports the team's cognition in multiple ways. For one, a brand new sailor can be dropped into the team, given a short training session on sighting landmarks, and be immediately productive. As they carry out their duties they will overhear other members of the team talking and start to pick out additional parts of the process. For another, the organization makes clear the order in which data is gathered and transformed, and errors in this process tend to be noticed quickly. For example, in the normal course of things the pelorus operator will never talk directly to the fix plotter; if they do, something is likely amiss. In Hutchins's words, "The task world is constructed in such a way that the socially and conversationally appropriate thing to do given the tools at hand is also the computationally correct thing to do. That is, one can be functioning well before one knows what one is doing, and one can discover what one is doing in the course of doing it."
Reading this I suddenly realized one reason why scripted testing is so popular in certain realms (e.g., with management types, especially those who don't know much about testing): scripted testing allows a person who knows nothing about testing to be immediately productive. All they have to do is execute the scripts and report their results and lo - testing is complete! Setting aside for the moment all discussion regarding the usefulness of scripted testing, testers in such environments often are not surrounded by more experienced testers and so are unable to pick up knowledge about testing by osmosis the way pelorus operators pick up knowledge about navigation. If they were, executing scripted tests might be a useful way for newbie testers to learn the ropes.
One might argue that testing is much more complex and complicated than navigation. I am not sure this is true. If navigation appears to be straightforward and simple, it is because the gory details have been largely encapsulated in various tools. Testing is barely starting to create such tools. Static analysis tools like lint and FxCop spring to mind; a tester need not understand how they work or what they do in order to analyze the reports they create. Load testing tools fill the bill as well; a tester need not understand the machinations they use to simulate thousands or millions of people hitting a website in order to subject their hardware and software to such scenarios. Mnemonics and patterns fit here too, as crystallizations of partial solutions to frequently encountered problems.
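To illustrate how little a tester needs to know about a static analyzer's internals, here is a toy checker of my own (an illustrative sketch only; real tools like lint and FxCop are vastly more sophisticated). It flags one Python anti-pattern, and a tester can act on its report without understanding the syntax-tree walk that produced it:

```python
import ast

def check_none_comparisons(source, filename="<source>"):
    """Flag comparisons written as '== None' or '!= None'.

    Idiomatic Python uses 'is None' / 'is not None'. This is a toy
    single-rule checker, not a model of any real tool's architecture.
    """
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if not isinstance(node, ast.Compare):
            continue
        operands = [node.left, *node.comparators]
        uses_eq = any(isinstance(op, (ast.Eq, ast.NotEq)) for op in node.ops)
        compares_none = any(
            isinstance(o, ast.Constant) and o.value is None for o in operands
        )
        if uses_eq and compares_none:
            findings.append(
                f"{filename}:{node.lineno}: use 'is None' rather than '== None'"
            )
    return findings

# Prints: <source>:1: use 'is None' rather than '== None'
for warning in check_none_comparisons("if result == None:\n    pass\n"):
    print(warning)
```

The report is the interface; the theory behind it stays inside the tool, just as the gyrocompass keeps its theory to itself.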
I find it hard to conceive that testing might ever become as rote as the process for plotting a fix. What would it look like, though, I wonder, if we ever do develop testing to the point where navigation is today? If we had as rich a toolset, one that encapsulated most of the difficult bits? If we found ways to run teams which make it difficult for defects to be injected in the first place, and easy to find them when they are? If software development were understood well enough and reliable enough and predictable enough to truly be called engineering? While we may never reach that point - and in fact we may not want to - moving closer than we are now seems to me a big pile of goodness.
*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.
Comments
Anonymous
August 08, 2007
"...scripted testing allows a person who knows nothing about testing to be immediately productive." Reframe: Scripted testing allows a person who knows nothing about testing to APPEAR immediately productive to another person who knows nothing about testing. "I find it hard to conceive that testing might ever become as rote as the process for plotting a fix." Quite right--because plotting a fix is something that happens in a fairly limited problem domain in a fairly limited set of contexts. Testing can be applied to any number of problem domains in any number of contexts. For example, you could test a plotted fix, a navigational tool, or anything else on a naval vessel... or any vessel... or any conveyance... or any medium (in the McLuhan sense--anything that effects some change, that extends some human capability in some way). ---Michael B.Anonymous
August 08, 2007
Navigation might be a complex task but it's quite deterministic. Software is not, so the testing is different. Letting the people who are meant to use software do the navigation would lead to the loss of a lot of ships ;-) Keeping the non-deterministic nature of software in mind shows the disadvantage of scripted tests: they can't test in a non-deterministic way, but they can show that the individual parts return the results they are planned to return.

Regards, Lothar

Anonymous
August 08, 2007
Comparing navigation with testing: is part of the difference due to coastlines etc. not changing much over the centuries, thus allowing considerable refinement of navigation processes & tools, whereas the software environment seems to change radically every decade or so? Thus by the time particular software processes & tools become (relatively) refined and productive they are also obsolete, as a new generation of techniques arises and people find new ways to create defects! Is there also a cultural aspect to this: a navigation team has a big incentive to get their location correct; but what are the consequences if a software team produces something that is less than 'good enough'?

Anonymous
August 10, 2007
Michael: Thanks for the reframe! It's what I meant to say. <g/>