Testing Ideals
I have been thinking about what an ideal software development process might look like:
- The product team would understand exactly what their customers want to do and how their product could help their customers do so.
- Developers would understand exactly how to translate that vision into code.
- Developers would write code which does exactly what they want it to, code which has zero defects.
- Installing the application would work every time with zero problems.
- Once it is installed, the application would update itself automatically and transparently; these updates would install with zero problems every time.
- The application would continually test itself and automatically report any failures to the product team.
- The application would guarantee zero loss of data or productivity due to failures on its part.
- Customers would have a trivial process for reporting issues and providing suggestions to the product team.
- Customers would allow the application to report failures to the product team, would report every issue they find and every suggestion they have, and would have zero qualms about doing so.
- The product team would have either a fix or a "Nope and here's why" back to the customer within some short timeframe after receiving the report.
Note that testing does not appear in this list. Testing is necessary today at least because:
- The product team does not understand exactly what their customers want to do.
- Developers do not understand exactly how to translate that vision into code.
- Developers do not write code which does exactly what they want it to.
- Developers do not write code with zero defects.
- Installing the application does not work every time.
- Applications do not update themselves transparently. (Yes, many applications update themselves. I haven't yet seen one which does so transparently, so that I do not realize it has done so.)
- Updates do not install correctly every time.
- Applications do not continually test themselves.
- Applications do not guarantee zero loss of data due to failures on their part.
- Applications do not guarantee zero loss of productivity due to failures on their part.
- Customers do not have a trivial process for reporting issues and providing suggestions to the product team.
- Many customers do not allow the application to report failures to the product team.
- Most customers do not report every issue they find and suggestion they have to the product team.
- Most customers do not have zero qualms about submitting crash reports (in part because the reports may contain personal and confidential information (like the full contents of that steamy love letter to your Significant Other)).
- Product teams do not have a fix back to the customer within a short timeframe after receiving a problem report.
I do not think this has to be so.
One way to fully test a software application, and by implication find every bug in that application, and so eventually have bug-free code, would be to build the tests into the application. (Thanks Roger for suggesting this.) This is the intent of Design By Contract: Pre- and postconditions are defined for every method, and every method checks its preconditions before it does anything else and checks its postconditions after it has done everything else. This could be extended outside individual methods by creating daemons which periodically verify the consistency and correctness of the system as a whole.
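To make this concrete, here is a minimal sketch of the idea (in Python, purely as an illustration; the contract decorator, the Account class, and the consistency daemon below are hypothetical constructs of mine, not any real library's API):

```python
import functools
import threading
import time

def contract(pre=None, post=None):
    """Check a precondition before each call and a postcondition after it."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if pre is not None:
                assert pre(self, *args, **kwargs), f"precondition of {func.__name__} violated"
            result = func(self, *args, **kwargs)
            if post is not None:
                assert post(self, result), f"postcondition of {func.__name__} violated"
            return result
        return wrapper
    return decorate

class Account:
    """Hypothetical class, invented for this sketch."""
    def __init__(self, balance=0):
        self.balance = balance

    @contract(pre=lambda self, amount: amount > 0,
              post=lambda self, result: self.balance >= 0)
    def withdraw(self, amount):
        # Deliberate defect: no check that funds are sufficient.
        # The postcondition catches it the moment it bites.
        self.balance -= amount
        return self.balance

def consistency_daemon(accounts, interval_seconds=60):
    """The second idea above: a daemon that periodically verifies
    system-wide invariants rather than per-method contracts."""
    def check_forever():
        while True:
            assert all(a.balance >= 0 for a in accounts), "system-wide invariant violated"
            time.sleep(interval_seconds)
    threading.Thread(target=check_forever, daemon=True).start()
```

With this in place, Account(10).withdraw(25) fails loudly at the call site instead of silently corrupting state - which is exactly the kind of continuous self-checking described above.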
While this is a start, it does not make the application fully tested. Incorrect functionality would not be found, as the pre- and postconditions and consistency checks would be verifying only what the developer thinks should happen, which will not necessarily match what the other members of the feature team - or the customer - think should happen. Nor would this catch performance issues, or security holes, or usability problems.
Many of these, however, could be found by design and code reviews. Static analysis tools like lint and FxCop and PreFast can find other types of errors. Dedicated application of root cause analysis, where the product team analyzes every issue and takes steps to eliminate its cause, could largely prevent these and other defects from ever recurring.
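As a rough illustration (my own snippet, not taken from any of those tools' documentation), here is the kind of latent defect a lint-style analyzer can flag without ever running the program:

```python
def describe(value):
    """Classify a number; invented example for illustration."""
    if value > 0:
        label = "positive"
    elif value < 0:
        label = "negative"
    # A lint-style checker can warn that 'label' may be used before
    # assignment: no branch assigns it when value == 0.
    return label
```

Ordinary testing finds this bug only if some test happens to pass in zero; static analysis finds it on every pass over the source, which is why it complements the built-in checks sketched above.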
Even with all of that, testing still seems necessary. Today we test our products in order to gather information our management uses to make business decisions about our product, such as whether to continue working on it or to ship it in its current state. One reason we do that is that undesirable business consequences tend to result when customers attempt to use software whose level of unfinishedness goes beyond their tolerance levels. One cause of this customer dissatisfaction is the loss of productivity they experience when they have to take time to redo work which was lost when the software crashed, when they have to take time to report problems, and when they have to take time to install patches.
I believe this ideal is achievable. For example, I have not yet lost data from Microsoft OneNote despite it crashing on occasion. Web controls tend to update themselves fairly transparently. Microsoft's Online Crash Analysis makes submitting crash data simple (although not transparent).
Some of the pieces are here at least in part. The others seem eminently feasible. What are you doing to help this ideal world come into existence?
*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.
Comments
Anonymous
August 23, 2007
This post has caused me to really think about the psychological makeup of people in general. My opinion is that as long as people are involved in creating and using software, there will always be a need for software testers. Regarding one statement that you wrote - "One way to fully test a software application, and by implication find every bug in that application, and so eventually have bug-free code, would be to build the tests into the application" - I was wondering who would be building the tests into the application? If it is the programmers, then my job is probably safe, because they are the same ones who wrote the program :) If it is another group, independent of the programmers that created the original software, then my job is still safe. In fact, there would probably be more defects in the product. Why do I say this? Because people are involved. There would be another group involved that has its own agenda. Where people are involved, so are their egos.
Anonymous
August 24, 2007
I wonder if asking for 'zero defects' in a complex system is too much? OK, at module level all unit tests should pass; but for a complete application I think I would rather have 'dependable' than 'perfect'. Like your experience of OneNote, it may not be perfect, but when it fails to do something (new) the previous state is recoverable and you can try again. In fact for a system (in the large sense of the word) there will probably always be cases where it does something that is unexpected (to some users at least). But does that really mean there is a defect? It could be an obscure, but quite logical, consequence of other requirements on the system. Is the answer to architect the system so that its structure allows for (expects!) the presence of defects, but will still function to some degree, depending on what fails to behave as expected?
Anonymous
August 24, 2007
Michele: In retrospect I believe I overstated things when I said building tests into the product would result in the product being fully tested. I agree that the additional code those tests would require seems likely to introduce additional defects.
Anonymous
August 24, 2007
The comment has been removed
Anonymous
August 25, 2007
Regarding this: "One way to fully test a software application . . . " That one, that solution, runs into two fallacies.
1 - That everything you need a chunk of software to do can, in fact, be specified by tests that are "built into the application." For that to be true:
- Whatever form you choose for expressing these tests would have to be capable of completely representing the intention of the code it tested.
- That richness would have to pass successfully across the gap from test to application. As an example, take a look at some of the "Responsibility Driven Design" stuff, and consider how the richness and implications of class roles & responsibilities might be captured in something "integrated." It's at least hard, possibly worse than that.
- "Built in" suggests the expression and the interaction with the application would be automated in some way. If we could do this for the tests, why wouldn't we do it for the code? How is doing this for the tests a simpler problem than getting the code right?
2 - What if your intentions are wrong, vs. design or implementation? The best you can test with "self-testing applications" is the intention and understanding you have at the time of development. As an example, automated "unit tests" as practiced today can, at most, capture the understanding and intention of the developer writing the class code. Writing unit tests can help refine and extend your intention. Even so, the best you can capture is the understanding and intention you've come up with.
I think one of the biggest payoffs in testing comes from having informed, skillful, independent eyes looking at your application that aren't part of the system that made the application in the first place. Losing that perspective is the risk, or perhaps the cost, of tightly integrating testers with development. Integrating the team has payoffs. It isn't free.
Anonymous
August 29, 2007
Michael, I am surprised to learn your views regarding an ideal world of software. To me, a design contract (with clearly defined pre- and postconditions) is a software "model". You test the product as per that model and declare that there are no "bugs" (as per the model). What about behaviours that are outside the "contract" model - will we then be saying that anything not covered by the contract is not important enough and will not be addressed? Your focus on design, code, and contracts (requirements) is one side of the story. The other side, which can really change the whole equation, is "people". All the sophistication that you bring in while the software gets envisioned, designed, built, and tested gets royally taken for a ride by the people who use it. Can you get users to use the software as per the contract? I am sure we still have no one universal definition of bug or defect... I think this is the important aspect of the whole software world of "defects or bugs". To think of something as defect-free or fault-tolerant, we would first need to define what amounts to a defect or a fault, then create a solution around that definition. As long as you have humans using the software, you will have a tough job defining what a defect or a bug is.
Shrini
Anonymous
August 30, 2007
The comment has been removed
Anonymous
August 30, 2007
Shrini: Thanks for your comments. Based on your and everybody else's comments, I evidently did not make clear that I was musing on possibilities, not stating my beliefs. As you say, any test we write, whether it is embedded in the application or external to it, can only validate what we have thought to validate. My search for a quantum leap in my testing continues....
Anonymous
September 01, 2007
I'm a little surprised by this post, Michael. I can't envision a way in which a development process without testing would be ideal. I can imagine a scenario in which testing would reveal far less bad news than it does today, but that's not the same thing at all.
I think you're talking about something other than testing as I see it. Testing to me is "questioning a product in order to evaluate it". I think what you're talking about is "the revelation of bad news"--but maybe you're talking about "testing that is necessary in an uncertain world"--but the world is always an uncertain place. Building tests into an application would not test it completely. Built-in tests might help to reduce the number of outright coding errors--but how would you know that the tests are any good without questioning (that is, testing) the tests?
Besides, bugs are not merely the result of coding errors; they're the result of some aspect of the product failing to achieve some quality criterion for some person. A bug is not an intrinsic aspect of a product; like "quality" or "purpose", it's a relationship between the product and some person. One man's bug is another man's feature; a product which seems swell today could have hideous bugs in it when the operating system changes (Vista, anyone?), and an "ideal" software development process wouldn't change any of that stuff.
For people who believe in the "ideal", in software or any other kind of development, my current recommendation would be to read "Discussion of the Method" by Billy Vaughn Koen. The premise of the book is that we do our best to solve problems based on the resources (and money) that we have, the constraints that we face, the knowledge that we have as individuals and groups, and the problem that we're trying to solve. All of these things are heuristic--which means that they might be very good, but they'll never be ideal, because "ideal" changes with context. And that's okay; it's just the way the universe works.
---Michael B.
Anonymous
September 03, 2007
Michael: When I said that embedding tests in software would make it fully tested, I misspoke. A more precise statement of what I meant to say is that embedding tests in the software, and having said software continuously run those tests, would ensure that the software was always functioning in the way the product team expected it to. (Assuming, as Jim pointed out, the tests were capable of completely representing the intention of the code.) Whether these expectations match those of the customer is a different matter entirely, as you and others say. I think my ideal software development world would be one where a person is always able to clearly state what they want a piece of software to do, implement the software exactly matching that statement, and easily modify the software as their needs changed and/or they discovered that they didn't quite want exactly what they thought they wanted. In such a world testing would not be necessary. Or maybe the entire process would be testing. In this post I was working towards the quantum-leap search I blogged about a few weeks ago. This ideal may not be reachable; it doesn't need to be. Thinking about an ideal, even if it is unreachable, can illuminate ways to approach it. This one hasn't yet; I continue to ponder, however.
Anonymous
September 04, 2007
>I think my ideal software development world would be one where a person is always able to clearly state what they want a piece of software to do...
But we can't do that, and I doubt that we ever will. Learning (not knowing) what we want is an iterative, open-ended process that includes analysis, exploring, discovery, learning, invention, refinement, synthesis, and questioning. As soon as the clock ticks, we're into another cycle of that. At least we revisit the questioning part--"Is it good enough? It is? Oh, okay."--and then the clock ticks again.
>...implement the software exactly matching that statement...
I can't, and won't, describe everything I want such that I'm absolutely sure it makes sense to you. That's governed by heuristics. So instead of "exactly", I'd suggest "sufficiently". Even if we got exact, we couldn't prove it exactly. But we could prove it--that is, test it--sufficiently to figure that we had got it sufficiently.
>...and easily modify the software as their needs changed and/or they discovered that they didn't quite want exactly what they thought they wanted.
The way I see it, the recognition and discovery of changing needs or desires is part of a testing process.
>In such a world testing would not be necessary. Or maybe the entire process would be testing.
I think it is more useful to consider the latter. If you're looking for that quantum leap in testing, adopting that latter point of view (all is testing; all is heuristic; all is testing our heuristics) would be a good springboard, in my view.
Cheers,
---Michael B.