Quantum Testing
Once upon a time, I thought testing was about finding bugs.
Once upon a time, I thought I should be able to find every bug in my product.
Once upon a time, I thought testing was about ensuring no bugs escaped to customers.
Once upon a time, I thought every single test should be automated.
One day I stopped testing for a while and thought about why I thought these things. I started by making a list of every reason I could think of to automate a test. It was a long list. Reviewing it, however, I realized every reason boiled down to one of two basic reasons:
- I wanted to be notified if a defect ever (re)occurred.
- Automating the test was faster than doing it manually.
This got me thinking about why I was testing in the first place. Soon I realized that I wasn't testing to find bugs - I was testing *because* defects had been found and the team wanted to know how many other defects were present.
Upon further consideration I realized that was not exactly correct. I had learned through experience that I would never find every defect. I had also learned through experience that my management did not expect me to find every defect.
So why was I testing?
Aha! I was testing so that I could provide my opinion as to whether the product was ready to ship or not!
Upon further consideration I realized that was not exactly correct. I had learned through experience that my opinion as to whether the product was ready to ship might be overruled by people up my management chain.
So why was I testing?
Several similar cycles later, I came to a conclusion:
My team is building a product. The team is composed of human beings. Human beings are fallible and make mistakes, thus the team is fallible and will make mistakes. Some of these mistakes will take the form of defects. Some of these defects will prevent our product from serving its intended purpose well enough to meet the business goals my team has for our product. I am testing in order to provide information regarding how well our product serves its intended purpose. This information is used by people up my management chain to decide whether shipping our product or taking additional time to refine our product will provide the most business value.
Once I spelled this out all sorts of things suddenly made sense. For example, "refining" might mean fixing defects. It might also mean adding additional features, or expanding existing features, or cutting completed features. Now I understood why each of these might occur a week before our scheduled ship date. Now I also understood why we might ship with what I considered heinous problems.
With this realization I started re-evaluating everything I did in terms of business value. My quest to reduce the cost of UI automation stemmed in part from this, because lowering that cost meant my team and I could complete more testing in a shorter amount of time and thus provide deeper information to the people up our management chain more quickly. And in fact that has turned out to be true.
Of late, however, I find myself thinking that continuing this quest may not be worth the investment. The changes we have wrought seem to me small, especially in the face of the exponentially exploding complexity of software today. I find myself questioning the business value of all the time I spend automating tests, and updating them to keep up with the product they are testing, and fixing defects in them and the infrastructure they use. This time seems to me better spent using my brain to identify the biggest risks to the business value my product is meant to create, working to prevent these risks from reifying, exploring my product in search of triggers for those risks, and - yes - crafting automated tests as seems appropriate.
Of late, however, I find myself questioning the business value of even this approach. I do not see how it can keep up with the exponentially exploding complexity of the software which will be here tomorrow. I feel as though there is a quantum leap I can make which will put myself ahead of this curve. I have not found it yet. I continue to search.
If you have any ideas how to find it, please let me know!
*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.
Comments
Anonymous
August 01, 2007
I think the quantum leap you're looking for lies with the developers. The only way to get ahead of the automation curve is to automate before development starts. Then we're getting into requirements-based testing, which I think is far more cost effective. The way I see things is that I used to look for bugs; now I look for ways of preventing them from being created. I realize that someone will always be looking for bugs, but I think far too few people are looking at how to prevent them from happening.
Anonymous
August 01, 2007
I've been thinking about this a lot lately myself. I've heard testing defined variously as "finding bugs," "being a customer advocate," and "reporting on *QUALITY*," but it seems to be "whatever the rest of the people in your group think it is that you do." My way of thinking comes from a scientific point of view. I cannot "prove there are no bugs," because I can't prove a negative. I can, however, prove that under certain circumstances the product does what it is defined to do. Thinking carefully about what those circumstances are, and forcing the team to come to consensus about what the product is "supposed to do," are useful ways to spend time, and aren't necessarily done by anyone else. I feel like I could probably write a book about my thoughts here... but here are a couple of questions I feel I should be able to answer better than anyone else on my team:
- As it exists right now, what are the best and worst parts of this product/feature from a user's point of view?
- What will cause this product/feature to fail?
Anonymous
August 01, 2007
>I feel as though there is a quantum leap I can make which will put myself ahead of this curve.
The quantum leap comes, I think, when we recognize that we can't know everything, but that we can learn some things, and provide value by learning things that other people haven't learned yet. Ironically, I doubt that this learning is itself informed by quantum leaps, but more likely by steady progress--lots of tiny leaps, rather than one big one. I further don't think that it comes from increased automation per se, but rather from increased sapience (http://www.satisfice.com/blog/archives/99). ---Michael B.
Anonymous
August 01, 2007
Jerrad: I think it is more than getting my developers testing. My developers do write unit tests. And still I feel this way...
Anonymous
August 01, 2007
This: "My team is building a product. The team is composed of human beings. Human beings are fallible and make mistakes . . . " Exactly so. Of course, the biggest leverage is when you can change the practices and environment based on what you discover from testing. Otherwise you're stuck in a silly codependency. Folks making stuff mess up the same way. Folks finding stuff find the same stuff the same way. Wash, rinse, repeat. Boring. Also dumb. Any testing not also part of ongoing SPI is just silly. Doing so does lead to job security, however.
Anonymous
August 02, 2007
The comment has been removed
Anonymous
August 02, 2007
The comment has been removed
Anonymous
August 02, 2007
I don't know that you can get there from here. Not consciously, at least. You spend time boiling your thoughts, ideas, and methods down to their most elemental, which gets you down to (about) the atomic level. What you need is a way to go from atomic to subatomic: find the protons, neutrons, and electrons that make up those elements. I don't know what they are, or how to get there. I'm just coming to terms with the value of testing in a test-less (not test-free) environment, so I haven't walked your path. But I see where it leads. Wish I could point you in the right direction. In the meantime, I'll be glad to follow.
Anonymous
August 02, 2007
If you are a soldier, is it your place to question the business value of taking any particular hill? Do you believe that you are (or should be) a soldier, and not a general? What is the rate of increase in defect costs compared to the rate of increase in software complexity? Relative to our competition? Relative to our ideals? What is the business value of each? What if we could reduce the cost of making mistakes? What if automation wrote itself (I'm not joking here), that is, if the product knew how it was supposed to work and could find its own defects? What would it take to make that vision a reality?
Anonymous
August 02, 2007
I think the key is that you have to realize that you can't test properly because you are just one person. In your post you talk about how you feel your role is to evaluate whether a product is "good enough to ship" - in other words, your role is to ensure that your product is not slaughtered in the arena of public opinion. Instead of trying to be some sort of clairvoyant who can divine what customers like or dislike about your product, why not set up some sort of paid feedback focus group? Your role would then be marshaling that feedback into its proper container. I would then take my testing group and break them up into areas of responsibility: tester 1: development bugs; tester 2: feature requests; tester 3: general feedback. Bugs that the focus groups find could be sent to tester #1, who would then "own" that bug and work out its priority based on effort to fix and business value - sort of like a burndown list in Scrum. As for the people in the focus groups, you could offer them features or areas to play with, and a micro-site they can use to submit feedback. That way you can point people at a specific area of the site, and you are not holding back development waiting for people in your focus group to look at a feature that was just developed. Hmmm... this comment is getting kinda long. Maybe I'll blog about this or something.
Anonymous
August 02, 2007
The comment has been removed
Anonymous
August 02, 2007
Pete: Reducing the cost of making mistakes seems likely to help. TDD and Agile are aimed at this, I think. Tests writing themselves is an interesting idea. Model-based testing is an early form of this. AsmL [http://research.microsoft.com/fse/asml/] is an attempt to formally define functionality. This would indeed be an interesting avenue to pursue.
Anonymous
August 02, 2007
Matthew: Using our customers to do our testing is an interesting idea. This is sort of what beta testing is all about, although that feedback generally comes too late to drive fundamental changes to the product. I have been on teams which have worked with a select set of customers, giving them early builds of our software and incorporating their suggestions into our product. Their involvement definitely made our product better. And I have never been on a product team that wasn't happy to incorporate customers' documents and such into their pool of test data. A similar interesting idea would be to allow customers to submit automated tests into a team's automation pool.
Anonymous
August 03, 2007
I took some time, and these considerations came to mind:
- Once I thought: software can and will do only finitely many things (by the digital nature of software and of digital :) computers). So it can be completely tested and made bug free (finite states and finite paths through the software). The only problem is that it would take far too much effort (in every sense) to test all the states and paths.
- Also this: the process of finding bugs, automated or not, is itself buggy.
- "Exploding complexity" brings even more states and paths. It's a law of thermodynamics - entropy rises. Which brings us to this: human beings are in no way perfect (they simply can't be), so why should software be? It's like the evolution of the human brain - out of buggy, unstable building blocks, something emerged that is stable enough to perform certain tasks in certain circumstances.
- So the only way is to make buggy software :) Actually, the term "buggy" is inappropriate in such an environment. This sounds very similar (at least by name) to the IBM research mentioned here earlier. Other companies and research teams are working on it too.
- HOW can it be achieved? Or at least, how is it supposed to work? Here fantasy and brainstorming kick in. It would be fun to take part in it. BUT some things are clear - it takes a very different approach to requirements as such. Possibly the end of requirements and specifications as we know them - like humans: they had constraints, not requirements, and that created mind and soul. Or maybe not. That is the only quantum leap I think possible. All other routes are a brute-force infinite loop:
- new software must be more complex,
- developers get better (buggy) hardware and tools (also buggy); add more developers and testers,
- which creates even more bugs.
Maybe another possibility is anthropological: software (and hardware too) will reach a certain level of complexity and simply STOP - it will already do everything human beings want. But that is quite a doomed scenario.
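The commenter's finite-states point is easy to make concrete with a back-of-the-envelope calculation. Here is a sketch (the two-argument 32-bit function is a hypothetical example, not anything from this discussion) of why "finite" does not mean "exhaustively testable":

```python
# Why a finite state space does not imply exhaustive testing is feasible:
# consider a hypothetical function taking two independent 32-bit arguments.
inputs_per_arg = 2 ** 32           # distinct values for one 32-bit argument
total_cases = inputs_per_arg ** 2  # every combination of both arguments: 2**64

# At an (optimistic) one billion test executions per second:
seconds = total_cases / 1e9
years = seconds / (60 * 60 * 24 * 365)

print(f"{total_cases} cases -> roughly {years:.0f} years at 10**9 tests/sec")
```

That works out to roughly 585 years for a trivially small interface - and that is before considering sequences of calls (the paths), which multiply the count again.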
Anonymous
August 03, 2007
The comment has been removed
Anonymous
August 03, 2007
Michele: Aren't Michael and RST grand? Thanks for reminding me about defocusing. I will ponder on that for a while.
Anonymous
August 05, 2007
Why don't we try to take some pointers from another industry? From an industry where people find almost no defects - what helps them achieve that? I am an electronics engineer, and I have seen that almost all major electronics design revolves around testing. If you can't test it, you don't design it. As simple as that. Maybe we need a revolutionary change not just here at Microsoft, but in the field of software itself. I am fresh out of college and lack experience in both fields, but I can't help feeling we are missing something. Every field has to have quality control and assurance, and that is nothing but testing in some form or another... Why don't we learn from them? Any suggestions?
Anonymous
August 06, 2007
Thayu: Learning from other industries is a grand idea. This is often a useful way to get past a block you are encountering. Do you know of an industry which has near-zero defects?
Anonymous
August 06, 2007
The comment has been removed
Anonymous
August 07, 2007
Thayu: I agree that one reason most software is rife with defects is that software makers do not have sufficient incentive to do otherwise. Thanks for the thoughts!
Anonymous
August 07, 2007
The comment has been removed
Anonymous
August 08, 2007
Once I was also like Thayu :) Nothing wrong with that, and some points to consider: spaceships (and their derivatives :) are buggy too :). My favorite story: http://en.wikipedia.org/wiki/Ariane_5_Flight_501 I still believe that future software should ALLOW bugs yet still MUST be functional. It's the nature of nature (sorry :) - nothing is perfect, but still everything works and looks VERY GOOD. Another favorite on this topic, a VERY FUNDAMENTAL law of nature which changed a lot in our lives: http://en.wikipedia.org/wiki/Quantum_indeterminacy Making a long story short - you cannot simultaneously and precisely know both the speed (energy) and the position of a quantum particle; only one of the two can be precise. As opposed to Thayu, I say: we (and our software) shouldn't be perfect. Nobody is. Don't get me wrong - our software should be perfect at its task, including UI, reliability, gotchas, etc. But internally it may be buggy (let's call it "imperfect"). All industries producing physical items have defects, especially electronics. Remember "yield rates" - all chips are tested and rated: bad, 2 GHz, 2.2 GHz, 2.3 GHz. The difference is that the chips that pass are indeed very reliable. There is no industry with zero defects. A product can be labeled "near zero defects" only if it has quite limited on/off-type functionality which can ALL be verified (like the computer chips mentioned above); every other product has a "defect tolerance" while still providing value. Also I like this: http://en.wikipedia.org/wiki/Analog_computer#Timeline_of_analog_computers It may support the "learning from other industries" idea too, when investigated deeply enough :) It's amazing how unstable things create very stable systems.
Anonymous
August 08, 2007
The comment has been removed
Anonymous
August 10, 2007
The comment has been removed