Automate This!
How much of your testing do you automate? How do you know whether you have automated enough - or too much?
My current team is taking the Automate Everything approach. This means we automate every last test case. "110% automation", as one of our executives liked to say. While certain tests may remain unautomated due to time constraints or extreme technical difficulty, we are working hard to keep that number as low as humanly possible.
Automating Everything has considerable payoffs: every test can be run on every build, ensuring our application does not regress in any way we have encountered thus far. Supporting additional operating systems (oh look - now there's a Windows XP Service Pack 42!) and languages (flash from Marketing: now we are localizing into Lower Elbonian!) requires nothing more than adding another row to the configuration matrix and kicking off another test run. Hot fixes can be tested without subjecting the test team to a fire drill. Pretty much nirvana, right?
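(To make "adding another row" concrete, here is a minimal Python sketch of such a configuration matrix. Everything in it - the names, the configurations, the runner - is hypothetical, not our actual stack.)

import itertools

OPERATING_SYSTEMS = ["WinXP-SP2", "WinXP-SP42"]   # a new service pack is one more entry
LANGUAGES = ["en-US", "de-DE", "lower-elbonian"]  # a new localization is one more entry

def run_test(test_name, os_name, language):
    """Stand-in for whatever actually installs the build and runs one test."""
    print(f"Running {test_name} on {os_name} / {language}")

def run_full_pass(test_names):
    # Every test runs against every (OS, language) combination.
    for os_name, language in itertools.product(OPERATING_SYSTEMS, LANGUAGES):
        for test_name in test_names:
            run_test(test_name, os_name, language)

run_full_pass(["save_document", "print_document"])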
Automating Everything has considerable downsides as well: Automated tests are by nature scripted, not exploratory. Even with an automation stack which injects all sorts of variability, the tests wear grooves in those areas of the product they cover and ignore everything else. When something unexpected happens they are likely to die, and even if they can recover they cannot stop what they were doing and investigate that unexpected happening. And don't forget the maintenance required to keep those tests running - effort which is not helping you find defects in your application. Say, have you had time to actually use your application yet?
On the other extreme is the Automate Nothing approach. Here every test case is executed manually by a person physically using their mouse and keyboard. This has considerable payoffs: every test can be exploratory. The entire surface of the product is likely to be covered. When something unexpected happens it is easy to follow up on. No maintenance is required to keep the test cases up to date with changes in the application. Everybody is always using the application. Pretty much nirvana, right?
Automating Nothing has considerable downsides as well: It is unlikely that every test will be run on every build (unless you only get builds every two weeks - in which case you have my sympathies!), so regressions may not be found until long after they are introduced, if they are found at all. Supporting an additional configuration means either running another full test pass or scoping down your testing and hoping you do not miss anything important - no economy of scale benefits here! Every hot fix requires yet another full test pass. Not to mention that it can be difficult for people to stay Brain Engaged when running a test for the tenth or twentieth or two hundredth time.
I struggle. The benefits of automating are clear to me. So are the downsides. Some tests - or parts of tests - are eminently automatable. Other tests are tedious or boring to do manually. Automated tests lend themselves to spitting out data in pretty graphs, which management generally likes. Session-Based Test Management seems an effective way to leverage testers' exploratory and critical thinking skills - to keep them Brain Engaged - while also giving management the data they require. I wonder, however, whether it scales to my context.
It is clear to me that Automating Everything is taking things too far. So is Automating Nothing. I have not yet found a balance I like. How about you?
Comments
Anonymous
April 11, 2007
It seems that the testing world keeps rehashing this problem of how much to automate. I think we're all focusing on the wrong part of the problem you outlined. The problem is that few companies / teams have enough resources to do both - because then that would be nirvana. If I were CIO or CTO or C-whatever-O, I would say that the goal of the "test automation" team is to automate everything. The goal of the exploratory tester team should be to "explore" 100% of the application. And the goal of the quality manager would be to get the most out of both teams' efforts and reduce duplication of work. As the skills of the workforce change, and maybe if people become more in tune with how agile development works, we will be able to call the automation team developers and exploratory testers, well... testers. Because "tester" does not mean button pusher; it means someone who performs a test. Once we've done it 500 times it's no longer a test, it's a script. It seems this comment is too long and I should have just made a blog post about it :P I'll reserve that right for later.
Anonymous
April 11, 2007
> If I were CIO or CTO or C-whatever-O, I would say that the goal of the "test automation" team is to automate everything. The goal of the exploratory tester team should be to "explore" 100% of the application. And the goal of the quality manager would be to get the most out of both teams' efforts and reduce duplication of work.

Well, the question now becomes: how do you decide what the right mix/ratio of people between the two teams should be?
Anonymous
April 11, 2007
>>> Well, the question now becomes: how do you decide what the right mix/ratio of people between the two teams should be?

In an ideal world the development team is the test automation team, and the exploratory team is the team of integrated testers. Distribution and problems attaining resources are problems that reside in the project manager's domain. We should leave those behind the great Oz curtain and move on :)
Anonymous
April 11, 2007
One other thing I think needs to be fixed about automated testing is that it needs to be easier for lazy developers to do right. The IDE increased developer efficiency by many orders of magnitude; when are we going to see the same thing for developer testing and testing in general? And seeing as I know I've said this somewhere before, I'll just put a link here: http://jerradanderson.com/blog/index.php?/archives/61-Functional-test-tools-direction.html
Anonymous
April 12, 2007
Hmmm... the perennial tester dilemma. I think the balance is highly situational and may vary from 20% to 100%. It is a combination of too many factors - is your app UI intensive, do you have a resource/time crunch, are you going to run those tests on multiple platforms, blah blah blah. But what I don't appreciate is when the division passes mandates like "X% automation compulsory!" - and that too as a blanket rule for all releases, incremental or brand new! Guess you figured out what I am saying ;-)
Anonymous
April 16, 2007
Interesting discussion, thanks for the post. Seems to me that you're missing "governance" when talking about test automation. In the SOA context governance adds a registry to announce that a service exists and a set of policies to control its use. Imagine a policy that says "You must run a test script to check that this service is correctly configured before using this service." The governance tool manages the policies, stores the functional test scripts, issues a call to run the test, and saves the results. Take a look at governance tools (BEA ALER, WebMethods X-Registry, Iona Repository, etc.) to get a better idea of how to turn the "test everywhere" mentality into a "test by policy" method. -Frank Cohen http://www.pushtotest.com
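(To make the "test by policy" flow concrete, a minimal Python sketch - the names are entirely hypothetical, not the API of any of the governance tools mentioned:)

class GovernanceRegistry:
    def __init__(self):
        self.policies = {}  # service name -> configuration test its policy requires
        self.results = {}   # service name -> last recorded test result

    def register_policy(self, service, config_test):
        """Announce a service and attach the test that must pass before use."""
        self.policies[service] = config_test

    def use_service(self, service):
        """Run the required configuration test, save the result, then allow use."""
        passed = self.policies[service]()
        self.results[service] = passed
        if not passed:
            raise RuntimeError(f"{service} failed its configuration check")
        print(f"{service} verified; proceeding")

registry = GovernanceRegistry()
registry.register_policy("billing-service", lambda: True)  # stand-in configuration test
registry.use_service("billing-service")
Anonymous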
April 16, 2007
Automation can be a great tool when applied wisely. I believe the trouble comes when we try to automate manual testing. Any automation that requires more work than manual testing to do less than a manual tester is of little value. I say let the thinking testers do what they do best and let the computers do what they do best. Automation needs to be applied as a tool to assist testers, not replace them. There are some things that a computer can do faster (and without complaint) than a human tester. These are the things we should seek to automate. The automate-everything crowd often fails to value skilled testers. This is often the same crowd that thinks we can easily outsource all testing (whether onshore or offshore).
Anonymous
April 16, 2007
One more flaw in the "automate everything" or "automate 100%" mindset is the assumption that we can "test everything" or "test 100%". Testing is a potentially infinite process. This requires that we decide what is most important to test. And we need to revisit this question from time to time. What we decided was most important 4 releases ago may no longer be the most important. When asked to automate 100%, I ask "100% of what?" That usually doesn't go over very well. :) Placing our faith in automation can give us a false sense of security. People begin to believe that a passed automated test means a good quality product. Then they are surprised when a major flaw gets past the automated tests. This becomes an even bigger problem when someone has told management that we have "100% of our tests automated."
Anonymous
April 17, 2007
How about someone who believes that a passed manual test means a good quality product? I'm sorry, but this type of argument simply leads me to believe that anti-automation arguments are primarily driven by testers who fear replacement. Have you really met someone who would blindly go and automate every test, even manual tests, regardless of the time required and the payback received? I never have. Have you really met someone who would blindly say that because our tests are automated we have a good quality product? I never have. Have you really met someone who, when they say they try to automate everything, really believes this means they've tested everything? I never have. The amusing thing is: every one of those objections could be reworded so they referred to manual testing instead. It's not about manual vs. automated, people! Automated testing is a means to an end, which is to allow us to run regression more often. This is something that manual tests DO have difficulty doing. It's not that automated tests are better than manual tests, but they do provide something that manual tests can't: continuous feedback. If you really meet someone, someday, who believes that automation in itself guarantees quality, then don't worry - adding manual testing isn't going to help them anyway.
Anonymous
April 30, 2007
I heard this from Michael Bolton - last year Mahantesh Ashok Pattan from India made the audience of the QAI conference in Delhi go silent for a while by saying "My team has been successful in achieving 100% test automation", and then surprised the crowd by saying, "What I mean by that is, we achieved 100% of whatever we wanted to automate". Michael Bolton, who was scheduled to present next, was impressed by this too and introduced him to me later. Today Mahantesh works for Microsoft in Hyderabad, building and heading a testing team. Setting a mission that is achievable given the team's skills might be as important as automating anything. James Bach's post "Manual Tests Cannot Be Automated" is insightful enough to make you situationally aware when answering people who demand automating everything.
Anonymous
May 01, 2007
IF ( test script development takes more time AND it is NOT regression ) THEN no automation
IF ( test script development takes more time AND it IS regression ) THEN automate
IF ( test script development takes less time AND it is NOT regression ) THEN automate
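Rendered as a minimal, runnable Python sketch of those rules (the function name is mine, and the fourth case - development takes less time AND it IS regression - is an assumption the comment leaves implicit):

def should_automate(development_takes_more_time: bool, is_regression: bool) -> bool:
    # A sketch of the commenter's decision rules (names hypothetical).
    if development_takes_more_time and not is_regression:
        return False  # expensive to script and run rarely: not worth automating
    if development_takes_more_time and is_regression:
        return True   # expensive to script, but repeated runs pay it back
    if not development_takes_more_time and not is_regression:
        return True   # cheap to script, so automate even one-off tests
    # Case the comment leaves implicit: cheap to script AND a regression test.
    return True       # cheapest and most repeated: clearly automate

print(should_automate(development_takes_more_time=True, is_regression=False))  # False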