Software test automation is insurance?
I was listening to some other testers in Office the other day and heard the phrase "Automation is insurance: you never know how much you will need." It was an interesting comment that sparked a healthy discussion and got me thinking.
First, I decided that only "some" automation fits this definition - specifically, playback and record automation. For instance, if I want to create an automated test for napkin math in OneNote, I might include a test for "2+3=5". For that test to pass in the first place, the code already has to work - in other words, the developer must implement the feature such that when the script types "2+3=" and presses ENTER, OneNote responds with "5". So this script will never find a bug in OneNote unless someone changes the napkin math code and introduces a regression. A playback and record test will never find a new bug. It will only find the kind of bug that gets introduced by a code change that breaks something which used to work.
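To make that concrete, here is a minimal sketch of what such a recorded check could look like. Everything in it is hypothetical - UIDriver is a toy stand-in for whatever record and playback tool drives the OneNote UI, not a real OneNote or Office API - but it shows why the assertion can only ever catch a regression.

```python
# Hypothetical sketch: UIDriver is a toy stand-in for a record and
# playback tool driving the OneNote UI, not a real OneNote/Office API.

class UIDriver:
    """Fakes a canvas plus OneNote's napkin-math behavior on ENTER."""

    def __init__(self):
        self.canvas = ""

    def type_keys(self, keys: str) -> None:
        self.canvas += keys

    def press_enter(self) -> None:
        # Simulate the feature under test: when the canvas ends with "=",
        # evaluate the "a+b" to its left and append the result.
        if self.canvas.endswith("="):
            left, right = self.canvas[:-1].split("+")
            self.canvas += str(int(left) + int(right))

    def read_canvas(self) -> str:
        return self.canvas


def test_napkin_math_addition():
    driver = UIDriver()
    driver.type_keys("2+3=")
    driver.press_enter()
    # The expected value encodes behavior that already worked when the
    # test was recorded, so a failure can only ever mean a regression.
    assert driver.read_canvas() == "2+3=5"


test_napkin_math_addition()
```

The information is all in the assertion: it was written against a feature that already worked, so a green run tells you nothing new, and only a red run after some later change carries any news.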
Playback and record automation is very useful - don't think I'm speaking poorly of it. It tests not only OneNote code changes but any change on the system. For instance, a Windows service pack may change our sync mechanism, or a security patch might change the way we need to work with embedded files. It is incredibly handy for verifying all sorts of changes, from any source, that could potentially break us.
I liked the part about insurance and never knowing how much you will need. Just as you hope you will never need to make a claim on insurance, you hope you never have a script fail (you hope no one ever breaks anything that had been working fine). You also don't want to spend "too much" time and energy creating automation that will never find a bug. And just like insurance, you will never know whether you have "too much," but you will quickly realize you have "too little" when you miss a critical bug.
Second, I want to exclude other classes of automation, such as exploratory automation (which may find bugs that already exist when new code is checked in - more on this in Cem Kaner's take (PDF) or from BJ Rollison here), performance automation, security automation, and so on. Your application can't be too fast or too secure :).
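For contrast, here is an equally hypothetical sketch of the exploratory flavor. Instead of replaying one recorded input, it generates inputs, so in principle it can stumble onto a bug that existed before any code change. The napkin_math function below is again just a stand-in for the real feature under test.

```python
import random

def napkin_math(expression: str) -> str:
    # Hypothetical stand-in for the feature under test; a real exploratory
    # harness would drive the actual OneNote UI instead.
    left, right = expression.rstrip("=").split("+")
    return str(int(left) + int(right))

def explore(iterations: int = 1000) -> None:
    # Unlike a recorded script, the inputs are not fixed when the test is
    # written, so a run can expose a bug that has been there all along.
    for _ in range(iterations):
        a, b = random.randint(0, 10**6), random.randint(0, 10**6)
        expression = f"{a}+{b}="
        assert napkin_math(expression) == str(a + b), expression

explore()
```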
So the challenge is knowing when you can be comfortable with your automation suite - knowing when you have enough. There is a lot more to be said about this topic. Let me know if you are interested.
Questions, comments, concerns and criticisms always welcome,
John
Comments
Anonymous
April 15, 2010
True, but what should be the yardstick? Time? Code coverage? Percentage of the test matrix? Often the answer I get is that one would know with experience (as to what the problem areas are, what is more likely to break, etc.). I am not sure, though.
Anonymous
April 15, 2010
The comment has been removed
Anonymous
April 15, 2010
Code coverage is its own topic. And this whole debate presupposes a stable automation system for scripts. We avoid most of the typical problems UI-based automation hits in OneNote, so we have a good foundation on which to build. Not sure what you mean by "time." Can you clarify?
Anonymous
April 15, 2010
Err, I understood this post to be examining when a tester decides he has enough automation. :-/ If there is a stable foundation, then it is only a matter of time, right? By time, I mean the number of days/weeks we have in hand to "complete testing" a feature - (assuming unit tests are already in place) estimating how much time we need to automate x% of the tests, whether we have enough time to test those that won't be automated, and so on. Am I oversimplifying?