

Test Automation ROI (Part II)

Last week I talked about the silliness of wasting time calculating the return on investment (ROI) of an automation effort on any non-trivial software project, especially one with an extended shelf life. As my friend Joe Strazzere commented, “If you need an ROI analysis to convince business management that test automation is a good thing when used intelligently, then you have already lost.”

But management might need to be educated on the limitations of record/playback, rudimentary hard-coded scripts, and keyword-driven automation efforts, because it is often more appealing for bean counters to invest in low-cost tools and continue to rely on non-coding bug finders or domain experts to script out ‘tests’ that do nothing more than repeat some rote set of steps over and over again. But as E. Dustin, T. Garrett, and B. Gauf wrote in Implementing Automated Software Testing, any serious software automation effort “is software development.” Well-designed automated tests require highly skilled, technically competent, extremely creative, analytical testers capable of designing and developing automated tests using procedural programming paradigms.

We should still apply ROI concepts in test automation, but at a much lower level. Essentially, each tester must evaluate the return on investment of any test before automating it. The most fundamental purpose of an automated test effort is to provide some perceived value to the tester, the testing effort, and the organization. As a tester, the primary reasons I automate a test are to:

  • Free up my time,
  • Re-verify baseline assessments (BVT/BAT, regression, and acceptance test suites),
  • Increase test coverage (via increased breadth of data or state variability),
  • Accomplish tasks that can’t easily be achieved via manual testing approaches.

For example, the build verification and build acceptance test suites are baseline tests that must be run on each new build; these tests should be 99.9% automated because they free up my time to design other tests. Tests that evaluate a non-trivial number of combinations or permutations are generally good candidates for automation because they increase test coverage. Performance, stress, load, and concurrency tests should be heavily automated because they are difficult to conduct manually.
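
To make the combinatorial case concrete, here is a minimal sketch in Python with pytest (the discount rules and every name here are hypothetical, purely for illustration) of a parameterized test that covers a grid of inputs nobody would want to walk through by hand on every build:

    import itertools
    import pytest

    CUSTOMER_TYPES = ["guest", "member", "vip"]
    QUANTITIES = [1, 10, 100]
    REGIONS = ["us", "eu", "apac"]

    def compute_discount(customer_type, quantity, region):
        # Hypothetical stand-in for the business logic under test.
        # (This toy ignores region; a real implementation might not.)
        base = {"guest": 0.0, "member": 0.05, "vip": 0.10}[customer_type]
        bulk = 0.05 if quantity >= 100 else 0.0
        return base + bulk

    @pytest.mark.parametrize(
        "customer_type,quantity,region",
        list(itertools.product(CUSTOMER_TYPES, QUANTITIES, REGIONS)),
    )
    def test_discount_stays_within_policy(customer_type, quantity, region):
        # A cheap, automatable oracle: the discount must stay within bounds.
        assert 0.0 <= compute_discount(customer_type, quantity, region) <= 0.15

Those 27 combinations execute in well under a second; walking the same matrix by hand would chew up a tester’s time on every single build.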

It is important to note that I am not simply referring to UI-type automation. A significant number of “functional tests” designed to evaluate the computational logic of methods or functions can be automated below the UI layer in software architectures using OOP and procedural paradigms, where the business and computational logic is separated from the UI layer.
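
For instance, here is a hedged sketch of what a below-the-UI functional test might look like (the ShippingCalculator class and its rules are hypothetical stand-ins for real business-layer code):

    # Hypothetical business-layer class; in a real project this is the
    # production logic that the UI merely presents.
    class ShippingCalculator:
        def __init__(self, free_threshold=50.0, flat_rate=5.99):
            self.free_threshold = free_threshold
            self.flat_rate = flat_rate

        def cost(self, order_total):
            if order_total < 0:
                raise ValueError("order total cannot be negative")
            return 0.0 if order_total >= self.free_threshold else self.flat_rate

    # The tests exercise the logic directly -- no clicking through screens,
    # and no brittle UI synchronization to maintain.
    def test_shipping_is_free_at_threshold():
        assert ShippingCalculator().cost(50.0) == 0.0

    def test_shipping_charged_below_threshold():
        assert ShippingCalculator().cost(49.99) == 5.99

Because nothing here touches a window or a control, these tests tend to run faster and break far less often than their UI-driven equivalents.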

There are many papers that discuss specific factors to consider when deciding what tests to automate, but there is no single cookie-cutter approach. Different projects have different requirements and expectations, and, of course, not all tests are equal. One of the best papers I’ve read on the subject is When Should a Test Be Automated? by Brian Marick. I like the simplicity of his three key questions:

  • How much more will this test cost to automate versus running it manually?
    Some people think that automating a test reduces costs because it eliminates the tester from manually executing that test. Unfortunately, this is not always the case. As I talked about in a previous post, visual comparative oracles are notoriously error prone, requiring testers to constantly massage the test code and manually verify results anyway. Sometimes paying a vendor to run a test periodically is cheaper than paying an SDET to tweak the test every build. But if the population of potential test data is large, or the test exercises combinations of non-trivial features, then automating that test is probably a good investment (see the sketch after this list).
  • What is the potential lifetime of this automated test?
    How many times will this test be re-run during the development cycle and in maintenance or sustained engineering efforts? Can this test be reused in the next iteration of the product?
  • Does the automated test have some probability of exposing new issues?
    I don’t necessarily agree with this question, because many automated tests may not expose new issues yet still provide value to the overall testing effort. For example, I wouldn’t expect tests in my regression test suite to expose new defects, because if they do, there was a regression in the product. So, I would rephrase this question to ask, “Does this automated test have some probability of exposing new issues, providing additional information that increases confidence, or increasing test coverage?”
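
On that first question, one situation where the cost math clearly favors automation is a large input population. A minimal sketch of the idea (the parse_quantity function is a hypothetical stand-in for a production parser):

    import random

    def parse_quantity(text):
        # Hypothetical stand-in for the production parser under test.
        value = int(text.strip())
        if not 1 <= value <= 999:
            raise ValueError("quantity out of range")
        return value

    def test_parser_over_large_population():
        random.seed(42)  # deterministic, so any failure is reproducible
        for _ in range(1000):
            value = random.randint(1, 999)
            # Whitespace padding should never change the parsed result.
            assert parse_quantity(f"  {value}  ") == value

One loop quietly re-verifies a thousand inputs on every build; no tester would, or should, do that by hand.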

A few other questions I ask myself when I am deciding whether to automate a particular test include:

  • What exactly is being evaluated?
    This is perhaps the first question I ask myself. If the test is evaluating functional or non-functional (stress, perf, security, etc.) capabilities, then automation may be worthwhile. But behavioral tests such as usability tests and content testing are generally not good candidates for automation.
  • What is the reliability of automating this test?
    I don’t want to have to constantly massage a test in order to get it to run. So, what is the probability this test will throw a lot of false positives or false negatives? How much tweaking will this test require due to code or UI instability?
  • What are the oracles for this test and can they be automated?
    I don’t want to sit in front of a computer and watch software run software. Also, there is a difference between an automated test and a macro (a single, user-defined command that is part of an application and executes a series of commands). There are different types of oracles, and the professional test designer needs to design the most effective oracle for the test; a sketch of one automatable oracle follows this list. By the way, if the most effective oracle is a human reviewing the results, then that test should probably not be automated using current technologies.
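
Here is that oracle sketch. It is a toy, assuming an insertion sort stands in for the system under test, but it shows two oracles a machine can check with no human in the loop: agreement with an independent reference implementation, and a structural property of the output.

    def insertion_sort(items):
        # Hypothetical stand-in for the production implementation under test.
        result = []
        for item in items:
            i = 0
            while i < len(result) and result[i] <= item:
                i += 1
            result.insert(i, item)
        return result

    def test_sort_against_automatable_oracles():
        data = [5, 3, 9, 1, 3]
        result = insertion_sort(data)
        # Oracle 1: agree with an independent, trusted reference (built-in sort).
        assert result == sorted(data)
        # Oracle 2: structural property -- the output is non-decreasing.
        assert all(a <= b for a, b in zip(result, result[1:]))

If no such programmatic oracle exists and a person must eyeball the output, that is a strong hint the test belongs in the manual column.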

For each test, I consider these questions in deciding whether to automate it. For some tests, I may ask additional questions depending on the context and the business needs of my organization. I don’t use a cookie-cutter template or try to fill out some spreadsheet to do a cost comparison based on dollar amounts; it’s hard to put a price on value. Instead, I ask myself a few key questions to help me decide if automating a test is worth it to me, my team, and the organization. Is automating a particular test the right thing to do, or am I automating something because it’s challenging, or to increase some magical percentage of automated tests compared to all currently defined tests? The key message here is not to blindly automate everything: use your brain, make smart decisions about whether each test should be automated, and be able to explain how automating that test benefits the testing effort.