Programming Paradigms in Test Automation

Regardless of the personal opinions of a few people, the simple fact is that the demand for software testers who can design and develop effective test automation is increasing. Perhaps one reason for the disdain expressed by some folks in the industry is the limitations of the test automation approach they are most familiar with; they sometimes assume those limitations apply to all types of test automation. However, not all test automation approaches are equal, and each has its own advantages and disadvantages.

At its core, an automated test case is software code, and just as there are various approaches to developing product software, there are different programming paradigms used to develop test automation, such as:

  1. Record and playback automation
  2. Keyword or action-word driven automation
  3. Scripted automation
  4. Procedural automation
  5. Model based automation

Record and playback automation

The record and playback paradigm simply records sequences of keyboard and mouse events and auto-magically codifies them, usually into some proprietary scripting language, which can then be replayed (executed) over and over again. There are usually severe limitations to this type of automation, and it tends to be extremely fragile, requiring constant massaging (re-recording). Although many implementations of the record/playback paradigm allow 'test developers' to modify the scripted actions to some extent, and possibly even incorporate simple yes/no oracles, I think many people view the record/playback paradigm as being only slightly more useful than trained monkeys in limited situations.
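To make the fragility concrete, here is a rough, hypothetical sketch of the kind of script a recording might produce, written as C# for readability rather than any tool's actual proprietary language; the Click, TypeText, PressEnter, and WindowTitleContains helpers are made-up stand-ins for a replay engine and are stubbed out so the sample compiles.

    using System;
    using System.Threading;

    // Hypothetical sketch of the kind of script a record/playback tool generates.
    // The helpers are made-up stand-ins for a tool's replay engine, stubbed out
    // so the overall shape of the replay is visible.
    static class RecordedScript
    {
      public static void Replay()
      {
        // Every action is pinned to the absolute coordinates and fixed waits
        // captured at record time, which is why a moved control, a resized
        // window, or a slower machine breaks the script.
        Click(412, 318);                    // "File" menu
        Click(430, 355);                    // "Open..."
        TypeText(@"C:\data\sample.txt");
        PressEnter();
        Thread.Sleep(2000);                 // fixed wait recorded as-is

        // A simple yes/no oracle bolted on after recording, if the tool allows it.
        if (!WindowTitleContains("sample.txt"))
        {
          throw new Exception("Expected window was not found");
        }
      }

      // Stubs standing in for the replay engine.
      static void Click(int x, int y) { Console.WriteLine("click at (" + x + "," + y + ")"); }
      static void TypeText(string text) { Console.WriteLine("type '" + text + "'"); }
      static void PressEnter() { Console.WriteLine("press ENTER"); }
      static bool WindowTitleContains(string title) { return true; /* a replay engine would query the UI */ }
    }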

Keyword or action-word driven automation

Keywords or action-words are simple scripts, usually in some tabular format, that 'describe' a sequence of 'actions' for the computer to perform. Of course, the key to keywords is the underlying architecture of the tool that interprets the keywords and executes the sequence of events. The beauty of keyword driven testing is that it hides the actual code and, similar to record and playback, can be more easily used by business analysts or 'user domain experts' hired into testing roles to automate something. I do see the benefit of keyword driven testing in some limited contexts (especially for companies who rely on business analysts/user domain experts for testing software), but let's be real…these people aren't automating anything…they are simply filling out a form that is then fed into a tool that performs the actions as prescribed by the listed instructions. The keyword form does nothing by itself, and the only thing a 'tester' has to think about is using the correct keywords to sequentially get from point A to point Z for a 'test.'
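To sketch what sits underneath such a tool, the fragment below pairs a hypothetical pipe-delimited keyword table (the 'form' a tester fills out) with a trivial dispatcher; the keyword names, the table layout, and the Browser wrapper are illustrative assumptions, not any particular framework's API.

    using System;
    using System.Collections.Generic;

    // Minimal sketch of a keyword interpreter. The table stands in for the
    // spreadsheet a tester would fill out; Browser is a stub standing in for
    // whatever UI driver a real framework wraps.
    static class KeywordEngine
    {
      // keyword | target | value -- one row per action
      static readonly string[] KeywordTable =
      {
        "Navigate   | http://news.google.com |",
        "Select     | TopStoriesList         | Canada English",
        "Click      | GoButton               |",
        "VerifyText | Page                   | Canada"
      };

      static readonly Dictionary<string, Action<string, string>> Actions =
        new Dictionary<string, Action<string, string>>
        {
          { "Navigate",   (target, value) => Browser.GoTo(target) },
          { "Select",     (target, value) => Browser.Select(target, value) },
          { "Click",      (target, value) => Browser.Click(target) },
          { "VerifyText", (target, value) => Browser.AssertTextPresent(value) }
        };

      public static void Run()
      {
        foreach (string row in KeywordTable)
        {
          string[] cells = row.Split('|');
          string keyword = cells[0].Trim();
          string target = cells[1].Trim();
          string value = cells.Length > 2 ? cells[2].Trim() : string.Empty;

          // The engine, not the tester, owns the code behind each keyword.
          Actions[keyword](target, value);
        }
      }
    }

    // Stub standing in for the UI driver a real framework would wrap.
    static class Browser
    {
      public static void GoTo(string url) { Console.WriteLine("goto " + url); }
      public static void Select(string list, string item) { Console.WriteLine("select '" + item + "' in " + list); }
      public static void Click(string control) { Console.WriteLine("click " + control); }
      public static void AssertTextPresent(string text) { Console.WriteLine("verify text " + text); }
    }

The tester's visible artifact is only the table; everything below it belongs to the framework author, which is both the appeal and the limitation of the paradigm.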

Scripted automation (imperative programming)

The primary difference between keyword and scripted automation is that the tester actually develops the test in a programming language rather than filling in a form with abstracted keywords that drive some automation engine. However, similar to keywords, scripted automation tends to use rudimentary statements of basic instructions that manipulate the software to perform a pre-determined sequence of events, as illustrated below.

    def test_b_googlenews

      #-------------------------------------------------------------------------
      # Test to demonstrate WATIR select from drop-down box functionality
      #-------------------------------------------------------------------------

      # variables
      test_site = 'http://news.google.com'

      puts '## Beginning of test: google news use drop-down box'
      puts '  '

      puts 'Step 1: go to the google news site: news.google.com'
      $browser.goto(test_site)
      puts '  Action: entered ' + test_site + ' in the address bar.'

      puts 'Step 2: Select Canada from the Top Stories drop-down list'
      $browser.select_list(:index, 1).select("Canada English")
      puts '  Action: selected Canada from the drop-down list.'

      puts 'Step 3: click the "Go" button'
      $browser.button(:caption, "Go").click
      puts '  Action: clicked the Go button.'

      puts 'Expected Result: '
      puts ' - The Google News Canada site should be displayed'

      puts 'Actual Result: Check that "Canada" appears on the page by using an assertion'
      assert($browser.text.include?("Canada"))

      puts '  '
      puts '## End of test: google news selection'

    end # end of test_b_googlenews

Most examples of scripted automation appear as codified versions of a set of steps listed in a less-than-adequately designed manual test case, using hard-coded arguments for variables, mindless progression between steps, and simple deterministic oracles. Scripted automation is probably most beneficial for automating specific sub-tasks in "computer assisted testing." However, scripted automation is usually too prescriptive and relies heavily on nothing going wrong during the execution of the test case.

Procedural automation (procedural programming)

In procedural automation the tester also develops a test by writing a series of computational steps to achieve a desired purpose. However, unlike scripted automation, the procedural automation paradigm generally provides better control flow options during the execution of the automated test case, allows for greater complexity in the design, improves reuse and reduces maintenance through modularity, and can employ both deterministic and heuristic oracles.

    // Procedural programming example
    using System;
    using System.Diagnostics;
    using System.Windows.Automation;

    static void Main(string[] args)
    {
      string logResult = string.Empty;

      // Path to the data file passed as a string argument to the test case
      string pictTestData = args[0];

      // Stopwatch to measure test case duration
      Stopwatch sw = new Stopwatch();
      sw.Start();

      // Launch the AUT
      AutomationElement desktop = AutomationElement.RootElement;
      AutomationElement myAutForm = null;
      Process myProc = new Process();
      myProc.StartInfo.FileName = myConstantAutFileName;
      if (myProc.Start())
      {
        // Polling loop to find the AUT window by window property
        int pollCount = 0;
        do
        {
          myAutForm = desktop.FindFirst(TreeScope.Children,
            new PropertyCondition(AutomationElement.AutomationIdProperty,
            myConstantAUTPropertyID));
          pollCount++;
          System.Threading.Thread.Sleep(100);
        }
        while (myAutForm == null && pollCount < 50);

        if (myAutForm == null)
        {
          throw new Exception("Failed to find dialog");
        }

        // Get UI element collection here...

        // Call method to read in test data
        string[] testData = ReadTabDelimitedFile(pictTestData);

        // Iterate through each set of test data (data-driven test example)
        foreach (string test in testData)
        {
          // Call method to execute each set of test data and assign the return
          // value to the logResult variable; the oracle is a separate method
          // called from the test method
          logResult = ExecuteCombinatorialTestMethod(test);
          LogResultMethod(logResult);
        }

        // Close AUT and clean up
        TimeSpan ts1 = sw.Elapsed;
        // log test case duration...
      }
      else
      {
        // Deal with situation where the AUT failed to launch
      }
    }

Procedural automation can be used for anything from API-level to GUI-level automated test cases designed to evaluate functionality (computational logic), non-functional areas such as stress, performance, and security, and also behavioral testing. Using a language similar to the one used to develop the product removes abstraction layers, and also enables other members of the team (developers) to easily review test cases.
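As a small, hypothetical illustration of the deterministic versus heuristic oracle distinction mentioned above (the method names and thresholds below are mine, not from the listing): a deterministic oracle demands an exact expected value, while a heuristic oracle accepts any result that satisfies a reasonableness check.

    using System;

    // Illustrative oracle fragments only; names and thresholds are assumptions.
    // A deterministic oracle compares against an exact expected result.
    static bool DeterministicOracle(decimal actualTotal, decimal expectedTotal)
    {
      return actualTotal == expectedTotal;
    }

    // A heuristic oracle checks that the result is "reasonable": here, that a
    // response arrived within an assumed time budget and the returned data is
    // self-consistent, without knowing the exact expected values.
    static bool HeuristicOracle(TimeSpan responseTime, int rowsReturned, int rowsRequested)
    {
      const int maxResponseMs = 2000;   // assumed budget, not a product requirement
      return responseTime.TotalMilliseconds <= maxResponseMs
        && rowsReturned >= 0
        && rowsReturned <= rowsRequested;
    }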

Model based automation

Model based automation is a relatively new automation paradigm, and its complexity is beyond the scope of this single post. Basically, model based automation involves codifying abstracted state machines and state traversals, and coupling them with an automation engine that uses some form of graph traversal logic to drive the system under test between the various states identified in the model. In some sense model based automation is similar to exploratory testing because tests are generally not pre-determined or pre-scripted, what constitutes a single test is hard to describe, and the oracles generally detect errant behavior (or being in an unexpected state). Personally, I think there is tremendous potential in model based automation, but the industry has just begun to scratch the surface of this paradigm and it is still largely misunderstood. It also requires more complex skill sets of the person designing the test automation, such as the ability to abstract the important machine states into a model and to encode system behaviors. For more information about model based automation I recommend taking a look at http://research.microsoft.com/en-us/projects/specexplorer.
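As a toy sketch only (a hypothetical two-state model of a media player, not Spec Explorer or any real tool's API), the fragment below shows the basic shape of the paradigm: a model of which actions are legal in each state, a random walk that drives the system under test, and an oracle that flags errant behavior, i.e. landing in an unexpected state.

    using System;
    using System.Collections.Generic;

    // Toy model-based sketch. The model, the actions, and DriveSystemUnderTest
    // are all hypothetical; a real model-based tool generates and traverses far
    // richer models than this.
    static class ModelBasedSketch
    {
      enum State { Stopped, Playing }

      // Model: for each state, the legal actions and the state each should produce.
      static readonly Dictionary<State, Dictionary<string, State>> Model =
        new Dictionary<State, Dictionary<string, State>>
        {
          { State.Stopped, new Dictionary<string, State> { { "Play", State.Playing } } },
          { State.Playing, new Dictionary<string, State> { { "Play", State.Playing },
                                                           { "Stop", State.Stopped } } }
        };

      public static void RandomWalk(int steps)
      {
        Random rand = new Random();
        State expected = State.Stopped;

        for (int i = 0; i < steps; i++)
        {
          // Pick any action that is legal in the current model state.
          List<string> actions = new List<string>(Model[expected].Keys);
          string action = actions[rand.Next(actions.Count)];

          State observed = DriveSystemUnderTest(action);   // stubbed below
          expected = Model[expected][action];

          // Oracle: detect errant behavior -- being in an unexpected state.
          if (observed != expected)
          {
            throw new Exception("After '" + action + "' expected " + expected + " but observed " + observed);
          }
        }
      }

      // Stub standing in for code that drives the real application and reads back its state.
      static State DriveSystemUnderTest(string action)
      {
        return action == "Play" ? State.Playing : State.Stopped;
      }
    }

Notice that a single "test" here is simply however many steps the walk happens to take, which is exactly why describing one is hard.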

So, which approach is best?

In my opinion, there may be some limited value in record/playback, keyword, and scripted automation in specific contexts; however, a robust automated test case that will run in multiple environments, in multiple languages, and distributed across multiple platforms without rewriting the test for each variation requires well-designed tests developed using a procedural or model based automation approach.

Comments


  • Anonymous
    May 14, 2009
    "Although many record/playback tools allow 'test developers' to modify the scripted actions to some extent, and possibly even incorporate simple yes/no oracles I think many people view record/playback tools as being slightly more useful than trained monkeys in limited situations." There are very, very few tools that could be accurately categorized as only "record/playback TOOLS". Virtually all commercial (and some open-source) test automation tools include a "record/playback FEATURE". Despite your dismissal, like all features they have their uses and abuses. Many people view QAers in general as "slightly more useful than trained monkeys".  I think it's unfortunate to see someone like you use such derogatory terms.

  • Anonymous
    May 14, 2009
    Hi Joe, I agree there are a plethora of tools that include record/playback mechanisms. I should not have used the word 'tools' since I am speaking about the record/playback paradigm as an automation approach. (I have edited the post to remove the word tool and replace it with paradigm to clarify my thoughts.) I think you misread my statement. I did not imply that testers are slightly more useful than trained monkeys; my statement reflected my opinion and the opinion of people I have spoken with who have used a simple record/playback approach to test automation and view this automation paradigm as slightly more useful than trained monkeys. I think most testers quickly realize the limitations of the record/playback automation paradigm, and also understand the specific situations where it can add value to a project. I also think that most professional testers realize that greater success with the record/playback automation paradigm requires the tester to modify the underlying code at least to some extent. But I would say that modifying the underlying code base obtained from a record/playback tool actually moves us closer towards the scripted automation paradigm.

  • Anonymous
    May 19, 2009
    "So, approach which is best?" There are many factors that should be considered before choosing the right approach. Here are a few:    * Analyze the application/product (Web, OS-Based, Technology...etc)    * Realize what to be tested and what not to be.    * Go through the requirements    * Separate the areas as per the modules.    * Analyze your customer/product needs and thus estimate the development activities. This gives you an idea of number of build releases and testing cycles required.    * Maintenance (Long-term/Short-term)    * Budget It is not always recommended to have a greater complexity in the design(i respect your thoughts too). But we cannot really get benefited designing and developing Next Generation Automation framework to test a tiny application :-)


  • Anonymous
    June 22, 2009
    Could you point us to resources for learning how to use procedural automation and/or the model based automation approach?


  • Anonymous
    September 05, 2009
    I've been working on adopting an "object oriented approach" towards test automation. My guess is that this approach falls somewhere between a procedural paradigm and a model based automation paradigm. We've achieved a remarkable level of productivity with this approach. I've blogged about the approach here: http://elusivebug.blogspot.com/ Regards, Rajesh
