Testing

Spec Explorer is a model-based testing (MBT) tool. For a general overview of MBT, see Model-Based Testing. This topic describes Spec Explorer's specific ways of performing MBT.

In Spec Explorer, testing of the system under test (SUT) is performed in one of three modes: Test Code Generated (TCG) testing, Dynamic Traversal (DT) testing, or On-The-Fly (OTF) testing. The mode is chosen and controlled with Spec Explorer switches (see SwitchClause). TCG testing is the default mode and was the only mode available in earlier releases of Spec Explorer. The three modes differ in the way the model is evaluated and the way tests are carried out.

TCG testing involves three distinct phases: exploration, test code generation, and test execution. TCG testing is sometimes called "offline" testing because the running of test cases is decoupled from the model evaluation that generated them. An entire test suite is generated from the full exploration result before the test cases in the suite can be run stand-alone. In other words, the test cases run offline, decoupled from both the design-time process of model exploration and the subsequent test code generation from the exploration result.

DT testing is a special case of TCG testing. Rather than providing a complete set of executable test cases from the exploration result, the DT feature provides the transition system, method delegates, and a logging feature, and allows the user to create their own dynamic traversal strategy by implementing the IDynamicTraversal interface. Spec Explorer provides a default class for DT testing, and a default base class to simplify user implementation of the IDynamicTraversal interface. Note that DT contains only a single test method.
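
The exact shape of the IDynamicTraversal interface is not shown here. The following C# sketch only illustrates the idea of a user-defined traversal strategy; the types ITransitionSystem, ITraversalStrategy, and RandomWalkStrategy are hypothetical stand-ins for illustration, not the actual Spec Explorer API.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the transition system that the DT feature exposes;
// the real Spec Explorer types and member names differ.
public interface ITransitionSystem
{
    int InitialState { get; }
    IReadOnlyList<(string Action, int Target)> EnabledSteps(int state);
}

// Hypothetical stand-in for IDynamicTraversal: a strategy that picks the next
// step to take, given the current state of the transition system.
public interface ITraversalStrategy
{
    (string Action, int Target)? ChooseNextStep(ITransitionSystem system, int currentState);
}

// A simple random-walk strategy, analogous in spirit to a user-defined dynamic
// traversal: repeatedly pick one enabled step at random until none remain.
public sealed class RandomWalkStrategy : ITraversalStrategy
{
    private readonly Random random = new Random();

    public (string Action, int Target)? ChooseNextStep(ITransitionSystem system, int currentState)
    {
        var steps = system.EnabledSteps(currentState);
        if (steps.Count == 0)
            return null;                        // no enabled step: traversal ends
        return steps[random.Next(steps.Count)]; // otherwise pick one at random
    }
}
```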

OTF testing seamlessly merges model evaluation and test activities. OTF testing is sometimes called "online" testing because the running of test cases is tightly coupled with model evaluation. Test steps are generated and run in lockstep with model evaluation. In other words, the steps of test cases are effectively being generated online at test runtime by a tightly-coupled process of model evaluation.

The following illustration summarizes the main parts and activities of Spec Explorer's MBT process. It assumes an existing SUT and can serve as a basis for comparing the two main modes of testing in Spec Explorer.

Spec Explorer Testing Overview

The following comparison covers the TCG testing and OTF testing modes in terms of the main parts of the testing process; each part is described first for TCG testing and then for OTF testing. For the purposes of this comparison, DT testing is the same as TCG testing.


Model

TCG Testing: The model is the testable Cord machine defined using configs, behavior expressions, and construct model program rules. While the Cord machine is evaluated, optional exploration derives a directed graph structure that must be finite. This is not directly part of testing.

OTF Testing: The model is the same as in TCG testing. Because exploration is not used in OTF testing, the machine does not need to have a finite exploration result, subject to some limits described in the entries below.

Abstract Test

TCG Testing: Abstract tests are the final exploration result from the Cord machine, using construct test cases to generate test coverage derived from the initial model exploration. These need to be error free, in Test Normal Form, and free of symbolic values (unless used "experimentally").

OTF Testing: Abstract tests are steps (input and output) selected directly by the OTF strategy from evaluation of the chosen Cord machine, without any derived exploration. These should be (but need not be) error free, need not be in Test Normal Form, and have no symbolic values (unless used "experimentally"). Because eager evaluation is used within a state, the number of steps per state must be finite; because states are evaluated lazily, the theoretical total number of states need not be finite.

Executable Test

TCG Testing: Executable tests comprise the generated C# test suite created by TCG to be executed independently of the Spec Explorer tool. The generated test code contains the essential behavioral information (as steps) derived from the original model, including both inputs to the SUT and their corresponding expected outputs. The set of expected outputs is sometimes called the "test oracle". The rest of the model's content, especially state values, is not embedded in the generated test code, and thus the generated test code has no connection to the model.

OTF Testing: Executable tests consist of input steps obtained from machine evaluation that are sent to the SUT by the Spec Explorer OTF test controller, with the resulting SUT responses being compared to expected results in a test comparator. Only steps are controlled and observed by the OTF test engine. Thus, like TCG testing, the model and the SUT are never in direct contact and no model state is shared.

The following comparison covers the three modes in terms of the major activities of the testing process; each activity is described in turn for TCG, DT, and OTF testing.


Evaluate Model (action machines)

TCG Testing: Model exploration is decoupled from test execution. Complete state exploration is performed with a notion of state equivalence, and all possible actions (steps) are enumerated. The result of exploration must be a finite exploration graph (Finite State Machine). Bounds may be used to artificially force a finite exploration result, but only for exploration not intended for testing. For exploration intended for testing, exploration covers all possible model behavior.

DT Testing: Model exploration is decoupled from test execution. All transitions are explored, but traversal is not performed. Other aspects of model evaluation remain the same as in TCG testing.

OTF Testing: Model evaluation is tightly coupled to test execution. Model state is evaluated lazily. The OTF strategy, which may be configurable, chooses one enabled action (step) at a time, if any are available. The actual results returned by the SUT drive the selection of allowed observable actions; thus the OTF test experiment creates a deterministic trace. Coverage of all possible model behaviors is not guaranteed, but OTF model evaluation may examine the model's state space in various ways. OTF is especially useful for large, even effectively infinite, state spaces.

Derive Abstract Test

TCG Testing: Traverse the exploration result to generate abstract tests.

DT Testing: Same as TCG testing.

OTF Testing: To generate an experiment trace, select one of the next actions (input or output) that the model predicts as possible.

Create Executable Test

TCG Testing: Test code is generated for complete behavioral coverage of the model according to the test strategy. Generated test code runs standalone (offline). Tests are structured into test suites that serve as containers of test cases. A complete test suite always fulfills the selected test-strategy guarantees. Test cases are simply artifacts of test code generation created as part of meeting the test strategy guarantees.

DT Testing: Test code is generated from the transition system. DT provides the same test coverage as TCG. However, when the transition system is completely covered, DT will start over and retest the transition system. This is useful for stress testing.

OTF Testing: Each experiment expresses one instance of the chosen OTF strategy's notion of a single trace of interactions with the SUT, constrained by what the model defines as allowable. Test runs are collections of experiments generated against the same model, usually carried out in one continuous session. Experiments are not necessarily unique; there could be duplicates within a test run, although duplicates are generally unlikely.

Run Conformance Test

TCG Testing: Generated test code is run independently within its own test framework. TCG conformance test execution does not depend upon any runtime support from the model.

DT Testing: Same as TCG.

OTF Testing: No standalone test code is generated. Evaluation of the model alternates with interactions with the SUT. Controllable actions that are input to the SUT are chosen from those enabled within the model. To test the SUT's conformance to the model, observable actions output by the SUT are matched against those currently enabled in the model. Any input that the model predicts as possible is legal to send to the SUT, but the corresponding output from the SUT must conform to what the model predicts, or conformance fails.

Replay Test Result

TCG Testing: Not currently available. Generally, TCG testing can achieve a similar purpose by rerunning the original test case (or complete test suite) without modification based upon prior test results.

DT Testing: Same as TCG.

OTF Testing: A previous experiment may be used to guide OTF testing. Using the prior trace, the chosen replay strategy will attempt to recreate the trace according to its heuristics; success is not guaranteed. Replay is just a special case of OTF testing with an additional input (the prior trace) and OTF strategies that take it into account. Additionally, in OTF replay, there can be an inconclusive test result.

TCG Testing

When test code is generated from a model, each step in the model will result either in calling a method and waiting for it to return, or waiting for an event to be raised by the implementation. This depends on whether the action invoked by the step was declared as a call-return or as an event, respectively. The method will be called on the declaring type provided in the action declaration, which can be the system being tested itself, or an adapter sitting between the test suite and the implementation.
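
As a rough illustration (not code generated by Spec Explorer), the following C# sketch shows what an adapter between the test suite and the implementation might look like; the names SutAdapter, Connect, and MessageReceived are hypothetical. Connect plays the role of a call-return action, and MessageReceived plays the role of an event action.

```csharp
using System;

// Hypothetical adapter sitting between the generated test suite and the SUT.
public static class SutAdapter
{
    // Call-return action: the test suite calls this method and waits for it to return.
    public static bool Connect(string server)
    {
        // Forward the call to the real implementation here.
        return true;
    }

    // Event action: the implementation (or adapter) raises it; the test suite
    // subscribes a handler to observe it.
    public static event Action<string> MessageReceived;

    // Helper the underlying implementation would call to raise the event.
    public static void RaiseMessageReceived(string payload)
        => MessageReceived?.Invoke(payload);
}
```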

For each event action, the implementation (or adapter) must provide an event variable with the same name and parameter types as the action. The test suite will construct an event handler and assign it using the += operator to the implementation event variable. When the implementation raises the event (by calling the event variable with specific arguments), the event handler provided by the test suite queues the event for eventual consumption by the test case. This allows events that occur at an earlier point in test case execution (for example, during a method call) to be consumed later.
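
The queuing mechanism can be sketched in plain C# as follows. This is not the code that Spec Explorer generates; it simply shows a handler attached with += that records raised events for later consumption, reusing the hypothetical SutAdapter from the previous sketch.

```csharp
using System.Collections.Concurrent;

// A recorded occurrence of an event action, kept for later consumption.
public sealed class ObservedEvent
{
    public string Name { get; }
    public object[] Arguments { get; }

    public ObservedEvent(string name, params object[] arguments)
    {
        Name = name;
        Arguments = arguments;
    }
}

public static class EventQueue
{
    public static readonly ConcurrentQueue<ObservedEvent> Queue =
        new ConcurrentQueue<ObservedEvent>();

    public static void Subscribe()
    {
        // Attach a handler with += so every raised event is queued, even if it
        // occurs while a method call is still in progress.
        SutAdapter.MessageReceived += payload =>
            Queue.Enqueue(new ObservedEvent("MessageReceived", payload));
    }
}
```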

Whenever the execution of a test case reaches a state in which the model expects one or more events to occur, it consumes the top element in the queue and compares it to the expected events. If it matches one of them, the test case proceeds to the target state of that step. If it does not, the test case fails. If the queue is empty and the model state has an outgoing call step, the test waits for an event to be raised until the proceed-control timeout (configured by a Cord switch) elapses and then invokes the call action on the implementation. If there is no outgoing call step and the queue remains empty after the configured quiescence timeout elapses, the test succeeds if the state was accepting, and fails otherwise.
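
A conceptual C# sketch of this consume-and-compare logic follows; it reuses the ObservedEvent type from the previous sketch, and the method name, parameters, and timeout handling are placeholders rather than the actual generated test code.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;

public static class EventConsumption
{
    // Wait for the next observed event (up to the quiescence timeout) and compare
    // it to the events the model expects in the current state.
    public static bool TryMatchExpectedEvent(
        ConcurrentQueue<ObservedEvent> queue,
        string[] expectedEventNames,
        TimeSpan quiescenceTimeout,
        bool stateIsAccepting)
    {
        var deadline = DateTime.UtcNow + quiescenceTimeout;
        while (DateTime.UtcNow < deadline)
        {
            if (queue.TryDequeue(out var observed))
            {
                // An observed event must match one of the expected events;
                // otherwise the test case fails.
                return expectedEventNames.Contains(observed.Name);
            }
            Thread.Sleep(10); // give the implementation time to raise an event
        }
        // The queue stayed empty until the timeout elapsed: the test succeeds
        // only if the current state is an accepting state.
        return stateIsAccepting;
    }
}
```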

Here is an illustration that shows an overview of TCG Testing.

TCG Testing Overview

Exploration is a systematic evaluation and analysis of the entire model. The logic of the conformance testing is fully expressed within the generated executable test cases. Thus, the tool that generated these tests is no longer needed to carry out the testing. Typically a test harness can control the testing process independently of the Spec Explorer tool.

DT Testing

Here is an illustration that shows an overview of DT testing. Note that the only difference between this diagram and the TCG diagram is in the Executable Test Cases block.

DT Testing Overview

DT testing works exactly the same as TCG testing with the exception that the transition system, rather than the full exploration result, is made available for test code generation. The DT system can then perform multiple passes through the Abstract Test Cases in a single testing session. DT provides a default traversal strategy, but the user can create and execute their own traversal strategy.

OTF Testing

Here is an illustration that shows an overview of OTF Testing.

OTF Testing

In OTF testing, all of the testing is done within the Spec Explorer tool. Rather than a complete exploration of the entire model, model evaluation and conformance testing are performed one step at a time for each action step of the model. The logic of the conformance testing is derived directly from the model. Thus, the Spec Explorer tool that generates these test steps is an integral part of the ongoing testing process.
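
The following C# sketch conveys the general shape of such an online loop. The IOnlineModel interface is a hypothetical stand-in (not the Spec Explorer OTF engine), and the assumption that every input produces exactly one observable output is a simplification for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for a lazily evaluated model: from its current state it
// reports the enabled controllable (input) actions and the observable (output)
// actions it currently allows.
public interface IOnlineModel
{
    IReadOnlyList<string> EnabledInputs();
    IReadOnlyList<string> AllowedOutputs();
    void Advance(string action); // move the model along the step that was taken
}

public static class OtfLoop
{
    // One simplified OTF experiment: alternate between sending a model-enabled
    // input to the SUT and checking the SUT's output against what the model allows.
    public static bool RunExperiment(IOnlineModel model, Func<string, string> callSut, int maxSteps)
    {
        var random = new Random();
        for (int step = 0; step < maxSteps; step++)
        {
            var inputs = model.EnabledInputs();
            if (inputs.Count == 0)
                return true;                               // nothing left to do

            var input = inputs[random.Next(inputs.Count)]; // any enabled input is legal
            var output = callSut(input);                   // interact with the SUT
            model.Advance(input);

            if (!model.AllowedOutputs().Contains(output))
                return false;                              // conformance failure
            model.Advance(output);
        }
        return true;
    }
}
```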

OTF testing replay can lead to inconclusive conformance results. Normal conformance failure occurs when an observed SUT output does not conform to what the model predicts. For example, suppose the trace to be replayed was recorded in a logged-on state, which entails a set of potential actions, one of which is to edit. A logged-off state, in contrast, entails a set of potential actions that excludes the edit action. If the replay is attempted in the logged-off state, the conformance test yields an inconclusive result because the two sets of potential actions do not match: an edit action could be performed in the original trace but cannot be performed in the replay of that trace.
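
The difference between a conformance failure and an inconclusive replay result can be illustrated with a small hypothetical sketch; the enum and the classification logic below are illustrative only and are not part of Spec Explorer.

```csharp
using System.Collections.Generic;

public enum ReplayVerdict { Pass, Fail, Inconclusive }

public static class ReplayCheck
{
    // Classify a single replay step: if the action recorded in the prior trace is
    // not even enabled in the current model state (for example, "edit" while logged
    // off), the result is inconclusive rather than a conformance failure.
    public static ReplayVerdict ClassifyStep(
        string recordedAction,
        ISet<string> currentlyEnabledActions,
        bool observedOutputConforms)
    {
        if (!currentlyEnabledActions.Contains(recordedAction))
            return ReplayVerdict.Inconclusive;  // the trace cannot be recreated here
        return observedOutputConforms ? ReplayVerdict.Pass : ReplayVerdict.Fail;
    }
}
```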

See Also

Concepts

Model-Based Testing
Test Cases and Test Suites
Test Results and Conformance
Spec Explorer Concepts