Did You? Did You Really? Loosely Coupled Comprehensive Verification

Verifying that a test case’s actions had the expected result is perhaps the most important part of testing. Every test case does something at least a little differently than every other test case, so the expected results are often a little different. These minute differences make it difficult to factor verification out into shared code, and so verification code tends to be embedded in, and duplicated across, each test case.

Intermixing test case execution code with test case verification code further complicates matters. Initial state must be gathered before individual operations are executed. Expected state can be calculated anytime between when initial state is recorded and just before actual state is verified. Verification that actual state matches expected state must of course be done sometime after each operation is executed; often immediately after, if subsequent steps in the test case will destroy the current actual state. All of this makes it difficult to differentiate between execution code and verification code.
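
To make that intermixing concrete, here is a minimal sketch of the traditional style in Python; every name (app, rect, log, and so on) is invented for illustration, not taken from any real framework:

```python
# Hypothetical traditional test case: execution and verification code
# are interleaved, which makes them hard to tell apart or factor out.

def test_move_rectangle(app, log):
    rect = app.draw_rectangle(10, 10, 50, 30)

    # Gather initial state before the operation executes.
    old_x, old_y = rect.position

    # Calculate expected state (anytime before actual state is verified).
    expected_position = (old_x + 20, old_y)

    # Execute the operation.
    rect.move(dx=20, dy=0)

    # Verify immediately, before later steps destroy the current state.
    if rect.position != expected_position:
        log.fail(f"position: expected {expected_position}, got {rect.position}")
```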

Separately, the set of properties that are typically verified is nowhere near the complete set that would be necessary for truly comprehensive verification (that is, verifying every property after every operation). The copious amount of work required to do so is generally deemed not worth the trouble. This is especially true since for any particular operation most properties will be unchanged. Experienced testers, though, will recognize that this is exactly how the most insidious bugs manifest: as changes in something that should be completely unaffected by the operation.

We have bypassed these problems by decoupling verification from execution. Loosely Coupled Comprehensive Verification is easy to explain and almost as easy to implement. Just before a test case or LFM method executes an operation, it notifies the Verification Manager that it is about to do so and also provides any relevant details. The test case or LFM method next executes the operation, and then finally it notifies the Verification Manager that it has completed the operation. That’s it as far as the test case or LFM method is concerned!
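
A minimal sketch of that protocol from the test case’s side, under the same invented names as above; about_to_execute and operation_completed are assumptions for illustration, not the actual API:

```python
# Hypothetical test case using the notification protocol. The test
# case never verifies anything itself; it only brackets each
# operation with "about to" and "completed" notifications.

def test_set_rectangle_fill(app, verifier):
    rect = app.draw_rectangle(10, 10, 50, 30)

    # 1. Notify the Verification Manager of the upcoming operation,
    #    passing the details it needs to compute expected state.
    verifier.about_to_execute("SetFill", target=rect.id, color="red")

    # 2. Execute the operation.
    rect.set_fill("red")

    # 3. Notify the Verification Manager that the operation completed.
    verifier.operation_completed()
```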

When the Verification Manager is notified that something is about to happen, it baselines current state and then works with a set of Expected State Generators to determine the expected state. Upon notification that the operation has completed, the Verification Manager compares actual state against expected state and logs any differences as failures.
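
Continuing the sketch, a Verification Manager might look something like this; again, every name here is an assumption, not the real implementation:

```python
# Hypothetical Verification Manager: baseline state before the
# operation, derive expected state via the operation's Expected
# State Generator, diff actual against expected afterward.

DONT_CARE = object()  # sentinel a generator can use; see below

class VerificationManager:
    def __init__(self, app, generators, log):
        self.app = app                # the application under test
        self.generators = generators  # maps operation name -> generator
        self.log = log
        self.expected = None

    def about_to_execute(self, operation, **details):
        # Baseline: snapshot every property we know how to verify.
        baseline = self.app.snapshot_state()
        # Only the generator knows how this operation changes state.
        generator = self.generators[operation]
        self.expected = generator.expected_state(baseline, **details)

    def operation_completed(self):
        actual = self.app.snapshot_state()
        for prop, want in self.expected.items():
            if want is DONT_CARE:
                continue  # the generator opted out of this property
            if actual.get(prop) != want:
                self.log.fail(
                    f"{prop}: expected {want!r}, got {actual.get(prop)!r}")
```

Because the baseline is snapshotted fresh on every about_to_execute call, expected state is always relative to whatever the application’s state actually is at that moment.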

This loose coupling between verification and the rest of the system makes the verification subsystem very flexible. If the details of how a particular expected state is calculated change, the corresponding Expected State Generator is the only entity that has to change. Similarly, if the set of properties being verified changes, nothing outside the verification subsystem needs to be modified.

Another benefit we get from this scheme is a dramatic reduction in follow-on failures – failures that occur solely because some previous action failed. Because we baseline expected state before every action, it is always relative to the current state of the application, so a previous failure that has no effect on an action won’t fail that action just because the verification code expected that previous action to succeed. This eliminates “noise” failures and allows us to concentrate on the real problem.

Because verification details are decoupled from execution, the set of properties being verified can start small and expand over time. Helping this happen is the ability to say “I don’t care” what happens to a particular property as a result of a particular operation. Any property with such a value is ignored when actual state is compared to expected state after the operation has completed. Once the expected result is known, the tester simply updates the Expected State Generator appropriately, and suddenly every test case automatically expects the new behavior.
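
An Expected State Generator under the same assumptions might express “I don’t care” with the DONT_CARE sentinel from the manager sketch above; any property holding that value is skipped during comparison:

```python
# Hypothetical generator for the SetFill operation. Most properties
# carry over from the baseline unchanged; only the fill is expected
# to change, and one property is explicitly not verified yet.

class SetFillGenerator:
    def expected_state(self, baseline, target, color):
        expected = dict(baseline)           # most properties are unchanged
        expected[f"{target}.fill"] = color  # the one expected change
        # We haven't yet pinned down what SetFill should do to the
        # selection highlight, so opt out of verifying it for now.
        expected[f"{target}.selection_highlight"] = DONT_CARE
        return expected
```

Registering this generator under the "SetFill" key in the manager’s generators map would wire it up to the test case sketched earlier; replacing DONT_CARE with a real expected value later takes effect in this one place.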

Comments

  • Anonymous
    May 30, 2005
    Have you developed a framework and possibly an example of this?

    Thanks,

    adam at dymitruk dot com
  • Anonymous
    May 30, 2005
    This is all a little vague. I'm just a simple tester, so I don't deal well with that. Could you explain why this reduces follow-on failures? I would expect some kinds of test failures to result in polluting the machine, resulting in unreliable results in further actions. Or are you saying that you have a priori knowledge of which kinds of failures might be blocking failures for later test cases? Or maybe you're saying that the test cases actually exercise different paths in the dev code than you may have intended, but (even though you're not testing what you had meant to) you can tell that you got the correct result given the initial environment? If it's the latter it seems to me that you could report only one bug found by the test and inadvertently hide other bugs until that initial blocking bug is fixed. Seems awfully dangerous. Especially near a big milestone (beta, RC, RTM, even an IDX or an RI). How do you deal with that problem?

    Re-reading what I just wrote, I hope I didn't come off as too confrontational. Not trying to attack you. Honestly just curious.
  • Anonymous
    June 01, 2005
    Adam: We are developing a framework for this, but nothing I can share just yet I'm afraid. I'll elaborate with some examples after I complete this series.
  • Anonymous
    June 01, 2005
    Drew: A typical scripted test case (for a drawing program like Microsoft Visio, say) goes something like this:
    1) Draw a rectangle. Verify the rectangle appears in the expected location.
    2) Set the rectangle's fill to be red. Verify its fill turns red.
    3) Move the rectangle. Verify its new location is correct. Verify it is still red.

    If in Step 2 the rectangle actually turns green, Step 2's verification will fail, but Step 3's will too, even though the step itself succeeded. This is a follow-on failure. You have to take the time to look at this failure and determine that the step didn't really fail.

    With Loosely Coupled Comprehensive Verification, Step 2's verification will still fail. Because expected state is re-baselined before every step, however, Step 3 now (automatically, mind you) becomes "Move the rectangle. Verify its new location is correct. Verify it is still green." It passes - no follow-on failure this time!

    So there's no knowledge of what failures are blocking test cases, and no danger of missing bugs (none introduced by the use of this technique anyway). Just elimination of one problem keeping us from doing the best testing we can do!
  • Anonymous
    July 20, 2005
    Your Logical Functional Model lets you write test cases from your user's point of view, test cases that...
  • Anonymous
    July 20, 2005
    In many of my posts I have alluded to the automation stack my team is building, but I have not provided...
  • Anonymous
    August 03, 2005
    I think my team - much of Microsoft, in fact - is going about testing all wrong.
    My team has a mandate...
  • Anonymous
    March 13, 2007
    Michael Hunter is a well-known tester both inside and outside of Microsoft. Michael writes a testing column