prevention v. cure (part 3)

Now that the testers are once again gainfully employed, what shall we do with them? Do we point them toward writing test automation or ask them to do manual testing?

First, let’s tackle the pros and cons of test automation. Automated testing carries both stigma and respect.

The stigma comes from the fact that tests are code, and writing tests means that the tester is necessarily also a developer. Can a developer really be a good tester? Many can, many cannot, but because bugs in test automation are a regular occurrence, testers will spend significant time writing code, debugging it, and rewriting it. One must wonder how much of their time goes to thinking about testing the software as opposed to writing the test automation. It's not hard to imagine a bias toward the latter.

The respect comes from the fact that automation is cool. One can write a single program that will execute an unlimited number of tests and find bugs while the tester sleeps. Automated tests can be run and then rerun when the application code has been churned or whenever a regression test is required. Wonderful! Outstanding! How we must worship this automation! If testers are judged on the number of tests they run, automation will win every time. If they are judged on the quality of tests they run, it's a different matter altogether.
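To make that appeal concrete, here is a minimal Python sketch of a rerunnable regression check; the function under test (parse_price) and its expected values are invented for illustration, not taken from any real product:

```python
# Hypothetical rerunnable regression suite; parse_price and the expected
# values below are invented for illustration.

def parse_price(text: str) -> float:
    """Toy function under test: parse a price string like '$1,299.99'."""
    return float(text.replace("$", "").replace(",", ""))

# The same table of checks can be rerun after every code churn.
REGRESSION_CASES = [
    ("$0.99", 0.99),
    ("$1,299.99", 1299.99),
    ("$10", 10.0),
]

def run_regression_suite() -> None:
    for raw, expected in REGRESSION_CASES:
        actual = parse_price(raw)
        assert actual == expected, f"{raw!r}: got {actual}, expected {expected}"

if __name__ == "__main__":
    run_regression_suite()
    print(f"{len(REGRESSION_CASES)} regression checks passed")
```

Every rerun executes the identical checks against the current build, which is exactly the run-and-rerun property that earns automation its respect.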

The kicker is that we've been automating for years, decades even, and we still produce software that readily falls down when it gets on the desktop of a real user. Why? Because automation suffers from many of the same problems that other forms of developer testing suffer from: it's run in a laboratory environment, not a real user environment, and we seldom risk pointing automation at real customer databases because automation is generally not very reliable (it is software, after all, and one must question how much it gets tested). Imagine automation that adds and deletes records in a database: what customer in their right mind would allow that automation anywhere near their data? And there is one Achilles' heel of automated testing that no one has ever solved: the oracle problem.

The oracle problem is a nice name for one of the biggest challenges in testing: how do we know that the software did what it was supposed to do when we ran a given test case? Did it produce the right output? Did it do so without unwanted side effects? How can we be sure? Is there an oracle we can consult that will tell us, given a user environment, data configuration, and input sequence, that the software performed exactly as it was designed to? Given the reality of imperfect (or nonexistent) specs, no such oracle exists for modern software testers.
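A hedged sketch of the problem in miniature, with a hypothetical function and a hand-picked expected value: the assertion below is only as trustworthy as the hard-coded number that serves as its oracle.

```python
# The oracle problem in miniature. apply_discount and the expected
# value 1080 are hypothetical; the point is where 1080 comes from.

def apply_discount(price_cents: int, percent: int) -> int:
    """Hypothetical function under test: discount a price held in integer cents."""
    return price_cents - (price_cents * percent) // 100

def test_apply_discount() -> None:
    # The literal 1080 *is* the oracle. Someone had to know, from a spec
    # or from somewhere, that a 10% discount on 1200 cents yields 1080.
    # With an imperfect or missing spec, that knowledge has no source.
    assert apply_discount(1200, 10) == 1080

test_apply_discount()
```

For a trivial calculation we can work out 1080 by hand; for a real application's behavior across environments, data, and input sequences, no such hand-worked answer is available.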

Without an oracle, test automation can only find the most egregious of failures: crashes, hangs (maybe), and exceptions. And the fact that automation is itself software often means that the crash is in the test case and not in the software under test! Subtle and/or complex failures are missed entirely.
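As an illustration (the buggy function and the harness are invented for this sketch), here is what oracle-less automation amounts to: a harness that only notices exceptions will happily pass code that computes the wrong answer on every single run.

```python
import random

def buggy_average(values: list[float]) -> float:
    """Hypothetical function under test: subtly wrong, but it never crashes."""
    return sum(values) / (len(values) + 1)  # bug: divisor should be len(values)

def crash_only_harness(runs: int = 1000) -> None:
    """Without an oracle, this automation can only detect exceptions."""
    for _ in range(runs):
        values = [random.uniform(-1000, 1000) for _ in range(random.randint(1, 50))]
        try:
            buggy_average(values)  # result is wrong every time, but nothing checks it
        except Exception as exc:
            print(f"FAIL: crash on {values!r}: {exc}")
            return
    print("PASS: yet the average was wrong on every single run")

if __name__ == "__main__":
    crash_only_harness()
```

The harness reports PASS because nothing ever throws; the off-by-one divisor, a perfectly ordinary subtle failure, sails through untouched.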

So where does that leave the tester? If a tester cannot rely on developer bug prevention or automation, where should she place her hope? The only answer can be in manual testing. That will be the topic of part four of this series.

Comments

  • Anonymous
    February 03, 2009
    [Nacsa Sándor, January 13 – February 3, 2009]  The subject of quality assurance is hardly known at all

  • Anonymous
    February 08, 2009
    [Nacsa Sándor, February 6, 2009] This Team System edition is for testing web applications and services