prevention v. cure (part 5)

Ok, we're getting to the end of this thread and probably the part that most of you have asked about: exploratory testing, particularly how it is practiced at Microsoft.

We define four types of exploratory testing. This isn't meant as a taxonomy; it's simply a convenience, but it underscores that exploratory testers don't just test: they plan, they analyze, they think, and they use any and all documentation and information at their disposal to make their testing as effective as possible.

Freestyle Exploratory Testing

Freestyle exploratory testing is ad hoc exploration of an application's features in any order, using any inputs, without regard to which features have and have not been covered. Freestyle testing employs no rules or patterns; you just do it. It's unfortunate that many people think that all exploratory testing is freestyle, because that undersells the technique by a long shot, as we'll see in the following variations.

One might choose a freestyle test as a quick smoke test to see if any major crashes or bugs can be easily found, or to gain some familiarity with an application before moving on to more sophisticated techniques. Clearly, not a lot of preparation goes into freestyle exploratory testing, nor should it. In fact, it's far more 'exploratory' than it is 'testing', so expectations should be set accordingly.

There isn’t much experience or information needed to do freestyle exploratory testing. However, combined with the exploratory techniques below, it can become a very powerful tool.
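To make the idea concrete, here is a minimal sketch of what an automated approximation of freestyle exploration might look like. The `Application` class, its `actions()` and `perform()` methods, and the input values are hypothetical placeholders for whatever system is under test; real freestyle exploration is, of course, a human activity.

```python
import random

class Application:
    """Hypothetical stand-in for the system under test."""

    def actions(self):
        # The UI actions currently available, in no particular order.
        return ["open_file", "type_text", "resize_window", "undo", "save"]

    def perform(self, action, data):
        # Drive the application; a real harness would raise on a crash or hang.
        print(f"performing {action!r} with input {data!r}")

def freestyle_smoke_test(app, steps=50, seed=None):
    """Ad hoc exploration: any feature, any order, any input, no coverage bookkeeping."""
    rng = random.Random(seed)
    for _ in range(steps):
        action = rng.choice(app.actions())                    # any feature, any order
        data = rng.choice(["", "0", "-1", "a" * 1000, "☃"])   # any input
        try:
            app.perform(action, data)
        except Exception as exc:   # only obvious crashes are interesting at this stage
            print(f"possible bug: {action!r} with {data!r} raised {exc!r}")
            raise

if __name__ == "__main__":
    freestyle_smoke_test(Application(), steps=10, seed=42)
```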

Scenario-based Exploratory Testing

Traditional scenario-based testing starts from user stories or documented end-to-end scenarios that we expect our ultimate end user to perform. These scenarios can come from user research, data from prior versions of the application, and so forth, and they are used as scripts to test the software. Adding an exploratory element to traditional scenario testing widens the scope of the script to inject variation, investigation, and alternative user paths.

An exploratory tester who uses a scenario as a guide will often pursue interesting alternative inputs or investigate some potential side effect that is not included in the script. However, the ultimate goal is to complete the scenario, so these testing detours always end up back on the main user path documented in the script.
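As an illustration, here is a minimal sketch of how a scripted scenario might be explored with injected variation and detours that always return to the main path. The checkout scenario, its steps, and the candidate inputs are hypothetical, and the detour logic merely stands in for the tester's judgment.

```python
import random

# Hypothetical end-to-end scenario: each step is a (description, inputs-to-try) pair.
CHECKOUT_SCENARIO = [
    ("add item to cart",        ["1 item", "99 items"]),
    ("enter shipping address",  ["typical address", "address with unicode street name"]),
    ("choose payment method",   ["credit card", "gift card with zero balance"]),
    ("confirm order",           ["confirm"]),
]

def explore_scenario(scenario, detour_chance=0.5, seed=None):
    """Follow the documented script, but inject variation and short detours.
    Every detour returns to the main path so the scenario still completes."""
    rng = random.Random(seed)
    for step, inputs in scenario:
        chosen = rng.choice(inputs)          # vary the scripted input
        print(f"main path: {step} -> {chosen}")
        if rng.random() < detour_chance:
            # Investigate a potential side effect, then come back to the script.
            print(f"  detour: poke around '{step}', look for side effects")
            print("  detour: return to the main path")
    print("scenario completed end to end")

if __name__ == "__main__":
    explore_scenario(CHECKOUT_SCENARIO, seed=7)
```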

Strategy-based Exploratory Testing

If one combines freestyle testing with the experience, skill, and Jedi-like perception of an accomplished software tester, one ends up with this class of exploratory testing. It's freestyle exploration, but guided by known bug-finding techniques. Strategy-based exploratory testing takes all those written techniques (like boundary value analysis or combinatorial testing) and unwritten instinct (like the fact that exception handlers tend to be buggy) and uses this information to guide the hand of the tester.

These strategies are the key to being successful; the better the repertoire of testing knowledge, the more effective the testing. The strategies are based on accumulated knowledge about where bugs hide, how to combine inputs and data, and which code paths commonly break. Strategy-based testing combines the experience of veteran testers with the free-range habits of the exploratory tester.
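One of the written techniques mentioned above, boundary value analysis, is easy to sketch in code. The following is a minimal, hypothetical example that generates boundary cases for integer input ranges and naively crosses them; real combinatorial tools prune this cross product (for example to pairwise coverage), and the field names and ranges here are invented for illustration.

```python
from itertools import product

def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer range [lo, hi]:
    the edges, their immediate neighbours, and a nominal interior value."""
    nominal = (lo + hi) // 2
    return sorted({lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1})

def combine(fields):
    """Naive combinatorial step: cross the per-field boundary values."""
    names = list(fields)
    for combo in product(*(boundary_values(*fields[name]) for name in names)):
        yield dict(zip(names, combo))

if __name__ == "__main__":
    # Hypothetical form with a quantity field (1..100) and an age field (18..120).
    fields = {"quantity": (1, 100), "age": (18, 120)}
    for i, case in enumerate(combine(fields)):
        print(i, case)
```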

Feedback-based Exploratory Testing

This category of testing starts out freestyle, but as soon as a test history is built up, the tester uses that feedback to guide future exploration. Coverage is the canonical example: a tester consults coverage metrics (code coverage, UI coverage, feature coverage, input coverage, or some combination thereof) and selects new tests that improve those metrics. Coverage is only one source of feedback; we also look at code churn and bug density, among others.

I think of this as 'last time testing': the last time I visited this state of the application I applied that input, so next time I will choose another. Or: the last time I saw this UI control I exercised property A, so this time I will exercise property B.

Tools are very valuable for feedback-based testing so that history can be stored, searched and acted upon in real time. Unfortunately, few such tools exist.
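As a rough illustration of the idea (not a description of any particular Microsoft tool), the sketch below keeps a history of which inputs have been applied in each application state and prefers untried ones next time that state is visited. The state names, the input list, and the `FeedbackGuidedExplorer` class are all hypothetical.

```python
import random
from collections import defaultdict

class FeedbackGuidedExplorer:
    """'Last time testing': remember what was exercised in each application state
    and prefer inputs that have not been tried there yet."""

    def __init__(self, inputs, seed=None):
        self.inputs = inputs
        self.history = defaultdict(set)   # state -> inputs already applied there
        self.rng = random.Random(seed)

    def next_input(self, state):
        untried = [i for i in self.inputs if i not in self.history[state]]
        # Prefer anything that improves input coverage; fall back to random once exhausted.
        choice = self.rng.choice(untried or self.inputs)
        self.history[state].add(choice)
        return choice

if __name__ == "__main__":
    explorer = FeedbackGuidedExplorer(["click", "type", "drag", "shortcut"], seed=1)
    for state in ["main_window", "main_window", "main_window", "dialog", "main_window"]:
        print(state, "->", explorer.next_input(state))
```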

Comments

  • Anonymous
    February 03, 2009
    [Nacsa Sándor, January 13 – February 3, 2009] The subject of quality assurance is hardly known at all

  • Anonymous
    February 05, 2009
    [Nacsa Sándor, February 6, 2009] This Team System edition is for testing web applications and services