more about test case reuse

We mostly write test cases that are specifically tied to a single application. This shouldn’t come as any big surprise given that we’ve never expected test cases to have any value outside our immediate team. But if we want to complete the picture of reusable test cases that I painted in my last post, we need to write test cases that can be applied to any number of different apps.

Instead of writing a test case for an application, we could move down a level and write test cases for features instead. There are any number of web applications, for example, that implement a shopping cart, so test cases written for such a feature should be applicable to all such apps. The same can be said of many common features like connecting to a network, making SQL queries against a database, username and password authentication, and so forth. Feature-level test cases are far more reusable and transferable than application-specific test cases.
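
As a rough illustration of the idea, here is a minimal sketch of a feature-level test written against an abstract cart rather than a specific application; the ShoppingCart interface and its methods are assumptions for illustration, not any real product's API. Any application that supplies a small adapter implementing the interface could reuse the test unchanged.

    # Minimal sketch of a feature-level test. The ShoppingCart interface and its
    # methods are hypothetical; each application would supply its own adapter.
    from abc import ABC, abstractmethod

    class ShoppingCart(ABC):
        """Contract a cart adapter must satisfy for the test to apply."""

        @abstractmethod
        def add_item(self, sku: str, quantity: int) -> None: ...

        @abstractmethod
        def total_items(self) -> int: ...

    def test_adding_items_increases_count(cart: ShoppingCart) -> None:
        # The assertion is about the shopping-cart feature, not any one application.
        before = cart.total_items()
        cart.add_item("SKU-123", 2)
        assert cart.total_items() == before + 2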

The more focused we make the scope of the test cases we write, the more general they become. Features are more focused than applications, functions and objects are more focused than features, controls and data types are more focused than functions, and so forth. At a low enough level, we have what I like to call “atomic” test cases. A test atom is a test case that exists at the lowest possible level of abstraction. Perhaps you’d write a set of test cases that simply submits alphanumeric input into a text box control. It does one thing only and doesn’t try to be anything more. You may then replicate this test atom and modify it for different purposes. For example, if the alphanumeric string in question is intended to be a username, then a new test atom that encodes the structure of valid usernames could be refined from the existing atom. Over time, thousands (and hopefully orders of magnitude more) of such test atoms would be collected.
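
To make the text-box example concrete, here is a minimal sketch of what an atom and one refinement of it might look like; the TextBox protocol and both atom functions are hypothetical names standing in for whatever UI automation layer drives the control.

    # Hypothetical sketch of a test atom and a refined atom. TextBox is an
    # assumption, not a real control API.
    import random
    import string
    from typing import Protocol

    class TextBox(Protocol):
        def set_text(self, value: str) -> None: ...
        def get_text(self) -> str: ...

    def alphanumeric_atom(box: TextBox, length: int = 12) -> None:
        """Atom: submit random alphanumeric input and confirm it is accepted verbatim."""
        value = "".join(random.choices(string.ascii_letters + string.digits, k=length))
        box.set_text(value)
        assert box.get_text() == value

    def username_atom(box: TextBox) -> None:
        """Refined atom: the same check, but the input encodes valid-username structure."""
        value = "user_" + "".join(random.choices(string.ascii_lowercase, k=6))
        box.set_text(value)
        assert box.get_text() == value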

Test atoms can be combined into test molecules. Two alphanumeric string atoms might be combined into a test molecule that tests a username and password dialog box. I can see cases where many independent test authors would build such molecules; over time the best molecule would win out, yet the alternatives would still be available. With the proper incentives, test case authors would build any number of molecules that could then be leased or purchased for reuse by application vendors that implement similar functionality.
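
Continuing the sketch above, a molecule might simply compose the two atoms; the LoginDialog object, its fields, and its methods are again hypothetical names used only for illustration.

    # Hypothetical sketch of a test molecule built from the atoms sketched earlier.
    # LoginDialog and its members are assumptions, not a real API.
    def login_dialog_molecule(dialog) -> None:
        """Molecule: exercise the username and password atoms together, then submit."""
        username_atom(dialog.username_box)        # atom from the sketch above
        alphanumeric_atom(dialog.password_box)    # atom from the sketch above
        dialog.submit()
        assert dialog.logged_in()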

At some point, enough test atoms and molecules would exist that the need to write new, custom tests would be minimal. I think that something like Wikipedia, a site with user-supplied, policed and maintained content, is what the industry would need to store all these tests. Perhaps such a community Testipedia can be constructed, or companies can build their own internal Testipedias for sensitive applications. Either way, a library of environment-carrying (see my last post) test atoms and molecules would have incredible value.

A valuable extension of this idea is to write atoms and molecules in such a way that they understand whether or not they apply to a given application. Imagine highlighting and then dragging a series of ten thousand tests onto an application, having the tests themselves figure out whether they apply, and then having them run themselves over and over within different environments and configurations.
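
A minimal sketch of what such self-applying tests might look like, assuming a hypothetical SelfApplyingTest record and an untyped application handle:

    # Hypothetical sketch: each test carries its own applicability check, so a
    # harness can be handed a large pile of tests and let each one decide
    # whether it applies before running. All names here are assumptions.
    from dataclasses import dataclass
    from typing import Any, Callable, List

    @dataclass
    class SelfApplyingTest:
        name: str
        applies_to: Callable[[Any], bool]   # inspects the application under test
        run: Callable[[Any], None]          # the actual test body

    def run_applicable(tests: List[SelfApplyingTest], app: Any) -> None:
        for test in tests:
            if test.applies_to(app):        # only tests that recognize the app execute
                test.run(app)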

Ah, but now I am just dreaming.

Comments

  • Anonymous
    January 23, 2009
    Wow, these sound like complicated test molecules - especially those that are expected to handle unknown input (future applications.)  How do we test the tests to make sure they properly apply themselves and then produce the correct result? (Free blog topic: How do we test tests?)

  • Anonymous
    January 26, 2009
    I love the idea of test atoms and molecules. I think in mature platforms and products they have potential. The problem is that in order to trust the test code, you have to test it. Who tests the tests? And how? To what standard do you hold them? Once you elevate your tests to this level, suddenly your test code needs to meet production guidelines. Once you make that requirement, you dramatically slow testing on the actual product you want to ship. This in practice is a non-starter unless you are actually shipping a test harness.

    I would propose reversing the thought train here. Rather than adding more elaborate test automation on the end of the product, push testability back into the product. The Zune bug that started this discussion has a root common to a lot of bugs: the developers thought something hard was easy. Date-time functions are harder than they look. So rather than make sure that this particular version of the solution is correct, push to use a known and debugged solution to the problem. Anytime developers work on date-time, security, sorting, search or anything else that's proven to be problematic, push them to use a common library. If the library must be home grown, test the snot out of it as a standalone feature. Don't try to test it all wrapped up in the whole cloth of the application. When you call the API or a well-known library you insulate yourself from these sorts of low-level bugs.

    The two (perceived) drawbacks of this approach are generally red herrings. First is that you will just be shipping someone else's bugs. To an extent this is true, but I like my odds with a Version 5 library that many, many people have used for years versus a brand new home brew solution. Second is that performance will be better with the home brew solution. Maybe it will be. Probably it won't matter. I have seen more, and harder to diagnose, bugs come out of a misguided attempt to make everything perform early in the ship cycle. Performance is important at ship time. Working, testable code is more important from day one. Put another way, it's easier to make code faster than clearer.

    Testers are always thinking of great ways to do a better job. I think we err on the side of adding complexity too often. Well considered and balanced test automation can make or break a product. Too much is often more tempting and dangerous than too little.

  • Anonymous
    February 03, 2009
    [Nacsa Sándor, January 13 – February 3, 2009] The subject of quality assurance is hardly known at all

  • Anonymous
    February 08, 2009
    [Nacsa Sándor, February 6, 2009] This Team System edition is for testing web applications and services

  • Anonymous
    March 09, 2009
    Here you go. Someone has gone ahead and created the repository. http://testforge.net/wiki/TestForge Now, let's see if anyone fills it with useful test cases.

  • Anonymous
    April 17, 2009
    The idea of creating atomic test cases is really good. We can reduce a lot of test case development time using this approach. This is similar to keyword-driven test case design. A test designer can define the following keywords for a login screen with username, password, OK and Cancel buttons: UserName_EnterData, Password_EnterData, Login_Click, Cancel_Click. We can combine these keywords to test multiple scenarios: whether users can log in without entering a password, whether users can log in without entering a username, and whether users can log in after entering a correct username and password. Please find more information on this technique at http://www.stickyminds.com/s.asp?F=S14393_ART_2

  • Anonymous
    August 11, 2010
    I think you'll find that the Axe tool from Odin Technology slots into 'Atoms' and 'Molecules' quite nicely. Only rather than giving them abstract names, the tool calls them what they are - Subtests and Tests. Let's not go down the road of re-inventing the wheel and calling it an orbital object!

  • Anonymous
    November 01, 2010
    The really good part of test portability is when a tool framework has been created that uses meta-programming to turn the test spec into compilable test code (and binaries). The framework can generate code in .NET or C, for instance. The devil is often in how you verify that the framework works, and we will have to see whether it really takes too long to self-verify the framework and the meta-programming itself. I believe the benefits and tools to create these test-generator frameworks are pretty commonplace, and in use already in small ways. Not so sure we should be going orbital though :-)

  • Anonymous
    June 02, 2011
    I agree. It is a dream! Where am I?
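
As a rough illustration of the keyword-driven composition described in the April 17, 2009 comment above, here is a minimal sketch; the LoginScreen class and every keyword function are hypothetical names used for illustration, not any real tool's API.

    # Hypothetical sketch of keyword-driven test composition for a login screen.
    # All names (LoginScreen, the keyword functions) are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class LoginScreen:
        """Stand-in for the screen under test; records what was entered."""
        username: str = ""
        password: str = ""
        logged_in: bool = False

        def try_login(self) -> None:
            # Toy rule: any non-empty username and password succeeds.
            self.logged_in = bool(self.username) and bool(self.password)

    # Keywords: each does one small thing, mirroring the comment's examples.
    def UserName_EnterData(screen: LoginScreen, value: str) -> None:
        screen.username = value

    def Password_EnterData(screen: LoginScreen, value: str) -> None:
        screen.password = value

    def Login_Click(screen: LoginScreen) -> None:
        screen.try_login()

    # Scenario composed from keywords: logging in without a password should fail.
    screen = LoginScreen()
    UserName_EnterData(screen, "alice")
    Login_Click(screen)
    assert not screen.logged_in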