Why My Test Bytes

While perusing some test code recently, I was thinking about data-driven test automation. Data-driven testing is a pretty simple concept: you write test automation that takes configurable input to exercise a series of tests. In this model, a test case is one of the data inputs, and the test is the automation executing those cases.
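To make the distinction concrete, here is a minimal sketch of that separation. The feature, the cases, and all the names are hypothetical; the point is only that the test logic is written once and each test case is just a data row.

```python
def normalize_username(raw):
    """Hypothetical feature under test: trim and lowercase a username."""
    return raw.strip().lower()

# Test cases are data, kept separate from the test logic.
# Each tuple is (input, expected output).
TEST_CASES = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

def run_tests():
    """The 'test': one piece of automation that executes every case."""
    failures = []
    for raw, expected in TEST_CASES:
        actual = normalize_username(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    print(run_tests())  # an empty list means every case passed
```

Adding coverage means appending a tuple to `TEST_CASES`; the automation itself does not change.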

Sounds simple, and it usually is, but in this test it was not easy to distinguish between the test and the test case. In fact, the two concepts overlapped to the point where I began to wonder if the test code was just really bad. I hope not; it was something I developed over seven years ago, and it had worked pretty well since then. In fact, the blurring was a deliberate design choice, added so the test could generate random inputs.

So why did I think it was so bad now? When my team and I analyzed (formally, root-caused) the issue, it was pretty simple. Three things immediately stood out:

- The feature the test was developed for had evolved and matured; the test itself, however, had not undergone the same transformation, mostly because the test was considered “good” and no one wanted to change it. Sound familiar?

- The original design blurred the lines between test case and test, which later made it difficult to separate the two into discrete functions. This blurring was good at the time, but as I noted before, it was not sustainable; what started out as a good idea actually made the test scale poorly.

- The test had bugs. That should have been obvious, given that we were trying to understand what we had to change. The bugs were in the design and, in at least one case, the implementation. Test automation should not have bugs; they add unnecessary risk to the feature it measures.

There are two lessons we took away from examining the test, and we implemented both.

The first lesson is this: if you are in test, you must be willing to invest in your tests to keep pace with the feature. If not, you will end up with either no tests and high costs, or tests that do not work, leading to even higher costs. You must maintain the test, revising it, modifying it, and making sure it stays relevant to your product. One sure-fire way to get that is to have the product or feature team review your tests on a regular basis: that means engineers, program managers, and your peer test development staff. Some of you will say “that’s our whole team,” and my answer is: exactly. If you are schedule driven (you probably are) and quality driven (you need to be), then make sure test review is part of the schedule and planning. You must also be willing to resource that work; it does not come for free. That will give you some confidence that the tests are sustainable and really doing the right job.

The second lesson is to question your test design from beginning to end. In this case, we did not give the test automation design (it was software, after all) the same level of inspection that we gave the feature it was developed for. When the initial version was completed, there was so much celebration over its usefulness that it was simply assumed to be good. In fact, because it was not data driven the way most tests are today, it did not scale: adding a test case usually meant adding new code to the automation. In some cases that could not be helped; in others, clearly, the test design impeded progress. That drives costs up, not down, which is not the desired result.
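The scaling difference is easy to see side by side. In the following sketch (the arithmetic feature and all names are invented for illustration), the first style pays for every new case with new code, while the second pays only with a new data row.

```python
# Style 1: each new case is a new function, i.e. new automation code.
def test_add_small():
    assert 1 + 2 == 3

def test_add_negative():
    assert -1 + 1 == 0
# ...a third case means writing and maintaining a third function.

# Style 2: the automation is written once; a case is one data row.
# Each tuple is (a, b, expected sum).
ADD_CASES = [
    (1, 2, 3),
    (-1, 1, 0),
    (0, 0, 0),  # adding this case required no new code
]

def run_add_cases():
    """Return the cases that failed; empty means all passed."""
    return [(a, b, expected) for a, b, expected in ADD_CASES
            if a + b != expected]

if __name__ == "__main__":
    print(run_add_cases())
```

In the second style, the cost of a new test case stays flat as coverage grows, which is exactly the property the original test lacked.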

To summarize, good data-driven test design is the art of test automation. It is very difficult to implement scalable data-driven test automation without introducing complexity into the test. However, if you want your test automation to scale, drive test costs down, and be sustainable, that complexity is unavoidable. Very often the test must, and should, be as sophisticated as the feature it tests; I would consider that a measure of how successful the test will be for you. Test-driven development makes this easier and builds a high degree of quality and testability into your product or feature. That is an investment your feature team should be willing to make on your behalf.

At any rate, we did not retire the test; as noted, it was really good at detecting functional regressions in the feature, and it had a really good, much-needed state machine. So we still use the test, but we also carried the lessons we learned from it into new automation that incorporates them into its design. We also started making incremental changes to the feature to build in testability, which in turn helps reduce overall test development costs.

In the end, that is what quality software is about, which is really what product development is about.

To learn more about testing, test automation, and development for testing, try these books:

How We Test Software at Microsoft

Test-Driven Development in Microsoft .NET

Testing Computer Software

Of course, the internet is everyone’s friend.

-Hazel

Technorati Tags: XPe, Standard 2009