

7: Testing in the Software Lifecycle


Testing is a vital part of software development, and it is important to start it as early as possible and to make it part of the process of deciding requirements. To get the most useful perspective on your development project, it is worthwhile devoting some thought to the entire lifecycle, including how feedback from users will influence the future of the application. The tools and techniques we've discussed in this book should help your team to be more responsive to changes without extra cost, whatever development process you follow. Nevertheless, new tools and process improvements should be adopted gradually, assessing the results after each step.

Testing is part of a lifecycle. The software development lifecycle is one in which you hear of a need, you write some code to fulfill it, and then you check to see whether you have pleased the stakeholders—the users, owners, and other people who have an interest in what the software does. Hopefully they like it, but would also like some additions or changes, so you update or augment your code; and so the cycle continues. This cycle might happen every few days, as it does in Fabrikam's ice cream vending project, or every few years, as it does in Contoso's carefully specified and tested healthcare support system.

[Figure: Software development lifecycle]

Testing is a proxy for the customer. You could conceivably do your testing by releasing the product into the wild and waiting for the complaints and compliments to come back. Some companies have been accused of having such a strategy as their business model even before it became fashionable. But on the whole, the books are better balanced by trying to make sure that the software will satisfy the customer before we hand it over.

We therefore design tests based on the stakeholders' needs, and run the tests before the product reaches the users. Preferably well before then, so as not to waste our time working on something that isn't going to do the job.

In this light, two important principles become clear:

  • Tests represent requirements. Whether you write user stories on sticky notes on the wall, or use cases in a big thick document, your tests should be derived from and linked to those requirements. And as we've said, devising tests is a good vehicle for discussing the requirements.
  • We're not done till the tests pass. The only useful measure of completion is when tests have been performed successfully.

Those principles apply no matter how you develop your software.
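
For example, a user story such as "As a customer, I can order an ice cream" can be rendered directly as a test. Here is a minimal MSTest sketch; the Cart and Order types are stand-ins invented to make the example self-contained, not the book's sample code:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stand-ins for the system under test, just to make the sketch compile.
public enum OrderStatus { Placed }

public class Order
{
    public OrderStatus Status;
    public List<string> Items = new List<string>();
}

public class Cart
{
    private readonly List<string> items = new List<string>();
    public void Add(string flavor) { items.Add(flavor); }
    public Order Checkout()
    {
        var order = new Order { Status = OrderStatus.Placed };
        order.Items.AddRange(items);
        return order;
    }
}

[TestClass]
public class OrderIceCreamStoryTests
{
    // Executable form of the story "As a customer, I can order an ice cream."
    [TestMethod]
    public void CustomerCanOrderAnIceCream()
    {
        var cart = new Cart();
        cart.Add("Pistachio");

        Order order = cart.Checkout();

        Assert.AreEqual(OrderStatus.Placed, order.Status);
        CollectionAssert.Contains(order.Items, "Pistachio");
    }
}
```

While the story stays on a sticky note, the test is linked to the requirement work item, so "the tests pass" and "the requirement is done" become the same statement.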

Process wars

Different teams have different processes, and often for good reasons. The software that controls a car's engine is critical to its users' safety and difficult to change in the field. By contrast, a vending site is less obviously safety-critical, and can be changed in hours. The critical software is developed with high ceremony—much auditing, many documents, many stages and roles—while the rapid-cycling software is developed by much less formal teams with less differentiated roles and an understandable abhorrence of documentation, specifications, and book chapters about process.

Nevertheless, development teams at different locations on this axis do have some common ground and can benefit from many of the same tools and practices.


Who creates system tests?

There are two sides to the debate over whether system tests should be performed by the developers, or by a separate group of testers. As usual, the answer is somewhere in between the extremes, and depends on your circumstances.

In projects where a rapid overall cycle is essential, a separation between development and test is not favored. It introduces a handoff delay. Bugs aren't detected until the developers have moved on.

In support of separation, the most compelling argument is that testing and development require different skills and different attitudes. While developing, you think "How can I make this as useful as it can be for its users?" but in testing you think "I am going to break this!" Members of a development team—so the argument runs—tend to be too gentle with the product, moving it nicely along the happy paths; while the skilled tester will look for the vulnerabilities, setting up states and performing sequences of actions which the developers would find perverse in order to expose a crack in the armor.
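
To make the contrast concrete, here is a hedged sketch of the kind of test a bug-hunting tester writes, picking on accented characters and oversized input rather than the happy path. The FlavorCatalog class is a hypothetical stand-in for the real component:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical stand-in for the component under test.
public class FlavorCatalog
{
    private readonly HashSet<string> flavors = new HashSet<string>();
    public void Add(string name) { flavors.Add(name); }
    public bool Contains(string name) { return flavors.Contains(name); }
}

[TestClass]
public class FlavorNameEdgeCaseTests
{
    // The happy path uses "Vanilla"; the tester asks about accents
    // and very long names instead.
    [TestMethod]
    public void AccentedFlavorNamesAreAcceptedAndFoundAgain()
    {
        var catalog = new FlavorCatalog();
        catalog.Add("Crème brûlée");
        Assert.IsTrue(catalog.Contains("Crème brûlée"));
    }

    [TestMethod]
    public void VeryLongFlavorNamesDoNotCorruptTheCatalog()
    {
        var catalog = new FlavorCatalog();
        catalog.Add(new string('é', 500));
        Assert.IsTrue(catalog.Contains(new string('é', 500)));
    }
}
```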

The most versatile testers are those who can write code, just as the most versatile developers are those who can devise a good test. At Microsoft we have a job title, Software Development Engineer in Test (SDET). SDETs are specialists whose skills go beyond simply trying out the software, and also beyond simply writing good code. A good SDET is able to create model-based tests, and has sufficient insight into the design of the system and its dependencies to spot the likely sources of error.
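
In its simplest form, a model-based test drives the system through many generated action sequences while a parallel model predicts the correct state at every step. A minimal sketch of that shape, with a trivial stand-in class for the real component:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Trivial stand-in for the real component a model-based test would target.
public class SimpleCart
{
    private int count;
    public int Count { get { return count; } }
    public void AddItem() { count++; }
    public void RemoveItem() { if (count > 0) count--; }
}

[TestClass]
public class CartModelBasedTests
{
    // Drives the cart through many generated action sequences while a
    // one-variable model predicts the correct state after each step.
    [TestMethod]
    public void RandomActionSequencesNeverBreakTheCountInvariant()
    {
        var random = new Random(1234);   // fixed seed keeps runs repeatable
        for (int run = 0; run < 100; run++)
        {
            var cart = new SimpleCart();
            int model = 0;               // the model: expected item count
            for (int step = 0; step < 50; step++)
            {
                if (random.Next(2) == 0) { cart.AddItem(); model++; }
                else if (model > 0) { cart.RemoveItem(); model--; }
                Assert.AreEqual(model, cart.Count);
            }
        }
    }
}
```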

Whether you have separate testers depends on the history of your team, and the nature of your project. We don't recommend separate development and test teams, unless you're writing a critical system. Even if different individuals focus on development and test, keep them on the same team. Move people occasionally between the two roles.

If you do have separate testers, bear in mind the following points:

  • Writing test cases is a crucial part of determining requirements. Make sure your test team members are key participants in every discussion of user stories (or other requirements). As we discussed in Chapter 4, "Manual System Tests," some teams practice acceptance test-driven development, in which the system tests are the only statement of the requirements.
  • Automated system tests are important tools for reducing regression bugs, and an important enabler for shortening your development cycles. Introduce a developer or two to the test team. Our experience is that this brings substantial improvements in many aspects of testing.

DevOps

An application spends most of its lifecycle deployed for customers to use. Although the excitement and challenge of developing a software system is intense, from an operational perspective development is just an early part of the system's lifecycle. What the operations folks care about is how well behaved the system is in actual operation.

One thing you can be certain of: No matter what you do, there will nearly always be another version, another release, another iteration. Your client, flush with their initial success, will stretch their ambition to selling crêpes as well as ice cream. Or they will discover they can't add flavors that contain accented characters (and consider this to be a bug, even though they never asked). And, yes, there will be bugs that find their way through to the working system.

By contrast with a high-ceremony critical project, in a typical web sales application the cycle is very short and updates are frequent. This is the sort of process Fabrikam has finely honed over the last few years. The time that elapses between a new business idea such as "let's let customers set up a weekly order" and implementing it in the software should be short, because it's not only software developers who practice agility. A successful business performs continuous small improvements, trying out an idea, getting customer feedback, and making further changes in response.

[Figure: Continuous improvement]

In the aircraft project, there are many iterations of development between each release of working software; in the web project, there are fewer—or maybe none. In continuous deployment, each feature is released as soon as it is done.

The study of this cycle has come to be called "DevOps." Like so many methodological insights, the point is not that we've just invented the DevOps cycle; software systems have always had next versions and updates, whether the cycles take years or hours. But by making the observation, we can evaluate our objectives from a perspective that doesn't terminate at the point of the software's delivery.

For example, we can think about the extent to which we are testing not just the software but the business process that surrounds it; what we can do to monitor the software while it is operational; and how we adopt software that is already in a DevOps cycle.

Testing the business process

Don't forget that your application is just a part of something that your users are doing, and that their real requirements are about that process. Whether they are making friends, buying and delivering ice cream, or running an oil refinery, the most important tests are not about what's displayed on the screen, but about whether the friends are kept or offended, the ice cream is delivered or melted, and the oil cracked or sent up in flames.

Testing critical or embedded software involves setting up test harnesses that simulate the operation of the factory, aircraft, or phone (as examples). The harness takes the system through many sequences of operation and verifies that it remains within its correct operational envelope. This is outside the scope of this book, beyond noting that simulation isolates the system just as we discussed isolating units in Chapter 2, "Unit Testing: Testing the Inside."
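
As a much-simplified illustration of the idea, the sketch below replaces the hardware with a simulated freezer and drives a controller through a day of operation, checking the operational envelope at every step. All of the types here are invented for this example:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// The simulated freezer replaces the real hardware, just as fakes
// replaced dependencies in Chapter 2. All numbers are illustrative.
public class SimulatedFreezer
{
    public double Temperature { get; private set; }
    private double doorLoss;

    public SimulatedFreezer(double startTemperature) { Temperature = startTemperature; }
    public void OpenDoor(int seconds) { doorLoss += seconds * 0.05; }
    public void AdvanceOneMinute()
    {
        Temperature += 0.02 + doorLoss;  // ambient leakage plus any door losses
        doorLoss = 0;
    }
    public void Cool() { Temperature -= 0.5; }  // invoked by the controller
}

public class FreezerController
{
    private readonly SimulatedFreezer freezer;
    public FreezerController(SimulatedFreezer freezer) { this.freezer = freezer; }
    public void Tick()
    {
        if (freezer.Temperature > -17.0) freezer.Cool();  // simple thermostat
    }
}

[TestClass]
public class FreezerControllerHarnessTests
{
    [TestMethod]
    public void TemperatureStaysInsideTheOperationalEnvelope()
    {
        var freezer = new SimulatedFreezer(startTemperature: -18.0);
        var controller = new FreezerController(freezer);

        // One simulated day in one-minute steps, opening the door every hour.
        for (int minute = 0; minute < 24 * 60; minute++)
        {
            if (minute % 60 == 0) freezer.OpenDoor(seconds: 30);
            freezer.AdvanceOneMinute();
            controller.Tick();

            Assert.IsTrue(freezer.Temperature <= -15.0,
                "Freezer left its envelope at minute " + minute);
        }
    }
}
```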

For less critical systems, and tools that depend heavily on the behavior and preferences of their users, the only real way to test the process surrounding your application is to observe it in operation.

Operational monitoring

There are two aspects to operational monitoring: monitoring your system to see if it behaves as it should; and monitoring the users to see what they do with it. What you want to monitor depends on the system. A benefit of bringing the testing approach into the requirements discussions is a greater awareness of the need to design monitoring into the system from the start.
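
For example, you can design tracing in from day one so that operations can turn diagnostic detail up or down in configuration without redeploying. A minimal sketch using the standard System.Diagnostics.TraceSource API; the source name and event IDs here are arbitrary:

```csharp
using System.Diagnostics;

public static class StoreTelemetry
{
    // One named, switchable trace source for the ordering pipeline.
    // Operations can raise or lower the level in the .config file.
    private static readonly TraceSource Source =
        new TraceSource("Fabrikam.Ordering", SourceLevels.Warning);

    public static void OrderPlaced(string flavor, int quantity)
    {
        // Only emitted when the source is turned up to Information or higher.
        Source.TraceEvent(TraceEventType.Information, 1001,
            "Order placed: {0} x {1}", quantity, flavor);
    }

    public static void PaymentDeclined(string reason)
    {
        // Warnings pass the default filter, so operators always see declines.
        Source.TraceEvent(TraceEventType.Warning, 2001,
            "Payment declined: {0}", reason);
    }
}
```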

If your system is a website on Internet Information Services (IIS), you can use the IntelliTrace feature of Visual Studio to log a trace of method calls and other events while it is running. With the default settings, the effect on performance is small, and you can use the trace to debug any problems that are seen in operation. To use this, download the standalone IntelliTrace Collector; you can then analyze the trace later in Visual Studio Ultimate.

Shortening the DevOps cycle

Testing allows you to go around the cycle more rapidly. Instead of consulting the customers, you can push a button and run the tests.

[Figure: Rapid DevOps cycle with test as a proxy for stakeholders]

Of course, tests are no real substitute for letting the clients try the software. But tests can do two things more effectively. Automated tests (and, to a lesser extent, scripted tests) can check very rapidly that nothing that used to work has stopped working. Manual tests—especially if performed by an experienced tester—can discover bugs that ordinary clients would take much longer to find, or which would be expensive if they appeared in live operation.

Feedback from tests and stakeholders reduces the risk that you are producing the wrong thing. To reduce the waste of going down wrong tracks, run tests as early as possible, and consult with your clients as frequently as possible.


Updating existing code

If you're updating a system that already exists, we hope there are tests for it and that they all pass. If not, you need to write some. Write tests around the boundary of the part that you want to change; that is, for the components that will change, write tests that pin down the behavior that must not change.

For example, if you're thinking of just replacing a single method with something that does the same job more efficiently, then write a unit test for the behavior that should be exhibited by both old and new methods. If, on the other hand, you're thinking of rewriting the whole system, write system tests for the system features that will not change.

The same approach applies whether you write coded tests or manual test steps.

Make sure the tests pass before you make the changes to the code. Then add tests for the new or improved behavior. After the updates to the code, both sets of tests should pass.
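
As an illustration, suppose you plan to replace a pricing method. You might pin its current behavior like this; PriceCalculator and its discount rule are invented for the sketch, and both the old and the new implementations must pass these tests:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// The legacy code we intend to replace; the class and its discount rule
// are invented for this sketch.
public static class PriceCalculator
{
    public static decimal Total(decimal unitPrice, int quantity)
    {
        decimal total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.9m : total;  // 10% bulk discount
    }
}

[TestClass]
public class PriceCalculatorPinningTests
{
    // Written BEFORE changing the code: these tests pin the behavior that
    // must not change, so old and new implementations must both pass.
    [TestMethod]
    public void BulkDiscountAppliesFromTenItems()
    {
        Assert.AreEqual(21.6m, PriceCalculator.Total(unitPrice: 2.00m, quantity: 12));
    }

    [TestMethod]
    public void NoDiscountBelowTenItems()
    {
        Assert.AreEqual(18.0m, PriceCalculator.Total(unitPrice: 2.00m, quantity: 9));
    }
}
```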

Testing in the development cycle

Methodologies vary, and your own project will be different, as we mentioned. But in any case, there is a role for testing in every phase of your project and throughout the lifetime of the application. Let's now take a look at those roles at different stages in a project's life.

Tests are executable requirements

No matter what methodology you follow, it's a fundamental truth that the system tests are the requirements—user stories, product backlog items, use cases, whatever you call them, rendered into executable form.

The requirements might change from day to day. Quite likely, they will change when you ask about some fine detail of a test you are writing.

Therefore:

  • Create a test suite from each requirement. Create test cases to cover a representative sample of cases. In each test case, write the manual test steps or the automated test code to exercise one particular scenario. Parameterize it to allow a variety of inputs (see the data-driven sketch after this list).

  • Cover both functional and quality-of-service (QoS) requirements. QoS requirements include security, performance, and robustness.

  • Include unstated requirements. Many of the QoS requirements are not explicitly discussed. Your job as a tester is to make sure these aspects are covered.

  • Discuss the test cases with the stakeholders. The relationship between requirements and tests is not one-directional. Your need to make precise tests helps to clarify the requirements, and feeds back into them.

  • Explore. Much of the system's behavior was not explicitly specified. Did the client actually state that they didn't want a picture of a whale displayed when the users buy an ice cream? No. Is it desirable behavior? Perhaps not. Perform exploratory testing to discover what the system does.

  • Automate important tests gradually. You need to repeat tests for later iterations, later releases, and after bug fixes or updates. By automating tests, you can perform them quickly as part of the daily build. In addition, they're repeatable—that is, they reliably produce the same results each time, unless something changes.

  • Don't delay testing. Test as soon as the feature is available. Plan to write features in small increments (of no more than a few days) so that the lag between development and testing is short.

  • System tests are the arbiter of "done." No one goes home until there are automatic or manual tests that cover all the requirements, and all the tests pass. This applies to each feature, to each cycle in your process, and to the whole project.

    ("No one goes home" is a metaphor. Do not actually lock your staff in; to do so may be contrary to fire regulations and other laws in your locality.)

Inception

At or near the inception of the project, sometimes called Sprint 0, you will typically need to:

  • Set up the test infrastructure. Build the machinery: the services, the service accounts, the permissions, source control, builds, labs, VM templates, and other headaches. See the Appendix, or (even better) talk to someone who has done it before.

  • Make sure you know what you're doing. Get the team together and agree on the practices you're going to follow. Start with what everyone knows, and add one practice at a time. If all else fails, make them read this book, and Guckenheimer & Loje as well.

  • Create or import and run tests for existing code. If you are updating existing code, make sure you have the existing manual or automated tests. If the code was developed using Visual Studio application lifecycle management processes, this will be easy.

  • Understand what the stakeholders want. Spend a lot of time with the clients and developers. Together with them, write down business goals and values; create slide shows of user scenarios; write down user stories; and draw business activity diagrams, models of business entities and relationships, and diagrams of interactions between the principal actors.

  • Understand the architecture. Spend time with the developers. Work out the principal components of the system, and the dependencies on external systems such as credit card authorities. Discuss how each component will be tested in isolation as it is developed. Understanding how the system is built tells you about its vulnerabilities. As a tester, you are looking for the loose connections and the soft bits.

  • Draft the plan. The product backlog is the list of user stories, in the order in which they will be implemented. Each item is a requirement for which you will create a test suite. At the start of the project, the backlog items are broad, and the ordering is approximate. They are refined as the project progresses.

    Product backlog items (PBIs) are named and described in terms that are meaningful to stakeholders of the project, such as users and owners. They are not described in implementation terms. "As a customer I can order an ice-cream" is good; "A customer can create a record in the order database" is bad.

    Each item also has a rough estimate of its cost in terms of time and other resources. Remind team members that this should include the time taken to write and run unit tests and system tests. (And while you're there, mention that any guidance documents that might be required should be factored into the cost as well, and that good technical writers don't come cheap. On the other hand, if user interfaces were always as good as they could be, help texts and their writers would arguably be redundant.)

Each sprint

A development plan is typically organized into iterations, usually called sprints even in teams where the Scrum methodology has not been adopted wholesale. Each sprint typically lasts a few weeks. At or near the beginning of each sprint, a set of requirements is picked from the top of the product backlog. Each item is discussed and clarified, and a collection of development tasks is created in Team Foundation Server. You create test suites for each requirement.

Early in the sprint, when the developers are all heads-down writing code, testers can:

  • Write test cases for system tests. Create test steps. If there are storyboard slide shows, write steps that work through these stories. Writing down the steps in advance is important: it avoids a bias towards what the system actually does.
  • Automate some of the manual tests from previous sprints. Automate and extend the important tests that you'll want to repeat. These tests are used to make sure features that worked before haven't been broken by recent changes to the code. Add these tests to the daily builds.
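
As a sketch of what such an automated test can look like, here is a hand-written coded UI test in the style of one generated from an action recording. The URL and the HTML control IDs are placeholders; in practice you would usually start from the recorded UIMap rather than writing the lookups by hand:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class OrderPageTests
{
    [TestMethod]
    public void CustomerCanSubmitAnOrderFromTheBrowser()
    {
        // Placeholder URL for the web application under test.
        BrowserWindow browser = BrowserWindow.Launch(new Uri("http://localhost/fabrikam"));

        // Locate the flavor box by its (hypothetical) HTML id and type into it.
        var flavorBox = new HtmlEdit(browser);
        flavorBox.SearchProperties[HtmlControl.PropertyNames.Id] = "flavor";
        Keyboard.SendKeys(flavorBox, "Pistachio");

        var orderButton = new HtmlButton(browser);
        orderButton.SearchProperties[HtmlControl.PropertyNames.Id] = "placeOrder";
        Mouse.Click(orderButton);

        // The check a recorded test would capture as a validation step.
        var confirmation = new HtmlDiv(browser);
        confirmation.SearchProperties[HtmlControl.PropertyNames.Id] = "confirmation";
        Assert.IsTrue(confirmation.InnerText.Contains("Thank you"));
    }
}
```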

When the developers check in working features:

  • Perform exploratory tests of the new features. Exploratory testing is vital to get a feel for how the product works, and to find unexpected behaviors. From exploratory tests, you will usually decide to write some new test cases.
  • Perform the manual scripted test cases that you planned for the new requirements.
  • Log bugs. (Anyone can log bugs—testers, developers, product owners, and even technical writers.)
  • A bug should initially be considered part of what has to be done for the current iteration. If a bug turns out to need substantial work, discuss it with the team and create a product backlog item to fix it. Then you can put that into the usual process of assigning PBIs to iterations.

Towards the end of the sprint:

  • Run important manual tests from previous sprints that have not yet been automated, to make sure those features are still working.
  • Start automating the most important manual tests from this sprint.

At the end of the sprint:

  • System testing is the primary arbiter of "done" for the sprint. The sprint isn't complete until the test cases assigned to that sprint all pass.

[Figure: Test activities within sprints]

Using plans and suites

A test plan represents a combination of test suites, test environment, and test configuration.

Create a new test plan for each iteration of your project. You can copy test suites from one to another.

If you want to run the same tests on different configurations, such as different web browsers, create a test plan for each configuration. Two or more plans can share tests, and again you can copy suites from one plan to another, changing only the configuration of the test environment in the Test Plan's properties.

[Figure: Test plans and suites within sprints]

You can, in essence, branch your test plan by performing a clone operation on your test suites. The cloning operation allows your team to work on two different releases simultaneously.
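
If you manage many plans and suites, you can also script these operations. The following is a rough sketch using the Team Foundation Server test management client API (Microsoft.TeamFoundation.TestManagement.Client); the server URL and names are placeholders, and member details may vary between TFS versions:

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class CreatePlanPerConfiguration
{
    static void Main()
    {
        // Placeholder collection URL and project name.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        ITestManagementTeamProject project =
            collection.GetService<ITestManagementService>().GetTeamProject("Fabrikam");

        // One plan per configuration; suites can then be copied between plans.
        ITestPlan plan = project.TestPlans.Create();
        plan.Name = "Sprint 3 - Chrome";
        plan.Save();

        IStaticTestSuite suite = project.TestSuites.CreateStatic();
        suite.Title = "Weekly order stories";
        plan.RootSuite.Entries.Add(suite);
        plan.Save();
    }
}
```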

Reports

The project website provides a number of graphical reports that are relevant to testing. The choice of reports available, and where you find them, depends on the Team Foundation Server project template that your team uses. See Process Guidance and Process Templates and its children on MSDN.

Test plan progress report

This graph summarizes the results of tests in a chosen test plan, over time. The graph relates to one iteration.

Worry if the green section is not increasing. It should be all green towards the end of the iteration.

The total number of test cases is represented by the total height of the graph. If your practice is to write a lot of test cases in advance, you should see a sharp rise followed by a relatively flat section.

[Figure: Test plan progress report (number of test points)]

User story test status

The user story test status report is a list of requirements that are scheduled to be implemented in the current iteration, showing the results of the tests linked to each requirement.

Worry if one of the requirements shows an unusually short line. The requirements should be roughly balanced in terms of how many tests they have—if not, break up the larger ones.

Worry if there is a large red section. By the time the requirements are manually tested, they should be mostly passing.

Worry if the chart isn't mostly green towards the end of the iteration.

[Figure: User story test status]

Test case readiness

When you plan a test case, you can set a flag that says whether it is Planned or Ready. The purpose of this is simply to make it easy to see what stage the tests are in, if the practice on your team is to work out test case steps in advance.

At the start of an iteration, you create test cases from requirements, just with their titles. Then you spend some time working out the steps of the test cases. When the steps are worked out and the code is ready, the tests can be run. This chart shows the state of the test cases for the iteration.

On the other hand, if your practice is to generate most test cases from exploratory tests after the code is ready, you might find this chart less useful.

[Figure: Test case readiness]

Process improvements

To summarize the points we've discussed, here are some of the improvements Contoso could consider making to their development process. Any change should be made one step at a time, and assessed carefully before and after.

  • When you are discussing scenarios, user stories, or other requirements, think of them sometimes in terms of how you will test them. Tests are executable requirements. Discuss test cases with stakeholders as one way of clarifying the requirements.
  • Think about validating not only the system but the business process around it. How will you verify that the user really did get and enjoy the ice cream? How will you use the feedback you get?
  • Create requirement and test case work items in your team project, and write out the steps of some key test cases. This provides a clear objective for the development work.
  • Write unit tests and aim for code coverage of 70-80%. Try to write at least some of the tests before the application code. Use the tests as a means to think about and discuss what the code should do.
  • Use fakes to isolate units so that you can test them even if units they depend on aren't yet complete (a minimal example follows this list).
  • Set up continuous build-test runs, and switch on email alerts for failing tests. Fix build breaks as soon as they occur. Consider using gated check-ins, which keep the main source free of breaks.
  • Plan application development as an iterative process, so that you get basic demonstrable end-to-end functionality early on, and add demonstrable functionality as time goes on. This substantially reduces the risks of developing something that isn't what the users need, and of getting the architecture wrong. Unit tests make it safe to revisit code.
  • Set up virtual labs to perform system tests.
  • Run each test case as soon as you can. Whenever a build becomes available, perform exploratory testing to find bugs, and to define additional test cases.
  • Record your actions when you run manual test cases. In future runs, replay the steps. This allows you to run regression tests much more quickly.
  • Use the one-click bug report feature of MTM with environment snapshots, to make bugs easier to reproduce.
  • Automate key manual tests and incorporate them into fully automated builds. This provides much more confidence in the system's integrity as you develop it.
  • If your team doesn't distinguish the roles of test and development, consider identifying the people who are good at finding bugs, and get them to focus for some of the time on exploratory testing. Ask them also to think about test planning.
  • In teams that do separate the developers and testers, consider mixing them up a bit. Involve test leads in the requirements process, and bring developers in to help automate tests.
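
To illustrate the point about fakes made earlier in the list, here is a minimal hand-rolled fake that isolates checkout logic from a real payment provider. All of the types are invented for the sketch; Chapter 2 covers Visual Studio's generated stubs and shims:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// The dependency the unit under test would normally reach over the network.
public interface IPaymentProvider
{
    bool Charge(decimal amount);
}

// Hand-rolled fake: always approves, and records what it was asked to do.
public class AlwaysApprovesPayments : IPaymentProvider
{
    public decimal LastAmount;
    public bool Charge(decimal amount) { LastAmount = amount; return true; }
}

public class CheckoutService
{
    private readonly IPaymentProvider payments;
    public CheckoutService(IPaymentProvider payments) { this.payments = payments; }
    public bool Checkout(decimal total)
    {
        return total > 0 && payments.Charge(total);
    }
}

[TestClass]
public class CheckoutServiceTests
{
    [TestMethod]
    public void CheckoutChargesTheFullAmount()
    {
        var fake = new AlwaysApprovesPayments();
        var service = new CheckoutService(fake);

        Assert.IsTrue(service.Checkout(6.50m));
        Assert.AreEqual(6.50m, fake.LastAmount);
    }
}
```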

[Figure: Thinking of requirements in terms of tests helps to make the requirements more exact]

What do we get by testing this way?

Faster response to requirements changes and bug reports.

  • Because you have automated many regression tests and can quickly replay others, testing is no longer an obstacle to releasing an update. Manual testing is only required for the new or fixed feature. With virtual lab environments, you can begin testing very quickly. So let's fix that back-button bug, run the regression tests overnight, and release in the morning.
  • Stakeholders can experiment and tune their systems. For example, the ice cream vendors need not speculate about whether customers would like to advertise their favorite flavors on a social networking site; they can try the idea and see whether it works.

Reduced costs through less waste.

  • Unit tests and automated system tests make it acceptable to revisit existing code, because you can be confident that you will find any accidental disturbances of the functions you already have working. This means that instead of finishing each piece of code one at a time, you can develop a very basic working version of your system at an early stage. If your architecture doesn't work as well as you'd hoped, or if the users don't like it as well as they'd thought, you avoid spending time working on the wrong thing: either you can adjust the project's direction or, at worst, cancel at an early stage.
  • Fewer "no repro" bugs. The action recording feature of Microsoft Test Manager automatically logs the steps you took in your bug report. Fewer arguments among team members.

Happier customers.

  • By getting regular feedback, you allow your users to try out the system and tune both their process and what they want of the system. The end product is more likely to fit their needs.
  • When your stakeholders ask for something different, or when they report a bug, you can improve or fix your system quickly.


What to do next

We hope you've got some ideas from this book that will help you and your team develop software that satisfies your clients better and faster and with less pain to you. Maybe you're like Fabrikam and already have these practices and tools in daily use; maybe you feel your company is more like Contoso and you feel there are a number of improvements that can be made. Most likely, you're like us: as we look around our company, we can see some parts that have very sophisticated testing and release practices, and other parts that are still thinking about it.

Organizations learn and change more slowly than individuals. Partly this is because they need time to achieve consensus. Each person's vision of the way forward might be good and effective if implemented across the team, but chaos comes from everybody following their own path. Partly it's because, with substantial investments at stake, it's important to take one step at a time and assess the results of each step.

You must not only have a vision of where you want to get to, but also form an incremental plan for getting there, in which each step provides value in its own right.

In this book we've tried to address that need. The plan of the book suggests a possible route through the process of taking up the Visual Studio testing tools, and the adaptations in your process that you can achieve along the way.

In this book, we've advocated using the Visual Studio tools to automate your testing process more, and we've shown different ways you can do that. That should make it possible to do regression tests more effectively and repeatably, while freeing up people to do more exploratory testing. We've also recommended you bring testing forward in the process as far as you can to reduce risk, and to make defining tests part of determining what the stakeholders need. In consequence, you should be able to adopt a process that is more responsive to change and satisfies your clients better.

But whatever you do, take one step at a time! The risks inherent in change, and the fear of change, are mitigated by assessing each change as you go.

[Bug Bash cartoon by Hans Bjordahl]

The above cartoon is reproduced with permission from the creator, Hans Bjordahl. See http://www.bugbash.net/ for more Bug Bash cartoons.

Where to go for more information

There are a number of resources listed in the text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links, so that these resources are just a click away.
