

5: Automating System Tests


Manual testing is the best way to find bugs in new code. But the trouble with manual tests is that they are slow, which makes them expensive to rerun. Even with the neat record/playback feature, someone has to sit at the screen and verify the results.

To make matters worse, as time goes on, you accumulate more functionality, so there are more features to test. Your testing resources are limited, so naturally you test the features that have been developed most recently. A graph over time, with test cases listed on the vertical axis, would look like this:

[Figure: Tests over time]

If we follow a particular test case along, it comes into use when the corresponding backlog item is implemented, and maybe gets repeated a few times until the feature is right. The interesting question is, what happens to it after that? Does it just fall into disuse? Or does it get mothballed until just before the big release?

That might have been acceptable for very traditional project plans where they developed each component and then locked it down while moving on to others. But in the past few decades we've discovered that the best way to reduce the risk of a failed project is to plan by user-visible features so that we can get feedback from our clients as we go along. And in any case, even the most traditional projects get revised requirements sooner or later. The downside is that the development of each new user story (or change request) tends to revisit many parts of the code, so that already-developed features might be accidentally disturbed. So we must keep retesting.

Now, we've already said that one of the prime purposes of unit testing is to guard against regressions—to keep the code stable while we work on it. But unit tests aren't quite like system tests. They run on the build server and are not linked to requirements.

This is what automated system tests are for. Like unit tests, they run entirely automatically so that you can run them as often as you like. But they are test cases and can be linked to requirements, so you can always see an up-to-date chart on the project website representing which requirements have all their tests passing. And system tests run on a lab environment, to mimic operational conditions more closely, especially for distributed systems.

This means that on the graph, we can fill that triangle after the manual band with automated testing. The typical test case starts life as a manual test, and then is automated. Okay, you might not automate every test case; but if you can automate a good sample of smoke tests, you will substantially increase your confidence that all the stories that worked yesterday still work today.


Who creates automated tests, and when?

Some teams have separate developers and testers. An issue that might concern them is that automated tests obviously involve coding, and not all their testers write much code. If we're suggesting automating the system tests, does that mean retraining all the test staff?

Firstly, let's reiterate that manual tests are the most effective way to find bugs in new code. Automated tests are, on the whole, for regression testing. An experienced tester can nose out a bug much more effectively than test code that simply exercises a predetermined scenario. (Yes, there are techniques like fuzz testing and model-based testing, but if you have the people and tools to do that, you probably don't have many non-coding testers.) So we'll always need manual testing.

Next point: We're about to look at tools that make it very easy to automate a manual test. The learning curve has a shallow end, and the slope is not steep, so you needn't get out of your depth. For the more advanced tests, testers and developers should work together. The tester says what needs to be verified and where.

Lastly, we do recommend that, if your test and development teams are separate, you should mix them up a bit. We have heard of startling improvements in test performance just by introducing one developer into a test team. Similarly, developers can learn a lot by having a tester close at hand. Developers tend to take a kindly view of their code, and don't like to see it badly treated. The tester's attitude to vulnerabilities can have the effect of toughening them up a bit.

Many companies make less distinction between testers and developers than they used to, and some make no formal distinction at all. A team naturally has different strengths, and you'll have some people who are better at testing and some who are better at creating new features. But one skill informs the other, and a mixture of skills is beneficial, especially when it occurs within the same person. Indeed, the best test engineers are expert developers: they understand the architecture and its vulnerabilities. In Microsoft, the job of software development engineer in test (SDET) is well respected, and denotes someone who doesn't simply code, but can also devise sophisticated harnesses that shake the software to its roots.

How to automate a system test

There are several aspects of automating a system test that are, to some extent, independent.

  • Code the test method. There are two different ways to create the test:
    • Coded UI test (CUIT). You record your actions as you work through a test manually. Then you use the CUIT tools to turn the recording into code.
    • Write an integration test manually. You write it exactly as you would a unit test, but instead of aiming to isolate a small piece of the system code, you exercise a complete feature. Normally in this method, you drive the layer just below the user interface.
  • Link the test method to a test case and thereby to requirements, enabling the result of the test to contribute to the charts of requirements status on the project website. You can do this from the associated automation page of the test case.
  • Set up a test plan for automated tests. You specify what lab environment to use, which machine to run the tests on, and what test data to collect; you can also set timeouts. If you want to run the same set of tests on different configurations of machines, you can set up different test plans referencing different labs.
  • Define the build workflow. By contrast with unit tests, which typically run on the build server, a system test runs on a lab environment. To make it run there, we have to set up a workflow definition and some simple scripts.

When you're familiar with all the necessary parts, you might automate a test in the following order:

  1. Create a lab environment. For automated tests, this is typically a network-isolated environment, which allows more than one instance of the environment to exist at the same time.
  2. Set up a build-deploy-test workflow.
  3. Create the code for the tests. You might do this by running the test manually on the lab environment, and then converting it into code.

However, to make things simpler, we'll start by coding the tests, without thinking about the lab environment. Let's assume that we're coding the tests either for a desktop application, or for a web site that is already deployed somewhere. You can therefore do this coding on any convenient computer.

Code the test method

Prerequisites

On a suitable computer, which can be a virtual lab machine, you must have:

  • Visual Studio Test Professional, Premium, or Ultimate in order to run Microsoft Test Manager (MTM).
  • Visual Studio Premium or Visual Studio Ultimate in order to convert tests into code.
  • The client application of the system under test. If you are testing a website, this will just be a web browser.

MTM and Visual Studio can be on different machines, but in that case the client application must be installed on both of them.

Unless the application is stand-alone, the server must be installed somewhere, such as a lab environment.

Generate the coded UI test

You can generate test code from a recording that was made during a manual test case. You can't automate an exploratory test, but you can create a test case with just one step, and record any actions you want in that step.

  1. Run the manual test, choosing to create an action recording. Save the test run.

    Play back the recording to make sure it works.

  2. In Visual Studio, create a Coded UI Test project in a separate solution from the system under test. You'll find the template under Visual Basic\Test or Visual C#\Test. In the dialog box, choose Use existing action recording.

    Or, if you already have a CUIT project, open CodedUITest*.cs, right-click anywhere in the code, and choose Generate Code for Coded UI Test, Use existing action recording.

    [Figure: Generating a coded UI test]

  3. Select the test case in the work item picker.

    New files and code are added to the project. In CodedUITest*.cs you will find code representing the steps of your test:

    [DataSource(...)]
    [TestMethod]
    public void CodedUITestMethod1()
    {
       this.UIMap.LaunchUI();
       this.UIMap.ClickEditOrderfromleftMenu();
       this.UIMap.EnterValidOrderIDOrderId();
       this.UIMap.ClickGetOrderbutton();
       this.UIMap.EditQuantityfrom2to1();
       this.UIMap.ClickUpdateButton();
    }
    

    Code for the individual steps has been added to UIMap.uitest.

  4. On the Unit Test menu, choose Run Unit Test, All Unit Tests.

    Note

    Do not touch the mouse or keyboard while the coded UI test is running. Allow a minute for it to start.

The test runs just as it did when you played it back in MTM.

Edit the steps

Tip

Use the UI builder tools to edit your coded UI test where possible. You can use the UIMap Builder to insert new material. Only the top level steps have to be edited in the actual source code. More sophisticated adaptations will require coding, but the basic steps can be created using the tools.

To rearrange or delete major steps, edit the content of CodedUITest*. These steps correspond to the steps of the test case.

To insert more major steps, place the cursor anywhere between one statement and the next, and on the shortcut menu, choose Generate Code for Coded UI Test, Use coded UI builder. The UIMap builder appears at the bottom right of your screen:

[Figure: UIMap builder]

Start your application and get it to the state you want before your new actions. Then press the record button (the one at the left of UIMap builder) and perform your new actions. When you're finished, press Pause, and then Generate Code (the button at the right). Close the UIMap builder, and the new code will appear in Visual Studio where you left the cursor.

To edit the actions inside any step, open UIMap.uitest. This file opens in a specialized editor, where you can delete or insert actions. These are the methods called from the main procedure. Notice that they are in no particular order.

[Figure: Edit actions]

Note

Do not directly edit the code in UIMap.Designer.cs—your edits will be overwritten. Instead, use the CUIT Editor by opening UIMap.uitest; or write your methods in UIMap.cs, which will not be overwritten. There is a menu item for moving code into UIMap.cs.

To make a step optional—that is, to ignore the error that would normally be raised if the UI element cannot be found—select the step in UIMap.uitest, and then set the Continue on error property.

Validate values

In CodedUITest*.cs, place the cursor at the point where you want to verify a value. On the shortcut menu, choose Generate Code for Coded UI Test, Use coded UI builder.

[Figure: Validating test results]

Open the application under test.

Drag from the crosshairs in the CUIT Builder to any field in the application window. The CUIT Properties window opens, where you can choose a property—typically Text. Choose Add Assertion and specify the value that it should take. Then in CUIT Builder choose Generate Code. A method is added to UIMap, and a call to it is added to your test code.

Notice also that the AutomationId property has been captured; it can be used to search for the UI element in the CUIT.
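The generated assertion method lands in UIMap.Designer.cs and looks roughly like the following sketch. The control property names and the expected value here are illustrative; the tool generates names from the controls you actually picked:

public void AssertOrderQuantity()
{
    // The control was captured by the CUIT Builder; these generated
    // property names are assumptions for illustration.
    HtmlEdit quantityEdit = this.UIFabrikamWindow.UIOrderDocument.UIQuantityEdit;

    // Verify that the field shows the expected value.
    Assert.AreEqual("1", quantityEdit.Text, "Quantity was not updated");
}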

Tip

Don't choose any of the Search properties.

Notice that you don't need to be very skilled in development to create a basic test that includes value validations.

Data-driven tests

You can make a test loop multiple times with different data. The easiest way to do this is to insert parameters in your test case before you record a run. Parameters are the feature of manual tests in which you can write a test step such as "Select a @flavor ice cream of size @size." While editing the test case, you fill in a table of values for your parameters, and when you run the test, Test Runner takes you through the test multiple times, once for each row of parameter values.

When you generate code from a parameterized test case, the code includes the parameter names.

You can later change the values in the parameter table. When you play back the actions, or when you run the code as an automated test, the new values will be used.

Note

Before you record the actions for a parameterized test, just provide one row of parameter values so that you will only have to work through the test once. When you have completed the manual run, generate code from it. Then write more values in the parameter table for the automated test.

If you would rather supply the test data in a spreadsheet, XML file, or database, you can edit the DataSource attribute that appears in front of your test. It is initially set to gather data from your parameter list:

[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", 
            "http://g4002-fabtfs:8080/tfs/defaultcollection;Commercial Stores", "12",
            DataAccessMethod.Sequential)]
[TestMethod]
public void CodedUITestMethod1()
{...

However, you can change it to use any other data source that you choose. For example, this attribute gets data from a comma-separated value (CSV) file:

[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\data.csv", "data#csv", 
            DataAccessMethod.Sequential), DeploymentItem("data.csv")]
[TestMethod]

The first line of the CSV file should be the comma-separated names of the parameters. Each following line should be comma-separated values for each parameter. Add the file to your project, and in Solution Explorer set its Copy to Output Directory property to Copy always.
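Within the test method, each iteration reads the current row through the test's TestContext. Here is a minimal sketch reusing the recorded step from earlier; the generated Params class and its property name are assumptions about the generated code:

[TestMethod]
public void CodedUITestMethod1()
{
    // Feed the current row's value into the recorded step before replaying it.
    // "OrderID" is the parameter name from the test case; the Params class
    // and its property are generated from your recorded actions.
    this.UIMap.EnterValidOrderIDOrderIdParams.UIOrderIdEditText =
        TestContext.DataRow["OrderID"].ToString();
    this.UIMap.EnterValidOrderIDOrderId();
}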

Further extensions

CUITs provide a very easy entry point for creating automated tests. However, there are several ways in which you can write code to adapt and extend the basic recording. You also have to write code to handle more complex user interfaces.

If you add methods to the project, write them in separate files. Don't modify UIMap.Designer.cs, because this file is regenerated when you use the UI builder, and your changes will be lost. If you want to edit a method that you initially created using the UI builder, move the method to UIMap.cs and edit it there.

Here are some of the things you can do with hand-written code. The details are on MSDN under Testing the User Interface:

  • Data-driven tests. Loop through your test multiple times with data taken from a spreadsheet or database.
  • Write code to drive the interface and keyboard. The coded UI testing API provides easy-to-use functions such as Mouse.Click(), Keyboard.SendKeys(), and GetProperty(). (See the sketch after this list.)
  • Wait for events such as controls appearing or taking on specified values. This is useful if the application has background threads that can update the user interface.
  • Extend the CUIT recorder to interpret gestures for new UI elements.
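For instance, a hand-written step that waits for a control and then drives it might look like this minimal sketch; the window title and automation ID are assumptions for illustration:

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WpfControls;

public void EnterOrderIdByHand()
{
    // Locate the main window by its title (an assumed value).
    WpfWindow mainWindow = new WpfWindow();
    mainWindow.SearchProperties[WpfWindow.PropertyNames.Name] = "Fabrikam Ice Cream";

    // Locate the order ID field by its automation ID (an assumed value).
    WpfEdit orderIdBox = new WpfEdit(mainWindow);
    orderIdBox.SearchProperties[WpfEdit.PropertyNames.AutomationId] = "OrderIdField";

    // Wait for a background thread to make the control available, then drive it.
    orderIdBox.WaitForControlExist(5000);
    Mouse.Click(orderIdBox);
    Keyboard.SendKeys("12");
}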


Using coded UI tests

Tips for using CUITs:

  • Don't touch the keyboard or mouse while a CUIT is playing.
  • Use the CUIT Builder and Coded UI Test editor, rather than updating the code directly.
  • Create separate methods for separate dialogs.
  • Use the Split function in the CUIT editor to refactor a long method into two.

Note

For CUITs to work effectively and to be robust against changes, each UI element has to have a unique Automation ID. In HTML, each interaction element must have an ID. Then, you can change the layout and appearance of the application and the CUITs will still work.

Coding integration tests by hand

An alternative way to automate an existing manual test is to write by hand some code that works through the same sequence as the manual steps. You can then associate it with the test case.

Typically, you would write such a test as an integration test. Instead of driving the user interface, your test would use the business logic that is normally driven by the UI.

You create these tests exactly as you would a unit test. If there is a public API to the business logic, create the test in a solution separate from the system under test.
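For example, an integration test for the order-editing story might look like the following sketch. OrderService and its members are hypothetical stand-ins for your own business logic API:

[TestMethod]
public void UpdateOrder_ReducesQuantityFrom2To1()
{
    // Drive the business logic layer directly, below the UI.
    var service = new OrderService();
    Order order = service.GetOrder(12);

    service.UpdateQuantity(order, 1);

    Assert.AreEqual(1, service.GetOrder(12).Quantity);
}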

Advantages and disadvantages of integration tests are that:

  • They work in cases where you can't record a CUIT.
  • They are more robust than CUITs against changes in the UI.
  • They require a clean separation between business logic and UI, which is good.
  • They don't test any validation or constraint logic that the UI might contain, and which gives the user immediate feedback.
    You could therefore miss a bug that results from the UI allowing an input that the business logic doesn't expect. This is perhaps the strongest reason for preferring CUITs where they are feasible.
  • It takes longer to code a test than to record the actions of a CUIT. However, since you're likely to want to do some work on the code of the CUIT to generalize it, you might find that this difference becomes less significant.
  • There is no guarantee that the method you write tests the same thing as the corresponding manual test steps (if there are any). You could create a test case that fails when performed manually and passes when executed automatically.

Link the test method to the test case

Now that you have a working test method, you can promote it to be the test method that automates a test case. If you derived the test method from the steps of a recorded test case, then that is the test case to link to.

Test cases are usually linked to requirements. Linking a test method to a test case allows the results of the test to contribute to the report of test results for each requirement.

  1. Check in the test method.

  2. Link the test case with the test method:

    Open the test case in Team Explorer. On the Associated Automation tab, choose the ellipsis button and select the test method. The Automation status will change to Automated.


  3. In Microsoft Test Manager, make sure that the test case appears in a test suite, such as a requirement-based test suite.

To create linked test cases for a batch of test methods, use the tcm.exe command-line tool, which is documented on MSDN.
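For example, a command of the following form imports the test methods in an assembly as test cases. The collection URL and team project are the ones used earlier in this chapter; the assembly name is a placeholder:

tcm testcase /import /collection:http://g4002-fabtfs:8080/tfs/defaultcollection /teamproject:"Commercial Stores" /storage:OrderTests.dll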

Create an environment for automated tests

When you set up a build workflow, you will specify a particular lab environment on which it will run.

We discussed how to set up a lab environment in Chapter 3, "Lab Environments," but there are some specific points you need to know for automated tests.

  • Use an SCVMM environment so that you can take a snapshot and revert to it before starting each test.
  • You need a machine for each component of your system, including the client machines. For example, the Fabrikam Ice Cream website has: a client machine with a web browser; a machine for the web server on which Windows 2008 is installed with Internet Information Services (IIS) enabled; a database server; and a client machine for order fulfillment.
  • In the New Environment wizard, on the Advanced tab, check Configure environment to run UI Tests and choose the client machine. This makes the test agent run as a desktop application. Before tests start, it logs into the machine with the credentials you supply.
  • Take a snapshot of the environment and note its name; in the build definition, you will specify that the tests should start by reverting to this snapshot.
  • You have to designate this environment as the one on which your test plan will run. Select the environment in Test Plan > Properties > Automated Test Settings.

Network isolated environments

If you will be running several tests of the same type, it is worth a bit of extra effort to set up a network isolated environment (NIE). An NIE allows more than one copy of the environment to be deployed from the library at the same time, without naming conflicts. The NIE has its own internal private network: the machines can see each other, but they cannot be seen from the external network.

Another advantage is that pre-installed software is not affected by deployment, because the machine identities do not change when a copy of the environment is deployed. This avoids having to reconfigure software before using a copy of the environment.

Lastly, bugs can be easier to trace because the machine names are always the same.

You have to do a little bit of extra work to set up network isolation. We'll summarize here, and you can find more details in the MSDN topic How to: Create and Use a Network Isolated Environment.

The additional points to note are:

  • You need an additional server machine that you configure as a domain name server. Log into it and enable the Domain Controller role of Windows Server.
  • Set the Enable Network Isolation option in the environment wizard.
  • When you have started the environment, log into each machine and join each machine to the private domain provided by the domain controller.
  • Stop the environment and store it in the library.

Set a test plan to perform automated tests

Set up one or more test plans to execute the tests. The test plan determines which lab environment will be used.

In Microsoft Test Manager, choose Plan, Properties, and then under Automated runs, choose a test environment.

[Figure: The test settings selection within the test plan properties]

You can usually leave Test Settings at Default. You can use different settings to: filter the list of environments to those with a specific set of roles; vary the data collected from each machine; have additional data and scripts downloaded to the test machines before the tests begin; set timeouts.

Automated deployment to a lab environment

It's good to install your system the same way the users will, to make sure their experience is satisfactory. If your system has different components on different machines, then you'll typically ask them to log in to each machine and run an installer there.

To get your installation fully automated, you need to get two things working properly: the creation of the setup files when the source code is built and the running of the setup file on the target machine in the lab environment.

When you set up a build-deploy-test workflow, the test controller and the test agents on the lab machines do part of the setup task for you: on each lab machine, the agent runs a script that you specify. Your script then has to do whatever is necessary to copy the setup files onto that machine and run the setup there.

In Visual Studio 2012, there are several deployment mechanisms. The principal ones are:

  • Windows Installer. To create the familiar setup and .msi files, add a setup project to your solution. The project template is under Other Project Types. In the project properties, specify which of the solution's projects and files you want to include in the installation. When you build the project—either on the development machine or in the build server—the setup files are created in the project's output folder.

    If your system has components that are installed on different machines, then you will create one setup project for each component. Each installer can include assemblies built in more than one project.

  • ClickOnce deployment makes it particularly easy to deploy updates to a client or desktop application. You click one button in Visual Studio to deploy it to a specified public or intranet location. Every time the user starts the application, it looks in that location for updates.

    ClickOnce applications have some limitations. There are additional security checks to allow them to access files. The dialogs in the update mechanism make ClickOnce more appropriate for user-facing software than for services. Your application has to be defined in a single project.

  • Website publication allows you to upload a website to a web server by using an agent that you can add into IIS. You can also upload a database at the same time.

See the MSDN topic Choosing a Deployment Strategy for a detailed comparison.

Let's consider how to configure an automated build that uses the first two mechanisms. (Websites can be installed using a setup project, so we won't consider website publication.)

Automating deployment with Windows Installer

The lab build workflow

[Figure: Lab deployment using Windows Installer]

The diagram shows the flow of code from the developer's machine to the lab machines in the lab environment.

The solution includes a setup project, which generates an .msi file and setup.exe. The setup project is checked in along with the test code and application code. Any build on the build server creates the same files in the same relative locations, but in a drop folder.

The lab build then goes on to install the assemblies on the test machines and runs the tests there. This is a step beyond the kind of build we met in Chapter 2, "Unit Testing: Testing the Inside," where the tests usually ran on the build server.

Writing a deployment script

When you define the lab build, you will specify scripts that deploy the installers on the lab machines. You need a separate script for each lab machine. It's best to write and debug the scripts before you create the build definition.

Here is a typical script:

rem %1 is the build location – for example \\buildMachine\drops\latest.
mkdir c:\setup
rem Replace OrderFullfillmentSetup for each test machine:
xcopy /E /Q %1\OrderFullfillmentSetup\*.* c:\setup
cd c:\setup
rem Install quietly:
setup.exe /S /v/qn

The script expects one parameter, which is the UNC path of the drop folder; that is, the location on the build server where the setup file will be deposited. This particular script will get the setup files from the OrderFullfillmentSetup project; each test machine should look in the subfolder for its corresponding setup project.

The parameters to setup.exe on the last line encourage it not to pop up dialogs that would prevent the installation from continuing.

Test your deployment script. Run a server build manually, and then log in to one of your test machines. Copy the script there, and call it, providing the build drop location as a parameter.
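For example, if the script were saved as DeployWebServer.cmd (the name is a placeholder), the manual test call would look like this, using the drop path from the script's comment:

DeployWebServer.cmd \\buildMachine\drops\latest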

Don't forget you need a separate script for each machine.

Place your deployment scripts in your Visual Studio solution. Create a new project called Deploy. Add the deployment scripts to this project as text files. Make sure that each file has the Copy To Output Directory property set to Copy Always.

Lab and server builds

To compile and run checked-in code, you create a build definition. You can see your project's build definitions under the Builds node in Team Explorer. There you can run a build manually, and open a report of the results of its most recent run.

There are two main kinds of build. Build definitions are created from build definition templates, and there are two templates provided out of the box. The template that you get by default is helpfully named the Default template, and when you define a build with it, the tests usually run on the build server. We will call this kind a server build:

[Figure: Server build runs tests on the build machine]

The other kind of build is created from the Lab Default template. In this kind of build, the build server is used only to compile the source code. If the compilation is successful, the components are then installed and run on machines in a lab environment. Deployment on each machine is controlled by scripts that you write. This enables the complete build-deploy-test scenario.

[Figure: Lab build definition runs tests on lab machines]

Builds defined with the Lab Default template are more like workflows. They do the following:

  1. Build the code for the application, tests, and setup on the build server. To do this, the lab build invokes a server build definition.
  2. Execute a script of your choice on each test machine in the lab. You would normally provide a script that copies the setup files to the test machine and executes them there. (But you can provide a script that does anything.)
  3. Invoke tests on the lab machine that you designate as the test client. They run in the same way as unit tests.

What's in a build definition?

Lab and server build definitions are both build workflows. They have several properties in common, such as the trigger event that starts the build. To define a lab definition, you also specify:

  • The lab environment on which to deploy the system. Typically you would specify a virtual environment, so that if a test fails, a snapshot of the environment can be saved.
  • Deployment scripts that perform the deployment. You have to write these scripts. We'll show you how shortly.
  • Test settings, which specify what data to collect from each machine. These override the test settings defined in the properties of the test plan.
  • A server build, which is used to determine what source code to compile. This means that you have to define a server build before you define a lab build.
  • Test suites, which contain the test cases that will be run. Test results will be reported in terms of test cases and requirements. (By contrast, server builds specify simply the assemblies from which test methods are to be executed, and cannot be related to requirements. The test assemblies specified in the server build definition are ignored by the lab build.)

[Figure: Lab build definition uses a server build definition]

Identifying a server build definition

A lab build uses a server build to do its compilation; so before you define a lab build, you must first have a server build defined.

Now you've almost certainly already defined a server build, because you used it to run unit tests on the build server. In fact, you can use one of those definitions. Any server build is suitable, so long as it compiles all the source of the application and tests.

It doesn't matter what unit tests it runs, because they will not be run when it is pressed into service in the lab build. Neither does it matter what its trigger is, because the lab build will start it.

Referencing a server build from a lab build will not prevent it from running according to its own trigger.

But if you would prefer to define a separate server build to be part of the lab build, set its trigger to manual. Run it to make sure that it builds correctly. Refer back to Chapter 2, "Unit Testing: Testing the Inside," for the details.

Creating a lab build-deploy-test definition

Here are the steps you use to create a lab build definition.

  1. In Team Explorer, create a new build definition. Actually, it will turn out to be a workflow, rather than a simple build definition.

    Select the trigger condition, such as Continuous Integration.

    Select LabDefaultTemplate. This means we're creating a lab definition.

    Click Lab Process Settings to open the Lab Workflow properties wizard.


  2. In the lab process settings wizard:

    In the Environment page, specify that you want the build to start by reverting to your baseline snapshot.

    Select the build that defines which source to compile. In the dropdown, the options you see are build definitions that were created from DefaultTemplate. (The build specified in the test plan properties is ignored.)

    Add invocations of the deployment scripts that you prepared earlier. Each script is executed on the test machine that you designate.

    Set the working directory, unless you make a habit of starting each script with a change directory (cd) command. Each script is copied to the lab machine and runs there.


  3. In the Test page, specify the test plan and suites that you want to be executed. The test plan determines what lab environment will be used, what test data will be collected, and might also specify additional files to be deployed on the lab machines. The test suites determine which test cases will be run.

    Pick a test plan and select test suites.

    Select test settings. This overrides the test settings defined in the test plan.

    Save the lab build definition.


The build will run on the defined trigger.

Automating deployment of ClickOnce applications

ClickOnce applications have a rather different deployment path. This feature allows you to deploy a desktop application straight from Visual Studio by using the Publish Now button in the project properties. There are certain limitations: for example, the application requires user interaction, so ClickOnce isn't useful for services.

Apart from being easy to deploy from the developer's point of view, the really interesting thing is that whenever a ClickOnce application is started by its user, it checks back to its deployment site to see if there is a new version.

You can put this feature to work in your tests—and test it at the same time.

There are two approaches. One is to install the application on a test machine and allow it to update itself automatically from the latest build. Alternatively, you can use the coded UI test to run the installer explicitly.

Letting the application update itself:

  1. When you first set up your test environment, install the application on the appropriate machine. Do this before you take the snapshot to which the environment will revert at the start of every test.
  2. Add to your server build definition a command script that runs the ClickOnce publication as part of the server build. This is the server equivalent of the developer pressing the Publish button in Visual Studio. To see how to do it, read Building ClickOnce applications from the command line on MSDN; there is a sketch of such a command after these steps.
  3. Use an empty deployment script for the test machine on which the application will run.
  4. Before you record your test, make sure that a fresh version of the application has been built. Start the recording before you start the application. The application will detect a new version and pop up a dialog asking you for permission to install it. Choose OK, acknowledging that the new version is now part of your test.
  5. Stop recording, generate the code of the test, and then close UI Builder.
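For step 2, the script might invoke MSBuild's Publish target, the command-line equivalent of the Publish button. This is a sketch only: the project path and publish location are assumptions, and the MSDN topic describes the full set of publish properties.

msbuild FabrikamClient\FabrikamClient.csproj /target:publish /property:PublishDir=\\fileserver\fabrikam\publish\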

This test will run the application and allow the latest version to install itself.

However, it will fail if you run it twice for the same version, because it expects to see the dialog that asks permission to install a new version. To allow your test to work in both situations (with or without a new build), open UIMap.uitest, select that step, and in the Properties window, set Continue on error to true.

Using a coded UI test to run the installer

It's a good idea to test the installer explicitly, especially if it has options. You can create a coded UI test to do this.

You will need:

  • An environment where the machine on which you will run the installer is set as the machine on which coded UI tests will be run. If you're testing the installer for a desktop application, you already have that. If you are testing the installation of a server, you will need to create a new environment in which a server machine is designated as the coded UI test machine. You do this in the Advanced page of the New Environment wizard.

  • A deployment script that copies the installer to the test machine, but does not run it:

    rem %1 is the build location – for example \\buildMachine\drops\latest.
    mkdir c:\setup
    rem Replace OrderFullfillmentSetup for each test machine:
    xcopy /E /Q %1\OrderFullfillmentSetup\*.* c:\setup
    
  • Visual Studio must also be installed on this test machine.

Log in to the test machine, and use Visual Studio to record a sequence in which you run the installer.

You should also run the application, to make sure that it has been installed correctly.

Generate code from the recording.

One of the challenges of automating tests for installation, update, or removal is that unpredictable messages often appear. Often the cause is a security prompt about credentials, or something else peculiar to the environment under test.

Again, we could set Continue on error for a step where something indeterminate occurs, but it is often better to log the cause of the error while skipping over it. For this, an explicit try/catch is useful. Move the code into UIMap.cs so that you can edit it.
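Here is a minimal sketch of such a step, assuming an optional security prompt; the window and button property names are illustrative, not the exact generated names:

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITest.Extension;

public void ClickInstallAndIgnoreSecurityPrompt()
{
    try
    {
        // The recorded action; UISecurityWarningWindow and UIRunButton
        // are assumed names for the generated controls.
        Mouse.Click(this.UISecurityWarningWindow.UIRunButton);
    }
    catch (UITestControlNotFoundException e)
    {
        // The prompt did not appear on this run; log the fact and carry on.
        Trace.TraceWarning("Security prompt not shown: {0}", e.Message);
    }
}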

Viewing the test results and logging bugs

To display a list of recent test runs, in Microsoft Test Manager, choose Testing Center, Test, Analyze Test Runs.

You can edit the title and comment and, if necessary, the reason for failure of any test run.

You can open the details of any individual test and inspect the data collected from the test.

If necessary, you can also create a bug work item, which will automatically include the test data.

Driving tests with multiple clients

Most server-based systems have interesting end-to-end tests that involve more than one client machine. For example, when a customer orders an ice cream through the public web interface, the order should appear on the warehouse interface until it is dispatched. To test that story, we have to write a test that drives first one interface and then the other.

[Figure: An end-to-end test with multiple client machines]

The interesting question is, "Where should we execute the end-to-end tests?" The lab framework assumes that there is a single client machine on which the tests will be run.

Potential schemes include:

  • Install all the user interfaces on the same machine. Perform manual tests by driving the user interfaces side by side. From these tests, create coded UI tests that can run on the same configuration.
  • Keep the user interfaces on separate machines. Write a proxy for the end-to-end test that you install on each machine. Install the end-to-end controller on the designated client machine, which could be a separate machine or one of the user interface machines. To write the proxies, you would generate CUITs and then edit the source to perform actions in response to messages from the end-to-end controller, as in the sketch after this list.
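Here is a minimal sketch of such a proxy, assuming a simple line-based TCP protocol; the port number, message names, and UIMap methods are all illustrative:

using System.IO;
using System.Net;
using System.Net.Sockets;

public class EndToEndProxy
{
    // Listen for messages from the end-to-end controller and perform
    // the corresponding recorded UI actions on this machine.
    public void Run()
    {
        var listener = new TcpListener(IPAddress.Any, 9100);
        listener.Start();
        try
        {
            using (TcpClient controller = listener.AcceptTcpClient())
            using (var reader = new StreamReader(controller.GetStream()))
            {
                string message;
                while ((message = reader.ReadLine()) != null && message != "Stop")
                {
                    if (message == "PlaceOrder") new UIMap().PlaceOrder();
                    else if (message == "CheckOrder") new UIMap().AssertOrderVisible();
                }
            }
        }
        finally
        {
            listener.Stop();
        }
    }
}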

You can use a similar strategy to simulate the effect of external systems.

Summary

Automated system tests improve the stability of your code while reducing the costs of running the tests. At the same time, the cost of writing the tests has to be balanced against the benefits. Coded UI tests are a quick and reliable way of creating automated system tests, but must be used with care because they are vulnerable to changes in the user interface.

In this chapter we've seen how to define a continuous build-deploy-test workflow that runs coded tests in a lab environment.

Differences between Visual Studio 2010 and Visual Studio 2012

  • Single test agent. In Visual Studio 2010, when you prepare a virtual machine for use in automated tests, you have to install the Test Agent, Lab Agent, and Build Agent. These act as proxies for the Test and Build Controllers, installing software, invoking test methods, and collecting test data.

    In Visual Studio 2012, there is just a single agent, the Test Agent. You can install it manually to prepare a virtual machine (VM) for the store. Or, you can have Lab Center install it by using the Repair command on an environment.

  • Specialized Test Projects. In Visual Studio 2010, there is a single type of test project, to which you can add different kinds of test files such as coded UI tests, load tests, and so on. In Visual Studio 2012, there are different types of test projects, to which different combinations of test files can be added.

  • Compatibility. You can use a combination of 2010 and 2012 RC version products, and most things work. For example, tests created on Visual Studio 2012 will run in a lab set up in Team Foundation Server 2010.

Where to go for more information

There are a number of resources listed in text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away.
