Troubleshooting Test Execution

If a test fails to run, you can investigate the failure by checking the test environment, which includes both the way the test is set up and the active test settings. In some cases, such as those related to deployment, failures are independent of test type. In other cases, the test type determines how and what to investigate. For investigation tips by test type, see Details by Test Type.

Errors that involve tests are reported to you at either of two levels:

  • Test-level errors. In the Test Results window, double-click the test result, or right-click it and then select View Test Results Details. This displays a details page for the test that shows error messages and other information, which varies by test type, such as stack-trace information for unit tests. An example of a test-level error is a test timeout error, which occurs if the test's timeout limit is reached.

  • Run-level errors. Errors at the run level, which include test settings errors, are reported through the Test Results window. When a run-level error occurs, a link appears on the Test Results window status bar. Choosing this link displays more details about the error on the Test Run Details page. You can also display the Test Run Details page by choosing Run Details on the Test Results window toolbar. An example of a run-level error is a run timeout error, which occurs if the run's timeout limit is reached. A brief sketch that contrasts the two kinds of timeout follows this list.
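
To illustrate the difference between the two levels, the following minimal MSTest sketch (the test name, the simulated work, and the two-second limit are placeholders) declares a per-test timeout with the Timeout attribute; exceeding it is reported as a test-level error for that individual test. The run timeout, by contrast, is configured in the active test settings and is reported at the run level.

    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class TimeoutDemoTests
    {
        // Test-level timeout: if this test method runs longer than two
        // seconds, the test engine aborts it and reports a timeout error
        // for this individual test in the Test Results window.
        [TestMethod]
        [Timeout(2000)]
        public void LongRunningOperation_CompletesWithinTwoSeconds()
        {
            Thread.Sleep(500); // Placeholder for a call to the code under test.
            Assert.IsTrue(true);
        }
    }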

Not all problems cause a test to fail to run. For example, if you have chosen to obtain code coverage data, running tests can generate a warning if your project has certain build settings. For more information, see Using the AnyCPU Build Setting when Obtaining Code Coverage Data.

Deployment Errors

Certain errors can be encountered for any test that can run automatically, which means any test type other than manual. These errors are frequently related to the deployment of tests. When a test is deployed, the file that contains it is copied to another folder, either to a location on the local computer or to a remote computer.

For unit tests, for example, the .dll file that was built from the test project is the file that must be deployed. If this binary file cannot be deployed, any unit tests that it contains are immediately marked as Failed in the Test Results window when you run them.

To fix this error, verify that the files are available on your local computer and that there were no build errors the last time that you rebuilt your test binaries.

Binary files are not the only files that can be deployed. You might specify that a particular file, such as a data file, is required by a test and must therefore be deployed with the test. At deployment time, if this file cannot be found because it has been moved or deleted, the test cannot run correctly and an error occurs. Also see Details by Test Type for information about this error with regard to generic tests.
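
For example, MSTest lets you attach a data file to an individual test by using the DeploymentItem attribute. The following minimal sketch assumes a hypothetical file, TestData\input.csv, in the test project; if that file has been moved or deleted, deployment fails and the test cannot run correctly.

    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ParserTests
    {
        // Deploy a data file together with the test. The file name and
        // path are illustrative; if the file cannot be found at deployment
        // time, a deployment error occurs.
        [TestMethod]
        [DeploymentItem(@"TestData\input.csv")]
        public void Parse_SampleFile_Succeeds()
        {
            // The deployed copy is placed in the deployment directory,
            // which is typically also the current directory while the
            // test runs.
            string text = File.ReadAllText("input.csv");
            Assert.IsFalse(string.IsNullOrEmpty(text));
        }
    }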

To investigate this error, first note the files and folders specified on the Deployment page of the dialog box that you use to edit test settings. For more information, see Create Test Settings to Run Automated Tests from Visual Studio. Then check those files and folders on disk to make sure that they are present and that their names match the names that you specified.
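
The entries on the Deployment page are stored in the .testsettings file, so a quick way to cross-check them is to open that file and compare its DeploymentItem entries, which typically name files or folders relative to the solution directory, against what is actually on disk. A fragment might look like the following; the file and folder names are placeholders.

    <!-- Fragment of a .testsettings file: each DeploymentItem names a file
         or folder that is copied to the deployment folder before the tests
         run. The paths shown here are placeholders. -->
    <Deployment>
      <DeploymentItem filename="TestData\input.csv" />
      <DeploymentItem filename="TestData\ConfigFiles\" />
    </Deployment>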

Your solution may have multiple test settings files. If this is the case, make sure that you examine the test settings file that was active when the test error occurred. To determine which test settings file was active, examine the Test Run Details page for that test run.

For more information about active test settings files, see How to: Select the Active Test Settings from Microsoft Visual Studio.  

Errors in Reporting Remote Test Results

When you run tests remotely, the test results might not be displayed. A problem of this kind is probably related to the remote nature of the test run.

Like test results from local test runs, results from remote runs are reported to you locally. The reporting of certain remote test results depends on the ability of Visual Studio Ultimate or Visual Studio Premium to copy generated test results files from the remote test computer to your local computer.

If you find errors that occur with remote test results, start by determining whether the network connection between the remote computer and the computer on which you are running Visual Studio has been interrupted.

For more information, see Setting Up Test Machines to Run Tests or Collect Data.

Instrumentation Errors

To enable the reporting of code coverage, the binary files that are being tested must be instrumented and then deployed before tests are run on them.

A failure to instrument the binary file causes the reporting of code coverage to fail. After the test run is completed, the Test Run Details page displays an error message that states that code coverage could not be reported, and it also states the cause.

Possible causes for a failure to instrument a binary file in place are that the file is marked as read-only or that it is being used by another process. To fix the error for a read-only binary file, first examine the attributes of the file to make sure that it can be written to. To determine which binary files to check, open the Code Coverage page of the active test settings; this is where you specified the files for instrumentation. For more information, see How to: Obtain Code Coverage Data.
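
If you prefer to check the files from code or a script instead of from Windows Explorer, a minimal sketch such as the following reports, and can optionally clear, the read-only attribute of the binaries in a build output folder. The folder path is a placeholder for your own output directory.

    using System;
    using System.IO;

    static class ReadOnlyCheck
    {
        static void Main()
        {
            // Placeholder path: point this at the output folder that contains
            // the binaries selected for instrumentation.
            string buildOutput = @"C:\MySolution\TestProject\bin\Debug";

            foreach (string path in Directory.GetFiles(buildOutput, "*.dll"))
            {
                FileAttributes attributes = File.GetAttributes(path);
                if ((attributes & FileAttributes.ReadOnly) != 0)
                {
                    Console.WriteLine("Read-only; cannot be instrumented in place: " + path);
                    // Uncomment to clear the flag so that the file can be written to:
                    // File.SetAttributes(path, attributes & ~FileAttributes.ReadOnly);
                }
            }
        }
    }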

Another cause for failure of code coverage when using in-place instrumentation can occur when you are using one or more unit tests together with a manual test. During the manual test, the tester runs the production code that is being tested. If the tester presses F5 or CTRL+F5 to start or debug the code, the project's executable file is rebuilt, which removes the instrumentation.

Also, make sure that no other process is using the binary file. For example, make sure that you do not have the file open in another instance of Visual Studio.

When instrumenting strong-named assemblies, you can encounter other errors that are related to re-signing the assembly. For more information, see Instrumenting and Re-Signing Assemblies.

Using the AnyCPU Build Setting when Obtaining Code Coverage Data

You can obtain code coverage data only when you test code in 32-bit assemblies. To guarantee this condition, set the platform target build property of your project to x86, as described in this section.

Note

The warning described in this section does not apply to C++ projects because AnyCPU is not a platform choice for C++ projects.

If you build your project with the AnyCPU platform setting, tests that are run on the resulting assembly produce code coverage data, but the test run also generates a warning. You can see the text of the warning on the Test Run Details page:

Warning VSP2013 : Instrumenting this image requires it to run as a 32-bit process.  The CLR header flags have been updated to reflect this.

This warning means that, to obtain code coverage data during this test run, the instrumented assembly was modified so that it runs as a 32-bit process. To avoid this warning, compile any assembly for which you want code coverage data with the x86 platform setting.
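
In a C# project, this is the Platform target option on the Build page of the project's properties; in the project file it corresponds to the PlatformTarget property, as in the following minimal fragment.

    <!-- Fragment of a .csproj file: compile the assembly as 32-bit so that
         it can be instrumented for code coverage without generating
         warning VSP2013. -->
    <PropertyGroup>
      <PlatformTarget>x86</PlatformTarget>
    </PropertyGroup>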

Note

If your application is meant to run on both 32-bit and 64-bit computers, remember to recompile it using the AnyCPU setting after testing has completed.

Running Unit Tests Can Lock a C++/CLI Test Assembly

You might encounter a situation in which the test execution engine opens and locks an assembly in your test project. When this happens, you cannot, for example, save changes to the assembly. This problem might occur in the following situations:

  • Case 1: You have disabled deployment for your test project, TestProjectA. TestProjectA was compiled in C++/CLI. The code within TestProjectA defines an attribute class, and that attribute decorates at least one of the test methods in TestProjectA. At this point, when you run unit tests in TestProjectA, the test execution engine opens TestProjectA.DLL and can leave it in a locked state.

  • Case 2: Your test project, TestProject1, contains a DLL that was compiled from a second test project, TestProject2. TestProject2 was compiled in C++/CLI. The code within TestProject2 defines an attribute class, and that attribute decorates at least one of the test methods in TestProject2. Now, when you run unit tests in TestProject1, the test execution engine opens TestProject2.DLL and can leave it in a locked state.

In both of these cases, the solution might have two parts. First, perform the following steps.

  1. On the Tools menu, select Options.

    The Options dialog box opens.

  2. Expand Test Tools and choose Test Execution.

  3. Under Performance, clear the check box for Keep test execution engine running between test runs.

After you complete these steps, if the problem persists, do the following:

Change your code so that the test project that was compiled in C++/CLI does not need to be loaded in the default AppDomain. One way to do this is to move the definitions of the custom attributes that you use into a separate assembly that is implemented in C#.
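
A minimal sketch of that workaround follows; the attribute name and its parameter are illustrative. The attribute class is defined in a separate C# class library, and the C++/CLI test project references that library and applies the attribute to its test methods, so that the C++/CLI test binary itself no longer has to be loaded in the default application domain just to resolve the attribute type.

    // Defined in a separate C# class library that the C++/CLI test
    // project references. The attribute name and its parameter are
    // illustrative only.
    using System;

    namespace TestAttributes
    {
        [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
        public sealed class RequiresDatabaseAttribute : Attribute
        {
            public RequiresDatabaseAttribute(string connectionName)
            {
                ConnectionName = connectionName;
            }

            public string ConnectionName { get; private set; }
        }
    }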

Details by Test Type

Certain errors occur frequently or primarily while you are running particular test types, as described in this section.

  • Ordered tests. Errors that are encountered with ordered tests frequently involve file deployment. Before the test engine can run an ordered test, it must locate and then deploy all the test files for all the contained tests, in addition to all other required files. Failure to do this for any of the individual tests would cause an error.

  • Generic tests. Deployment errors can also occur when you run generic tests. You can specify files to be deployed in two ways for generic tests: on the Deployment page of the test settings, and on the authoring page of the Generic test itself. The test might fail if you neglect to list all the required files or if the files cannot be found at the locations you specified.

    These two different ways to deploy files cause errors to appear at different levels. If the deployment error relates to a file specified in the Generic test authoring page, the error will surface at the test level. If the deployment error relates to a file specified in test settings, the error will surface at the run level.

See Also

Tasks

How to: Force Tests to Stop Running After a Specified Period of Time

Concepts

Instrumenting and Re-Signing Assemblies

Reviewing Test Results