About pipeline tests
Azure DevOps Services | Azure DevOps Server 2022 - Azure DevOps Server 2019
This article defines terms commonly used in pipeline test reports and test analytics, and provides tips for better testing in Azure Pipelines.
Term | Definition |
---|---|
Duration | Time elapsed in the execution of a test, a test run, or the entire test execution in a build or release pipeline. |
Owner | Owner of a test or test run. The test owner is typically specified as an attribute in the test code. See Publish Test Results task to view the mapping of the Owner attribute for supported test result formats. |
Failing build | Reference to the build containing the first occurrence in a series of consecutive failures of a test case. |
Failing release | Reference to the release containing the first occurrence in a series of consecutive failures of a test case. |
Outcome | There are 15 possible outcomes for a test result: Aborted, Blocked, Error, Failed, Inconclusive, In progress, None, Not applicable, Not executed, Not impacted, Passed, Paused, Timeout, Unspecified, and Warning. Some of the commonly used outcomes are:<br>- Aborted: Test execution terminated abruptly due to internal or external factors, for example, bad code or environment issues.<br>- Failed: Test not meeting the desired outcome.<br>- Inconclusive: Test without a definitive outcome.<br>- Not executed: Test marked as skipped for execution.<br>- Not impacted: Test not impacted by the code change that triggered the pipeline.<br>- Passed: Test executed successfully.<br>- Timeout: Test execution duration exceeding the specified threshold. |
Flaky test | A test with non-deterministic behavior. For example, the test may result in different outcomes for the same configuration, code, or inputs. |
Filter | Mechanism to search for test results within the result set, using the available attributes. |
Grouping | An aid to organizing the test results view based on available attributes such as Requirement, Test files, Priority, and more. Both test report and test analytics provide support for grouping test results. |
Pass percentage | Percentage of tests with a passing outcome, measured for a single instance of execution or over a period of time. |
Priority | Specifies the degree of importance or criticality of a test. Priority is typically specified as an attribute in the test code. See Publish Test Results task to view the mapping of the Priority attribute for supported test result formats. |
Test analytics | A view of the historical test data to provide meaningful insights. |
Test case | Uniquely identifies a single test within the specified branch. |
Test files | Grouping of tests based on the way they are packaged, such as files, DLLs, or other formats. |
Test report | A view of a single instance of test execution in the pipeline, containing details of test status and help for troubleshooting, traceability, and more. |
Test result | Single instance of execution of a test case with a specific outcome and details. |
Test run | Logical grouping of test results based on how the tests were executed or published (see the YAML sketch after this table):<br>- Tests executed using built-in tasks: All tests executed by a single task, such as Visual Studio Test, Ant, Maven, Gulp, Grunt, or Xcode, are reported under a single test run.<br>- Results published using the Publish Test Results task: Provides an option to group all test results from one or more result files into a single run, or to create individual runs per file.<br>- Test results published using the APIs: The APIs provide the flexibility to create test runs and organize test results for each run as required. |
Traceability | Ability to trace forward or backward to a requirement, bug, or source code from a test result. |
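
For example, grouping of results into test runs can be controlled through the `mergeTestResults` input of the Publish Test Results task. The following is a minimal sketch, assuming JUnit-format result files and a placeholder test command; with `mergeTestResults: true`, all matching result files are published as a single run, while `false` creates an individual run per file.

```yaml
steps:
# Run your test framework of choice; the command below is a placeholder.
- script: npm test
  displayName: 'Run tests'

# Publish all matching result files as a single test run.
# Set mergeTestResults to false to create an individual run per file.
- task: PublishTestResults@2
  condition: succeededOrFailed()   # publish results even when tests fail
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TEST-*.xml'
    mergeTestResults: true
    testRunTitle: 'Unit tests'
```

Publishing with `condition: succeededOrFailed()` ensures that results from failed tests still appear in the test report for troubleshooting.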
Best practices
Ensuring application reliability requires comprehensive testing in Azure Pipelines; unit tests and integration tests are both essential. Testing integrations in cloud environments, particularly for serverless applications, is challenging because of distributed architectures, misconfigured IAM permissions, and service-to-service integration issues.
To address this, consider running your code locally while interacting with genuine Azure services, which enables realistic tests and lets you attach a debugger during automated testing. Implementing this approach requires provisioning ephemeral Azure resources. Ideally, use separate Azure subscriptions or resource groups for each environment; alternatively, you can provision resources dynamically within Azure Pipelines, although this increases execution time and requires careful planning for resource decommissioning. To minimize naming conflicts, avoid explicitly naming resources unless necessary, and include the environment name in any names you do assign, as in the sketch below.
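
The following is a minimal sketch of the dynamic-provisioning approach, assuming a Bash-capable agent and a hypothetical service connection named `my-azure-connection`; the resource group name, environment name, and test command are all placeholders. It embeds the environment name and run ID in the resource group name to avoid naming conflicts, and tears the resources down even when tests fail.

```yaml
variables:
  envName: 'inttest'                                # hypothetical environment name
  rgName: 'rg-myapp-$(envName)-$(Build.BuildId)'    # unique per run to avoid naming conflicts

steps:
- task: AzureCLI@2
  displayName: 'Provision ephemeral resources'
  inputs:
    azureSubscription: 'my-azure-connection'        # hypothetical service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az group create --name "$(rgName)" --location eastus
      # Create the Azure resources your integration tests depend on here.

- script: npm test                                  # placeholder integration test command
  displayName: 'Run integration tests against real Azure services'

- task: AzureCLI@2
  displayName: 'Decommission ephemeral resources'
  condition: always()                               # clean up even if earlier steps fail
  inputs:
    azureSubscription: 'my-azure-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az group delete --name "$(rgName)" --yes --no-wait
```

The `condition: always()` on the teardown step is what makes the decommissioning plan reliable: without it, a test failure would leave the ephemeral resource group orphaned.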
Help and support
- See our troubleshooting page
- Get advice on Stack Overflow, and get support via the Developer Community