Chapter 11 – Consolidating Various Types of Performance Acceptance Criteria

 

patterns & practices Developer Center

Performance Testing Guidance for Web Applications

J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
Microsoft Corporation

September 2007

Objectives

  • Learn how to identify and capture performance requirements and testing objectives based on the perspectives of system users, business owners of the system, and the project team, in addition to compliance expectations and technological considerations.
  • Learn how to consolidate this information into a single set of verifiable, complementary performance acceptance criteria.
  • Learn how to review and update the performance acceptance criteria throughout the project as more information becomes available.

Overview

Performance requirements and testing objectives are typically derived from five sources: the perspectives of the system users, business owners of the system, and the project team, as well as compliance expectations and technological considerations. This chapter demonstrates how to blend and consolidate the information collected from these sources into a single set of verifiable, complementary performance requirements and testing objectives.

Determining the desired performance characteristics for a system from particular perspectives is only the first part of determining the overall performance acceptance criteria for that system. After examining the desired performance characteristics from limited perspectives, you must resolve those characteristics against one another. For example, end users may desire sub-second response time for every transaction; business stakeholders may want the system to support millions of users; and compliance criteria may mandate strict security policies.

Individually, any of these characteristics may be achievable, but collectively they may not be possible due to time, technology, and/or budget constraints. Finding a way to achieve these desired characteristics presents team-wide challenges that should be addressed proactively, not reactively after conflicts become apparent through testing.

How to Use This Chapter

Use this chapter to understand how to consolidate various types of performance acceptance criteria based on the perspectives of system users, business owners of the system, and the project team, and on compliance expectations and the technologies involved. To get the most from this chapter:

  • Use the “Terminology” section to understand common terms related to performance-testing requirements and acceptance criteria, so that you can use those terms correctly in the context of your project.
  • Use the “Approach for Consolidating Acceptance Criteria” section to get an overview of the approach to determining performance-testing acceptance criteria, and as a quick reference guide for you and your team.
  • Use the various activity sections to understand the details of consolidating acceptance criteria based on the perspectives of system users, business owners of the system, and the project team, and on compliance expectations and technological considerations.

Terminology

The following terms and definitions help to distinguish between the various types of performance characteristics.

Performance requirements

Performance requirements are those criteria that are absolutely non-negotiable due to contractual obligations, service level agreements (SLAs), or fixed business needs. Any performance criterion that will not unquestionably lead to a decision to delay a release until the criterion passes is not absolutely required ― and therefore, not a requirement.

Performance goals

Performance goals are the criteria that your team wants to meet before product release, although these criteria may be negotiable under certain circumstances. For example, if a response time goal of three seconds is set for a particular transaction but the actual response time is 3.3 seconds, it is likely that the stakeholders will choose to release the application and defer performance tuning of that transaction for a future release.

Performance thresholds

Performance thresholds are the maximum acceptable values for the metrics identified for your project, usually specified in terms of response time, throughput (transactions per second), and resource-utilization levels. Resource-utilization levels include the amount of processor capacity, memory, disk I/O, and network I/O that your application consumes. Performance thresholds typically equate to requirements.

Performance targets

Performance targets are the desired values for the metrics identified for your project under a particular set of conditions, usually specified in terms of response time, throughput, and resource-utilization levels. Resource-utilization levels include the amount of processor capacity, memory, disk I/O, and network I/O that your application consumes. Performance targets typically equate to project goals.

Performance testing objectives

Performance testing objectives refer to data collected through the performance-testing process that is anticipated to have value in determining or improving product quality. However, these objectives are not necessarily quantitative or directly related to a performance requirement, goal, or stated quality of service (QoS) specification.
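
To make the distinctions between these terms concrete, the sketch below expresses a purely hypothetical set of criteria for a single “submit order” transaction as Python data. Every figure is invented for illustration and is not taken from this guidance.

    # Hypothetical example only: criteria for one transaction, expressed as data
    # so the differences between the terms above are concrete.
    submit_order_criteria = {
        # Requirement: non-negotiable (for example, written into an SLA);
        # missing it would delay the release.
        "requirement": {"response_time_seconds": 8.0},
        # Goal: what the team wants before release, but negotiable.
        "goal": {"response_time_seconds": 3.0},
        # Threshold: maximum acceptable value for a metric (typically equates to a requirement).
        "threshold": {"processor_utilization_percent": 80},
        # Target: desired value for a metric under given conditions (typically equates to a goal).
        "target": {"processor_utilization_percent": 70},
        # Testing objective: data worth collecting even when no pass/fail number exists.
        "testing_objective": "capture database CPU and memory trends under load for tuning",
    }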

Approach for Consolidating Acceptance Criteria

Consolidating acceptance criteria can be thought of in terms of the following activities:

  • Investigate end-user requirements.
  • Collect business requirements.
  • Determine technical requirements.
  • Research standards, compliance, and contracts.
  • Establish performance-testing objectives.
  • Compare and consolidate performance characteristics.
  • Review and update the performance plan.

These activities are discussed in detail in the following sections.

Investigate End-User Requirements

Once your application reaches production, the performance characteristic that matters most is that its users are not frustrated by poor performance. Users who become frustrated will find an alternative to your application, and once they do, it will not matter how many users the application can support, how much data it can process, or how efficiently it uses resources. Even if you accomplish all of that and are first to market with new features, it will still mean nothing to the user who has already given up.

Users of your application neither know nor care about the results of your performance tests, how many seconds a screen takes to display beyond their personal threshold for “too long,” or what the throughput is. They notice only whether the application seems slow, and that impression can be based on anything, from how they feel at the moment to their overall experience with other applications.

Quantifying end-user satisfaction and/or frustration can be challenging but is far from impossible. To quantify end-user satisfaction, all you need is an application and some representative users. You do not need a completed application, since a prototype or demo will suffice for a first pass. For example, with only a few lines of code in the HTML of a demo or prototype, you can control the load time for each page, screen, graphic, control, or list. Using this method, you can create several versions of the application with different response characteristics. You can then ask users to try each version and tell you whether they find that version unacceptable, slow, reasonable, fast, or whatever terms correspond to the captured goals.
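
The paragraph above suggests doing this with a few lines of code in the HTML of the demo or prototype. As one possible illustration, and not the specific method prescribed by this chapter, the sketch below uses Python’s standard library to serve a prototype page with a configurable artificial delay, so that each test run presents users with a different, known response time.

    # Sketch: serve a prototype page with an artificial delay so representative
    # users can rate different response times. The delay value is an assumption
    # to be varied between test runs (for example 0.5, 1, 3, or 5 seconds).
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ARTIFICIAL_DELAY_SECONDS = 3.0

    class DelayedPrototypeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            time.sleep(ARTIFICIAL_DELAY_SECONDS)  # simulate the candidate response time
            body = b"<html><body><h1>Prototype page</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DelayedPrototypeHandler).serve_forever()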

Because you know the actual response times, you can begin equating those numbers to the users’ reported degrees of satisfaction. This is not an exact science, but it generally works well for setting goals, especially if you follow up by asking the same questions about the application’s performance every time it is tested, whether for functional testing, user acceptance testing, beta testing, or anything else, while measuring response times in the background as testers interact with the system. This approach allows you to collect more data and refine your end-user performance goals as the application evolves.
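
As a hypothetical sketch of that correlation step, assuming each tester’s rating has been recorded alongside the known response time, a summary like the following could suggest a candidate goal. The data and the rating scale are invented.

    # Invented data: (response time in seconds, user rating) pairs collected while
    # testers interacted with different versions of the prototype.
    observations = [
        (1.0, "fast"), (2.0, "reasonable"), (3.0, "reasonable"),
        (4.0, "slow"), (5.0, "slow"), (8.0, "unacceptable"),
    ]

    acceptable = {"fast", "reasonable"}

    # Candidate end-user goal: the slowest response time that users in this
    # sample still rated as acceptable.
    candidate_goal = max(t for t, rating in observations if rating in acceptable)
    print(f"Candidate response-time goal: {candidate_goal:.1f} seconds")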

Collect Business Requirements

The real engineering challenge is not simply meeting your business requirements, but meeting them on time and within budget. Before you can strike that balance, you must first determine what those business requirements are. For example, you will need to determine the budget for new hardware and what existing hardware is available. You will need to know how rigid the target delivery date is, whether an initial release to a limited number of users is acceptable, and which project aspects take priority. For instance, if you could achieve the desired performance on time and on budget by eliminating valuable features, you must ask yourself whether that tradeoff is worth considering.

Creating a performance plan — not a performance test plan, but an overall plan for achieving desired performance characteristics — is the process most teams use to demonstrate how to spend the technical and financial performance budget in order to create the desired experiences, and to determine the overall cost of that performance in terms of dollars, resources, and schedule. Some areas worthy of significant consideration in a performance plan include:

  • Throughput and latency (e.g., do you need to ensure that deploying this application will not adversely affect other applications that use the same network?)
  • Responsiveness (e.g., are there components of your application that need to interact in a timely manner, such as a load balancer that skips a particular Web server if it does not respond in <200 milliseconds?)
  • Capacity (e.g., can you afford the infrastructure to support up to 500 users under standard conditions? See the rough sizing sketch that follows this list.)
  • Cost of entry (e.g., is it viable to achieve the end-user requirements with existing hardware?)
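
As a rough illustration of the arithmetic behind the capacity question, the sketch below applies Little’s Law to relate concurrent users, throughput, and the time each user spends per request. The user count comes from the example above; the think-time and response-time figures are assumptions made for this sketch.

    # Back-of-the-envelope capacity sketch using Little's Law:
    #   concurrent_users = throughput * (response_time + think_time)
    # All figures below are assumptions for illustration only.
    concurrent_users = 500    # expected load under standard conditions
    response_time_s = 3.0     # assumed end-user response time goal
    think_time_s = 27.0       # assumed pause between a user's requests

    required_throughput = concurrent_users / (response_time_s + think_time_s)
    print(f"Required throughput: {required_throughput:.1f} requests/second")
    # With these assumptions, roughly 16.7 requests/second must be sustained;
    # compare that figure against what the planned infrastructure can deliver.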

You may also be affected by external business requirements in addition to the internal ones. An example of an external business requirement is achieving performance characteristics that are “as good as or better than” those of competing applications. Every application in every vertical industry will have different methods and sources for determining its competitive landscape. Even if you cannot run load tests against a competitor, you can still collect valuable response-time information by, for example, periodically surfing the competing site yourself or by researching services such as www.keynote.com.
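
For instance, one simple way to collect that kind of response-time information on a recurring basis is to time ordinary page requests, as in the sketch below. The URL, the sampling interval, and the number of samples are placeholders, and timing only the base page ignores images and scripts, so treat the numbers as rough indicators.

    # Sketch: periodically time a plain page request against a site you want to
    # compare yourself to.
    import time
    import urllib.request

    URL = "https://www.example.com/"   # placeholder; substitute the page of interest
    INTERVAL_SECONDS = 3600            # placeholder; for example, once per hour

    for _ in range(24):                # for example, one day of hourly samples
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as response:
            response.read()
        elapsed = time.perf_counter() - start
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {elapsed:.2f} s")
        time.sleep(INTERVAL_SECONDS)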

The bottom line: do not assume that your site will escape comparison with others simply because there is no published standard; de facto standards are more likely to set your users’ expectations than formal metrics are.

Determine Technical Requirements

Usually, technical performance characteristics only indirectly relate to the other categories of requirements. These characteristics frequently concern the use of particular technologies, and usually consist of targets and thresholds that are specific metrics related to particular resources.

For example, it is generally agreed that once a server’s processor utilization reaches 80 percent, user requests will start to queue significantly, and users will experience poor service even though the server itself may still be running fine. Based on this, many teams set a processor utilization target of 70 percent and a threshold of 80 percent. By doing so, they are telling the performance tester to alert the team if processor utilization stays above 70 percent for more than a few seconds, and to register a defect if it stays above 80 percent for more than a few seconds under the target workloads.
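
The sketch below shows one hypothetical way to turn that 70 percent target and 80 percent threshold into a check over sampled utilization readings. The sample values and the interpretation of “more than a few seconds” as three consecutive one-second samples are assumptions.

    # Invented readings: processor utilization (%) sampled once per second.
    samples = [55, 62, 71, 74, 76, 79, 83, 85, 84, 78]

    TARGET = 70              # alert the team if sustained above this
    THRESHOLD = 80           # register a defect if sustained above this
    SUSTAINED_SAMPLES = 3    # assumed meaning of "more than a few seconds"

    def sustained_above(limit, readings, window):
        """Return True if utilization stays above 'limit' for 'window' consecutive readings."""
        run = 0
        for utilization in readings:
            run = run + 1 if utilization > limit else 0
            if run >= window:
                return True
        return False

    if sustained_above(THRESHOLD, samples, SUSTAINED_SAMPLES):
        print("Register a defect: threshold exceeded under the target workload")
    elif sustained_above(TARGET, samples, SUSTAINED_SAMPLES):
        print("Alert the team: target exceeded; tuning may be needed")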

Capturing and managing technical performance-testing requirements generally requires that you:

  • Determine objectives of performance testing.  This activity ensures that technical requirements are prioritized appropriately for the performance concerns of the project.
  • Capture or estimate resource usage targets and thresholds.  These may come from architects, administrators, design documents, or hardware/software technical documentation.
  • Capture or estimate resource budgets or allocations.  Typically assigned to developers by architects and administrators, these represent the amount of a specific resource a particular component may use so that the system as a whole will not violate its targets or thresholds (see the sketch that follows this list).
  • Identify metrics.  There is often more than one way to measure resource consumption. The team must agree on the specific method and metric for each technical requirement to enable communication and understanding.
  • Communicate results.  Collected data that relates to technical requirements is valuable to various members of the team in different forms as soon as it is available. Sharing the data immediately assists with design, tuning, and identifying requirements that may need to be revisited.
  • Keep abreast of changing objectives, targets, and budgets.  Just like other types of requirements, technical requirements are bound to change during the course of the project. Expect this and keep up with the changes.
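
As a hypothetical illustration of the resource budgets mentioned in the list above, the sketch below checks that per-component memory allocations handed to developers do not add up to more than a system-wide target. All component names and figures are invented.

    # Invented example: per-component memory budgets (MB) assigned by architects,
    # checked against a system-wide working-set target.
    SYSTEM_MEMORY_TARGET_MB = 1024

    component_budgets_mb = {
        "web front end": 400,
        "business services": 350,
        "caching layer": 200,
        "logging and monitoring": 100,
    }

    total_mb = sum(component_budgets_mb.values())
    if total_mb > SYSTEM_MEMORY_TARGET_MB:
        print(f"Budgets total {total_mb} MB, exceeding the {SYSTEM_MEMORY_TARGET_MB} MB target; revisit the allocations")
    else:
        print(f"Budgets total {total_mb} MB, within the {SYSTEM_MEMORY_TARGET_MB} MB target")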

Research Standards, Compliance, and Contracts

The performance characteristics in this category are frequently the least negotiable, whether or not they have a significant impact on end-user satisfaction, budget, schedule, or technology. Determining performance requirements in this category generally involves the following tasks:

  • Obtaining documents and standards that spell out legally enforceable performance requirements.
  • Interpreting those legally enforceable performance requirements within the context of your project.
  • Determining if any de facto standards or competitive comparisons apply to your project. If so, collect the relevant data.
  • Getting stakeholder approval for the interpretations. Because the company or even individual stakeholders could be held legally liable if the application does not meet the requirements, this is a situation where stakeholder approval is highly recommended.

Establish Performance-Testing Objectives

Determining performance-testing objectives can be fairly easy. The challenge is that the performance tester does not always have easy access to either explicit or implied objectives, and therefore frequently must conduct a systematic search for these objectives. Determining performance-testing objectives generally involves the following tasks:

Determining the overall objectives for the performance testing effort, such as:

  • Detect bottlenecks that need tuning.
  • Assist the development team in determining the performance characteristics for various configuration options.
  • Provide input data for scalability and capacity-planning efforts.

Reviewing the project plan with individual team members or small groups. Ask questions such as:

  • What functionality, architecture, and/or hardware will be changing between the last iteration and this iteration?
  • Is tuning likely to be required as a result of this change? Are there any metrics that I can collect to help you with the tuning?
  • Is this change likely to impact other areas for which we have previously tested/collected metrics?

Reviewing both the physical and logical architecture with individual team members or small groups. As you review the architecture, ask questions such as:

  • Have you ever done this/used this before?
  • How can we determine early in the process if this is performing within acceptable parameters?
  • Is this likely to need tuning? What tests can I run or what metrics can I collect to assist in making this determination?

Asking individual team members about their biggest performance-related concern(s) for the project and how you could detect those problems as early as possible.

While it is most valuable to collect performance testing objectives early in the project life cycle, it is also important to periodically revisit these objectives and ask team members if they would like to see any new objectives added.

Compare and Consolidate Performance Characteristics

After you have identified performance characteristics from each of these categories, it is important to compare and consolidate them. Often, the best way to do so is to employ a cross-functional team while developing or enhancing the performance plan. Some of the key discussions, activities, and tradeoffs to consider include:

  • Determining the technical characteristics necessary to achieve end-user and compliance requirements.
  • Comparing the technical characteristics necessary to achieve end-user and compliance requirements to the technical requirements you have already collected. Make note of significant differences that impact the project.
  • Estimating the cost — in terms of schedule, resources, and dollars — of achieving the revised technical requirements.
  • Reviewing the overall objectives for conducting performance testing to determine if those testing objectives support the technical requirements and business needs.

The key here is to focus on the experience you are trying to create, and from that, determine the associated costs of achieving that experience. In many cases this can lead to reverse-engineering desired end-user performance characteristics into actionable technical requirements, and then extrapolating those technical requirements into such relevant costs as time and money.
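
A deliberately simplified, hypothetical example of that reverse-engineering follows: starting from an end-user response-time goal and an expected peak load, derive a throughput requirement and then a rough hardware cost. Every figure is an assumption made for the example.

    # All figures are invented assumptions for the sake of the example.
    peak_concurrent_users = 2000
    think_time_s = 25.0
    response_time_goal_s = 5.0

    # Little's Law: required system throughput at peak.
    required_rps = peak_concurrent_users / (think_time_s + response_time_goal_s)

    # Assumed capacity of one web server while still meeting the response-time goal.
    rps_per_server = 20.0
    servers_needed = -(-required_rps // rps_per_server)   # ceiling division

    cost_per_server = 5000   # assumed fully loaded cost per server, in dollars
    print(f"Required throughput: {required_rps:.0f} requests/second")
    print(f"Servers needed: {servers_needed:.0f}, estimated cost ${servers_needed * cost_per_server:,.0f}")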

Review and Update the Performance Plan

As with many aspects of software development, the performance plan is a moving target. If you accomplish the previous six activities early in the development life cycle, they will likely lead to a tentative application design that can be evaluated against the technical and business requirements. As long as that design continues to make sense when evaluated against the requirements, stick with it and flesh it out incrementally as you go. When it stops making sense, recalibrate until it does.

You are likely to iterate through this process many times throughout the development cycle, each time potentially updating the application design, the performance plan, and the performance requirements. Do not try to flesh it out all at once, and do not expect perfection. The key is to keep the design, the plan, and the requirements in sync and moving together to achieve satisfied end users and happy stakeholders once the application goes live.

Summary

Performance requirements and testing objectives are typically derived from the perspectives of system users, business owners, and the project team, as well as compliance expectations and technological considerations.

To have confidence in the completeness of your system acceptance criteria, they should be based on the information collected from all these different perspectives, which should result in a single set of verifiable, complementary performance requirements and testing objectives.

It is important to always keep in mind that the performance characteristic that matters most is that application users must not be frustrated with poor performance.
