Application Performance Testing at MSCOM

One of the most important tasks that falls to the Test teams here at MSCOM is performance testing of our web applications. Performance testing is a large subject, and one that can't really be covered in a single posting, so rather than trying to cover everything at once we are going to break it up and hopefully produce some follow-up postings.

When it comes to performance testing we start with gathering information, so let's begin there. We believe that to be successful in your performance testing you need to know a few things before you jump right in. The first pieces of information we want to determine are:

· What is the acceptable performance for the web application?

· Is there any historic performance data for benchmarking?

· What information needs to be reported about the performance testing (are there performance scorecards available from other groups or teams for comparison)?

The next step is to determine whether any application-specific work needs to be done before starting the performance testing or developing a performance test plan, such as:

· Are there any application-specific barriers that need to be handled, e.g. Passport sign-in emulation or other external dependencies?

· What are the important areas of the application, where performance testing is critical (prioritize the areas of the application for performance testing)?

· Does the application need component level performance testing?

· Does the development code require any hooks in order to get any specific performance information?
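To make that last point a little more concrete, here is a minimal sketch of the kind of timing hook we mean. The class and names are our own invention for illustration, not code from an MSCOM application, and it simply assumes you want per-operation timings written to the trace listeners:

using System;
using System.Diagnostics;

// Hypothetical timing hook: wraps a piece of work with a Stopwatch and
// writes the elapsed time to the trace log for later analysis.
public static class TimingHook
{
    public static void Measure(string operationName, Action work)
    {
        Stopwatch watch = Stopwatch.StartNew();
        try
        {
            work();
        }
        finally
        {
            watch.Stop();
            Trace.WriteLine(operationName + ": " + watch.ElapsedMilliseconds + " ms");
        }
    }
}

public class TimingHookDemo
{
    public static void Main()
    {
        // A Thread.Sleep stands in for a real component call such as a
        // data-store query or an emulated Passport sign-in.
        TimingHook.Measure("FakeDataStoreQuery",
            delegate { System.Threading.Thread.Sleep(50); });
    }
}

A hook like this keeps the instrumentation in one place, so it can be turned on for a performance run and left quiet (or removed) in production.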

The acceptable level of performance can be very tricky to determine. Here at MSCOM we generally derive it from information we get from the various groups and stakeholders, each of whom has a say in what acceptable performance means for a given application. The business owners know how many customers or visitors they want to support, while the system engineers and database administrators know how much additional CPU, memory and disk utilization the servers can handle. Taken together, this information determines what acceptable performance is for our web applications.
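To show how that stakeholder input can be turned into a concrete target, here is a back-of-the-envelope sketch. Every number in it is made up for illustration (these are not MSCOM figures), and the peak-hour share and requests-per-page ratio are assumptions you would replace with your own traffic data:

public class CapacityTargetSketch
{
    public static void Main()
    {
        double dailyVisitors = 500000;       // from the business owners (hypothetical)
        double pageViewsPerVisit = 8;        // from historic traffic data (hypothetical)
        double requestsPerPageView = 12;     // images, CSS, script includes, etc. (assumed)
        double peakHourShare = 0.15;         // share of a day's traffic in the busiest hour (assumed)

        double peakPageViewsPerSecond =
            dailyVisitors * pageViewsPerVisit * peakHourShare / 3600;
        double peakRequestsPerSecond =
            peakPageViewsPerSecond * requestsPerPageView;

        System.Console.WriteLine("Target page views/sec: {0:F1}", peakPageViewsPerSecond);
        System.Console.WriteLine("Target requests/sec:   {0:F1}", peakRequestsPerSecond);
    }
}

With these made-up numbers the application would need to sustain roughly 167 page views per second, or about 2,000 requests per second, during the peak hour; the engineers and DBAs can then say whether the hardware has the headroom to support that.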

After determining what the performance should be, we have to figure out how to measure it. This depends a lot on what the application looks like. We have applications that do many different things, ranging from serving up static or dynamic content to querying large or small data stores, and even some that send email. This is where you really have to know what your application does in order to decide what to measure and how to measure it.

There are a number of resources available to help you identify what to measure when running your performance tests. There are books and websites that are dedicated to this subject, a couple to take a look at are:

· Performance Testing Microsoft® .NET Web Applications

· Improving .NET Application Performance and Scalability

For a typical web application here are some of the major things that we look at:

· Requests per second (how many HTTP requests can be handled per second?)

· Page views per second (how many web pages can be served per second? Since a page is almost always made up of more than one request, this number will differ from requests per second.)

· CPU utilization (how much processing power does the web application take?)

· Memory usage (how much memory does the application need?)

· Throughput (how many bytes per second does the application serve?)

· Response time (how long does it take for a given request to complete?)

These results should be summarized and then analyzed to identify performance bottlenecks or issues. There are a myriad of other things that can and should be measured and analyzed and we encourage you to do some research on what each of these is and how it relates to your applications.
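Several of these numbers are exposed as Windows performance counters. As a concrete illustration, here is a minimal sketch of sampling a couple of them from code while a test runs. The category, counter and instance names shown are the standard ones on our machines, but they can vary by OS and ASP.NET version, so verify them in perfmon first; the counter choices and one-second interval are just for illustration:

using System;
using System.Diagnostics;
using System.Threading;

public class CounterSamplerSketch
{
    public static void Main()
    {
        PerformanceCounter cpu =
            new PerformanceCounter("Processor", "% Processor Time", "_Total");
        PerformanceCounter requestsPerSec =
            new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__");

        // Rate counters return 0 on the first read, so prime them once.
        cpu.NextValue();
        requestsPerSec.NextValue();

        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0,6:F1} %   ASP.NET requests/sec: {1,8:F1}",
                cpu.NextValue(), requestsPerSec.NextValue());
        }
    }
}

In practice the load test tooling collects these counters for us (more on that below), but sampling them by hand like this is a quick way to sanity-check a single server.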

The final thing we want to mention in this post is some of the tools that we use to help us with our performance testing. We have recently started using the latest release of Visual Studio, which includes some very cool testing features in the Team Editions. The Team Edition for Software Testers and the Team Suite edition both contain the testing features, one of which is the creation of load tests. This allows us to create and run our unit tests and performance tests all from Visual Studio. In addition to running load tests, Visual Studio will collect performance counters for analysis.
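Load tests in Visual Studio are typically built on top of web tests, which are usually recorded in the IDE but can also be written as coded web tests. Here is a minimal sketch of what a coded web test looks like; the URL and the status-code expectation are placeholders rather than a real MSCOM application, and in practice we let the recorder generate most of this:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class HomePageWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Each request yielded here is issued by the load test engine, and
        // its timing feeds the requests/sec and response time numbers.
        WebTestRequest home = new WebTestRequest("http://localhost/sampleapp/default.aspx");
        home.ExpectedHttpStatusCode = 200;
        yield return home;
    }
}

A load test then points at one or more web tests (or unit tests) like this one, defines a load pattern such as a constant or stepped user load, and specifies which performance counters to collect from the web and database servers while the test runs.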

Like we said at the beginning of this posting, we are planning on putting together some more information on how we do performance testing at MSCOM, so please be on the lookout for future postings.

Comments

  • Anonymous
    February 06, 2007
    Seems like the start of a nice collection of posts :) The tests you mention above seem to be typical technical statistics that are collected. Quite often this is enough, but for many users even these don't show the full picture for app performance since there are factors like page weight, networking at play. It's a long shot, but my question is, do you have an overall testing framework in place that takes into account these 'other' factors when doing performance evaluation? Often I see an 'us versus them situation' and Mercury LoadRunner is suggested as a tool that will do the end to end testing.
  • I do realise that you can often track down these issues, by tracing at various hops and points in the app - but sometimes this is not acceptable...