Analyzing a performance report in Visual Studio Team System 2005 Part 3: The Object Allocation and Object Lifetime views.

            In this installment of my walkthroughs for using the new profiler in Visual Studio Team System, I’ll show how to collect and view some interesting data about managed objects. To enable the collection of managed object data, you’ll need to set the appropriate options on the performance session property page; check here if you don’t know how to do this. Remember that object data is available only for managed applications; you won’t be able to collect this information from native applications.

            For this walkthrough we will be using the same program that I used in my two previous analysis walkthroughs (check them out here and here), so if you want to follow along with this analysis, I suggest you read those two walkthroughs first. At this point, we have used the other analysis views to locate two possible problem points (the functions Rational.reduce and Array.GetUpperBound) in the application we are profiling. But before we jump into the code to try to fix those problems, let’s see whether the managed object data can point out any other possible issues in our code.
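            The sample program’s source isn’t reproduced in this walkthrough, but a sketch along these lines (purely hypothetical; only the names Rational, reduce, and Array.GetUpperBound come from the report itself) shows one plausible arrangement of the two suspects: reduce doing non-trivial work on every call, and GetUpperBound being re-evaluated on every pass through the calling loop.

    // Hypothetical sketch only -- the actual sample's source is not shown in this post.
    using System;

    class Rational
    {
        public long Numerator;
        public long Denominator;

        public Rational(long numerator, long denominator)
        {
            Numerator = numerator;
            Denominator = denominator;
        }

        // Reduce the fraction by dividing out the greatest common divisor.
        public void reduce()
        {
            long a = Math.Abs(Numerator), b = Math.Abs(Denominator);
            while (b != 0) { long t = b; b = a % b; a = t; }   // Euclid's algorithm
            if (a > 1) { Numerator /= a; Denominator /= a; }
        }
    }

    class Program
    {
        static void Main()
        {
            Rational[] values = new Rational[100000];
            for (int i = 0; i < values.Length; i++)
                values[i] = new Rational(i + 2, 4);

            // Calling GetUpperBound(0) in the loop condition re-evaluates it on
            // every iteration, which is one way it could show up as a hot function.
            for (int i = 0; i <= values.GetUpperBound(0); i++)
                values[i].reduce();
        }
    }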

            After enabling object allocation and object lifetime data (see above for how to do this), we run our profiling scenario as normal and open the resulting analysis (.vsp) file. Now the two views that had no data before, the Allocation view and the Objects Lifetime view, are populated with data. First, we’ll take a look at the Allocation view, pictured below.

            In the far left column, we see the different object types that were allocated during the profiling run. By expanding this column (as I have done for String and Int32) you can break down each type’s data by the functions that created those objects. The default columns show the number of instances created, the total bytes allocated, and the percentage of total bytes allocated. This view can help you pick out the objects that are eating up the largest part of your memory. A good way to analyze this data is to sort by percent of total bytes to see which objects are taking up the most space.
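            To make that concrete, here is an illustrative snippet (not the sample program’s code) showing two common patterns that generate exactly the kinds of rows pictured above: repeated string concatenation fills the String row, and adding ints to an ArrayList boxes them, which is why a value type like Int32 can show up as heap allocations at all.

    // Illustrative only -- not the sample program's code. Both patterns below
    // produce the kinds of rows (String, Int32) seen in the Allocation view.
    using System;
    using System.Collections;

    class AllocationExamples
    {
        static string BuildList(int count)
        {
            string result = "";
            for (int i = 0; i < count; i++)
                result += i + ",";   // every iteration allocates new String instances
            return result;
        }

        static ArrayList BoxNumbers(int count)
        {
            ArrayList numbers = new ArrayList();
            for (int i = 0; i < count; i++)
                numbers.Add(i);      // Add(object) boxes the int, allocating an Int32 on the heap
            return numbers;
        }

        static void Main()
        {
            Console.WriteLine(BuildList(1000).Length);
            Console.WriteLine(BoxNumbers(1000).Count);
        }
    }

            Sorting by percent of total bytes quickly pulls patterns like these to the top of the view.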

            In addition to knowing how many bytes are being used by objects, it can be handy to know how long objects are held onto in a managed application. In managed code, object deallocation is handled by the garbage collector, so the programmer does not know exactly how long an object remains in memory. The Objects Lifetime view (shown below) can help you diagnose issues of this type.
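            If you want to see generations in action outside the profiler, a small standalone console sketch (again, not part of the sample program) makes the promotion behavior visible: an object that survives a collection moves from generation 0 to generation 1, and then to generation 2.

    // Standalone sketch (not from the sample program) showing how the garbage
    // collector promotes a surviving object through generations.
    using System;

    class GenerationDemo
    {
        static void Main()
        {
            object survivor = new object();
            Console.WriteLine("Initial generation: " + GC.GetGeneration(survivor));    // typically 0

            GC.Collect();   // survivor is still referenced, so it is promoted
            Console.WriteLine("After one collection: " + GC.GetGeneration(survivor));  // typically 1

            GC.Collect();
            Console.WriteLine("After two collections: " + GC.GetGeneration(survivor)); // typically 2
        }
    }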

            As in the Allocation view, the far left column lists the different types of object that were allocated during the profiling run. The next columns (Gen 0 instances collected, Gen 1 instances collected, Gen 2 instances collected, Large object heap instances collected and Instances alive at end) detail how long objects of each type were kept around before being garbage collected. Generation zero collections happen most frequently and mainly reclaim objects that exist only for a short time. Generation one collections occur less frequently than generation zero, and generation two collections are the least frequent of all. So if you have many large objects being held around until generation two, they are tying up a large portion of memory that you may not have known you were using. Also, some objects are still alive when the profiling run ends; those are counted in the Instances alive at end column. In my example program, the profiling run was so short that every object was either collected in generation zero or held until the end, which makes this view less useful for diagnosing performance issues here. For longer-running profiling sessions, however, this type of data can be invaluable.
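            The kind of pattern that piles up in the Gen 2 and Instances alive at end columns is usually something holding references longer than expected. A hypothetical static cache like the one below (not taken from the sample program) keeps every buffer it is handed alive for the entire run, so those byte arrays would never appear in the Gen 0 column.

    // Illustrative pattern only -- a static cache that keeps every object it is
    // handed alive, so those objects survive into generation 2 (or show up under
    // "Instances alive at end" if the run finishes before they are released).
    using System;
    using System.Collections;

    class ResultCache
    {
        // The static list holds a reference to everything added, so nothing
        // added here can ever be collected while the program runs.
        private static ArrayList cache = new ArrayList();

        public static void Remember(byte[] result)
        {
            cache.Add(result);
        }

        static void Main()
        {
            for (int i = 0; i < 10000; i++)
                Remember(new byte[1024]);   // every buffer stays alive for the whole run
            Console.WriteLine(GC.CollectionCount(0) + " gen 0 collections so far");
        }
    }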

            The Allocation and Objects Lifetime views are quality tools for finding certain performance problems in managed applications, but in our specific demo the performance scenario was not long enough for these two views to turn up any issues. In my next walkthrough I’ll use the problems we discovered in the analysis views to find and fix the performance issue. Then we will use the profiler to quantify how much we improved the program.

Comments

  • Anonymous
    April 08, 2005
    The last three installments of this walkthrough have helped show you how to use the new profiler in Visual...

  • Anonymous
    April 08, 2005
    The final two parts of my walkthrough for using the IDE to analyze a performance report are posted here...

  • Anonymous
    June 09, 2005
    I’ve pulled together all of the technical articles and walkthroughs from the various team member blogs...

  • Anonymous
    April 03, 2007
    Comparing performance reports with the Visual Studio Team System Profiler With the recent release of

  • Anonymous
    June 14, 2008
    In Visual Studio 2008, we added the ability to quickly determine the most expensive call stack in your