Comparing performance reports with the Visual Studio Team System Profiler

 

With the recent release of the first Beta of Visual Studio Team System (codename Orcas), customers get their first chance to see all the great new features that we are adding to the product. For the profiler in particular, we’ve added some very cool new features that I’m really happy to finally be able to reveal publicly. Remember that the profiler is only included with VSTS for Developers and VSTS Team Suite, so make sure that you have one of those SKUs if you are following along.

Comparison Reports from the IDE

 

The first feature that I want to cover is the new comparison document that we’ve introduced to help users compare profiling data from two different profiling runs. Say that you have just checked in a possible fix for a performance issue and you want to compare a new performance report against one taken before the change to see if your fix really helped. Our new comparison features make these types of questions easy to answer. Comparing two files of performance data is a very common scenario for customers, especially in regression testing, so this was a priority feature for this release. After all, performance data taken in isolation, without goals to hit or old values to compare against, can be pretty hard to act on. Our goal with this comparison work is to help customers make better use of their performance data to achieve the performance results that they want for their applications.

I would be remiss if I did not take a quick second here to dive a little deeper into the importance of setting performance goals for your applications. Too often developers end up closing in on a product release and realizing “my app is just way too slow.” Now, it is perfectly understandable that developers want to save performance optimization for the end of the product cycle; after all, if the underlying structure is going to change greatly, why waste time early on tweaking things to run as fast as possible? But the real issue with the situation above is the vagueness of the “my app is just way too slow” part. What exactly do you mean by “too slow?” What parts of the app are too slow? What type of performance do customers expect from your app? How long do similar products take to do the same task? For our new comparison features to be really useful, you will need to take some time before and during development to get at least basic answers to questions like these.

So now that you know to set performance goals for the major functionality of your application, how can you use the new comparison functionality of the profiler to meet those goals? First off, let’s say that you have just submitted some major code changes to your product and you want to see whether those changes have improved or degraded the performance of your app. Start by moving to the new “Developer” top-level menu in VS. This menu (Figure 1.1) is new, so expect that the names and locations of the items in it may change by the time RTM rolls around. From this menu select the “New Comparison Report” option to start the process of creating a new comparison file.


(Figure 1.1 – The new VSTS developer menu)

 

Next, a dialog will pop up asking you to select two performance data files to compare. Note that in this screen you can select any combination of VSP and VSPS files, provided that both files were collected in the same mode (both sampling or both instrumentation). If you have used the VSTS profiler before you will know that VSP files are the report files created by the profiler, but you may not be familiar with VSPS files. I will cover VSPS files in more depth in a later article, but in a nutshell they are VSP files that have been saved after performance analysis, resulting in a much smaller file that can be opened much more quickly than a full VSP file. However, after saving as a VSPS file you lose some of the advanced analysis options that you had with a VSP file. For this example, I’ll just be comparing two profiling runs of a simple console application. The application concatenates a bunch of strings with String.Concat (a common performance issue), and since VSTS code analysis flags this as a possible problem I’ve converted it to use the StringBuilder class instead. The second analysis file is from the performance run that uses StringBuilder, and I want to compare it to the earlier run to see if I have made any performance gains.
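To make this concrete, here is a minimal sketch of what such a test app might look like. The method names AppendItemsConcat and AppendItemsStringBuilder match the function names that show up in the reports below, but the loop count and item text are my own invention rather than the actual test code:

    using System;
    using System.Text;

    class Program
    {
        // Builds a string with repeated String.Concat calls; every call
        // allocates a brand-new string, which is what code analysis flags.
        static string AppendItemsConcat(int count)
        {
            string result = string.Empty;
            for (int i = 0; i < count; i++)
            {
                result = string.Concat(result, "item", i.ToString());
            }
            return result;
        }

        // Builds the same string with a StringBuilder, which grows an
        // internal buffer instead of allocating a new string per append.
        static string AppendItemsStringBuilder(int count)
        {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < count; i++)
            {
                sb.Append("item").Append(i);
            }
            return sb.ToString();
        }

        static void Main()
        {
            // Profile one version per run, then compare the two VSP files.
            Console.WriteLine(AppendItemsConcat(10000).Length);
            Console.WriteLine(AppendItemsStringBuilder(10000).Length);
        }
    }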

If you select one sampling-mode performance file and one instrumentation-mode performance file, you will get an error popup about incomparable file types. Sampling and instrumentation work in totally different ways and with completely different column sets, so comparisons between the two types of files are not allowed.

After selecting the two files to compare and clicking “OK,” a new document window will open in the IDE and you will see progress bars for the analysis of both performance files. After loading finishes you will see a comparison document like the one below (Figure 1.2).


(Figure 1.2 – The comparison report view)

 

There is a lot of information in that comparison report, so let me break it down. In the top left “Comparison Files” groupbox we see the filenames of the two report files that we are comparing (Concat.vsp and StringBuilder.vsp). In the “Comparison Options” groupbox we specify what values to use for the comparison and what magnitude of change is required for an item to actually show up in the comparison report. The “Table” value in this group specifies the type of objects that we are comparing. The default is to compare functions, but you can switch this to compare other items such as modules or IPs. The value in the “Column” combo box specifies which report column to use when comparing the baseline report against the new report. The column values available for comparison change based on the comparison table that you have selected. By default, we are comparing the “Exclusive Samples %” value for the functions table. If you are not familiar with the various performance columns (you can get more info on performance report columns here), “Exclusive Samples %” is the percentage of total samples that were taken in the specific function itself and not in any subfunction. Finally we have the “Threshold” value. This value specifies the minimum difference that there must be between the new and baseline values for a function to actually be listed in the comparison report. For example, let’s change the threshold value to 1.0 and click the apply button. Now the comparison report has been trimmed to show only the functions that changed by at least one percent of the overall samples (Figure 1.3).


(Figure 1.3 – The trimmed comparison document)
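As a quick aside, if the exclusive/inclusive distinction is new to you, this tiny hypothetical example (not part of the Concat/StringBuilder app) shows how samples get attributed:

    using System;

    class SamplesDemo
    {
        static long Child()
        {
            // Samples taken while this loop runs count as exclusive samples
            // for Child, and as inclusive samples for both Child and Parent.
            long sum = 0;
            for (int i = 0; i < 100000000; i++) { sum += i; }
            return sum;
        }

        static long Parent()
        {
            // Samples taken on this loop count as exclusive samples for
            // Parent itself; everything under Parent, including Child,
            // rolls up into Parent's inclusive samples.
            long sum = 0;
            for (int i = 0; i < 100000000; i++) { sum += 2; }
            return sum + Child();
        }

        static void Main()
        {
            Console.WriteLine(Parent());
        }
    }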

Now that we have trimmed our report to a more manageable size, let’s take a look at the body of the comparison report. In the “Comparison Column” we see the names of the functions that are being compared between the reports. Next we see columns showing the value of “Exclusive Samples %” in the baseline report, its value in the comparison report, and the delta between the two (comparison value – baseline value). Note that now that we have changed our threshold value to 1.0, only functions with an absolute delta value greater than 1.0 show up in the report. If we were to change the threshold to 0.0 we would see every function in both reports, even functions whose value did not change at all. Functions that were present only in the baseline report or only in the comparison report also show up, as long as their value (and thus their delta value) exceeds the threshold.
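The filtering rule itself is simple. Conceptually it boils down to something like the following sketch (my own illustration of the behavior described above, not the product’s actual implementation):

    using System;

    static class DiffFilter
    {
        // A row appears in the comparison report when the absolute delta
        // meets the threshold. A function present in only one report is
        // treated as having a value of zero in the other, so its own
        // value becomes its delta.
        public static bool ShowRow(double baselineValue, double comparisonValue, double threshold)
        {
            double delta = comparisonValue - baselineValue;
            return Math.Abs(delta) >= threshold;
        }
    }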

So what does the comparison report tell us about our change from String.Concat to StringBuilder.Append when constructing our strings? Well, it appears that the overall change to performance was basically a wash: we used to spend 4.4% of our time in the AppendItemsConcat function and now we spend 4.62% of our time in the AppendItemsStringBuilder function. At a lower level, we spend less time in the String.wstrcpy function and more time in memcopy, which is to be expected given how String.Concat and StringBuilder handle combining strings. Also, if we see something in the comparison report that we want to investigate further, we can right-click on that function and choose “show matching row in baseline file” or “show matching row in new file.” These options open the matching performance report (this should be very quick, since the file has already been analyzed for the comparison) so that you can dig deeper into the call tree for that specific function. Below you can see the result of choosing “show matching row in new file” on the String.wstrcpy function: the new performance file has been opened and the String.wstrcpy function has been highlighted in the function view (Figure 1.4).


(Figure 1.4 – The matching row highlighted in the new report’s function view)

From the investigation above we can see some differences between the Concat implementation and the StringBuilder implementation, but we really don’t see any measurable CPU gains. Perhaps instead of saving CPU cycles we actually ended up saving time and space in allocations. If we want to look at the different allocation patterns for the two scenarios, we can create two memory profiling reports and compare those. Note that memory allocation reports are a separate type of report, so you will not be able to compare allocation reports to non-allocation reports. In the example shown below we can see the difference in managed allocations between the String.Concat and StringBuilder scenarios (Figure 1.5).


           (Figure 1.5 – A managed memory comparison report)

In this comparison I’m looking at the “Types” table and checking the difference in bytes allocated for each specific type (Exclusive Bytes). As we can see, for the cost of allocating one StringBuilder and a Char[], I’ve cut the number of String bytes allocated in half! Not too shabby. Outside of specific performance investigations, the new comparison reports can be a really handy learning tool when you want to compare a few different approaches to the same scenario.
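If you want a rough sanity check of the allocation difference without firing up the profiler, you can approximate the comparison in code. This is just a sketch using GC.GetTotalMemory; the numbers will be much noisier than the profiler’s instrumented counts:

    using System;
    using System.Text;

    class AllocCheck
    {
        static string WithConcat(int n)
        {
            string s = string.Empty;
            for (int i = 0; i < n; i++) { s = string.Concat(s, "x"); }
            return s;
        }

        static string WithStringBuilder(int n)
        {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < n; i++) { sb.Append('x'); }
            return sb.ToString();
        }

        static void Main()
        {
            // Force a collection first so each baseline is as clean as possible.
            long before = GC.GetTotalMemory(true);
            WithConcat(10000);
            Console.WriteLine("Concat: roughly {0} bytes", GC.GetTotalMemory(false) - before);

            before = GC.GetTotalMemory(true);
            WithStringBuilder(10000);
            Console.WriteLine("StringBuilder: roughly {0} bytes", GC.GetTotalMemory(false) - before);
        }
    }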

Comparison Reports from the Command Line

 

So now that I’ve shown you how to work with these nifty comparison reports from the IDE, you may wonder how you would integrate this functionality into something like a script-driven regression prevention system. We’ve foreseen this need (and we know that not everyone loves the VS IDE like I do), so we’ve added comparison functionality to the command line performance report tool, VSPerfReport. VSPerfReport can be found in the Team Tools\Performance Tools directory under the main Visual Studio directory in Program Files.

If you run VSPerfReport.exe with the -? switch, you will see the new comparison options (called “diffing” in the command line tool) in the “File Diffing” section.

                   ---- File Diffing ----

/diff Activate diffing mode for comparing two vsp files (summary options will be ignored in diff mode)

/diffthreshold:[value] Below this value of difference the differ will regard two values as the same. Also, new data with values under this threshold will not be shown.

/difftable:[tablename] Use this specific table to perform diffing. The default is the functions table.

/diffcolumn:[columnname] Use this specific column to perform diffing. The default is the exclusive samples percent column.

/querydifftables List the valid diff tables and columns for the two vsp files provided.

To perform a simple comparison using all the default values, just type something like the following:

vsperfreport /diff Concat.vsp StringBuilder.vsp

This will compare the two reports using the “Functions” table and the “Exclusive Samples Percent” column, with a default threshold of 0.1. The command generates a CSV (comma-separated value) file containing the same data that we would see in the IDE comparison report.

Location, Function Name, Old ExclSamplesPercent, New ExclSamplesPercent, Delta

Both,"System.String.wstrcpy(char*,char*,int32)",10.35549,6.076618,-4.278869

<extra function rows snipped for brevity>

In this text report you will see the location of each function (Both, OrigOnly, NewOnly) as well as its name, old value, new value, and delta. Remember that a threshold value is set by default, so if you don’t see all the functions that you expect, some of them may fall under the threshold.
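Because the output is plain CSV, wiring it into a script-driven check is straightforward. Here is a minimal sketch of a regression gate, assuming the five-column layout shown above (the 1.0 budget is an arbitrary example value, not a recommendation):

    using System;
    using System.IO;

    class DiffGate
    {
        static int Main(string[] args)
        {
            const double budget = 1.0; // max allowed growth in Exclusive Samples %
            bool regressed = false;
            foreach (string line in File.ReadAllLines(args[0]))
            {
                if (line.StartsWith("Location")) { continue; } // skip the header row
                // Function names can contain commas inside quotes, so only
                // trust the last field (the delta), which is always numeric.
                string[] fields = line.Split(',');
                double delta;
                if (!double.TryParse(fields[fields.Length - 1], out delta)) { continue; }
                if (delta > budget)
                {
                    Console.WriteLine("Regression: " + line);
                    regressed = true;
                }
            }
            return regressed ? 1 : 0; // non-zero exit code fails the build script
        }
    }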

Just like in the IDE reports, you can tweak the table and column used for the comparison. To see all of your options, run /querydifftables along with the /diff command; you will get output like the one below (customized for your report type, of course).

                Available Tables and Columns for Diff:

Table: Function

        Column: InclSamples -- (Inclusive Samples)

        Column: ExclSamples -- (Exclusive Samples)

        Column: InclSamplesPercent -- (Inclusive Samples %)

        Column: ExclSamplesPercent -- (Exclusive Samples %)

Table: Module

        Column: InclSamples -- (Inclusive Samples)

        Column: ExclSamples -- (Exclusive Samples)

        Column: InclSamplesPercent -- (Inclusive Samples %)

        Column: ExclSamplesPercent -- (Exclusive Samples %)

Table: Line

        Column: ExclSamples -- (Exclusive Samples)

        Column: ExclSamplesPercent -- (Exclusive Samples %)

Table: IP

        Column: ExclSamples -- (Exclusive Samples)

        Column: ExclSamplesPercent -- (Exclusive Samples %)

Then just use the /difftable and /diffcolumn switches with the /diff command to change the table or column being used. Below is an example of comparing Inclusive Samples on a per-module rather than a per-function basis.

vsperfreport /diff /difftable:Module /diffcolumn:InclSamples Concat.vsp StringBuilder.vsp

You also have the /diffthreshold switch, which can be used to control which items actually get included in the comparison report. From here you should be able to generate the same types of reports from the command line as you can from the IDE.
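For example, to compare modules by raw inclusive sample counts and hide anything that moved by fewer than 100 samples (an arbitrary illustrative threshold), you could combine the switches like this:

vsperfreport /diff /difftable:Module /diffcolumn:InclSamples /diffthreshold:100 Concat.vsp StringBuilder.vsp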

Hopefully this article has given you a good intro to using the new performance report comparison tools in Visual Studio Team System codename Orcas. Before I go, I’ll leave you with one last cool feature that we’ve added for Orcas. Say you want to show a co-worker the results of your performance investigation into String.Concat and StringBuilder. Just highlight those rows, hit copy, and then paste into an Outlook e-mail (Figure 1.6).


                (Figure 1.6 – Comparison data pasted into an e-mail)

Check out that fancy HTML autopasting! Much nicer than having to look over unformatted text.

Comments

  • Anonymous
    April 03, 2007
    Hey all, I'm rolling out a new series of articles about the new Visual Studio Team System Profiler features

  • Anonymous
    April 12, 2007
    I've just started working in the Visual Studio Team System Profiler team. Many people are surprised

  • Anonymous
    April 13, 2007
    First of all, this has nothing to do with any well-known movie, Y Donde Está el Piloto? or ¿Y...

  • Anonymous
    April 19, 2007
    Performance comparison using VSTS Orcas Profiler

  • Anonymous
    April 23, 2007
    Ian, is there some place I can upload the files to? The files are around 200 MB each. You can reach me at atulgATTHERATEOFinfosysDOTcom

  • Anonymous
    May 23, 2007
    I can't believe I haven't seen this blog article before now...I just learned about it from the Profiling

  • Anonymous
    May 24, 2007
    While I was exploring some great features of Orcas I came across this interesting great blog post that

  • Anonymous
    June 05, 2007
    Steve Carroll, the profiler dev lead, and Marc Popkin-Paine, the profiler QA lead, will be presenting

  • Anonymous
    July 17, 2007
    The Visual Studio Profiler Data Collection Control Part 1: Excluding Application Startup Time Even if

  • Anonymous
    July 24, 2007
    Ian Huff has started a new blogging series on the new profiler features in Orcas. The profiler is one

  • Anonymous
    August 07, 2007
    The new Developer menu in Visual Studio Team System In my recent series of blog posts I’ve been covering

  • Anonymous
    September 14, 2007
    I wanted to do a little tour of some debugging and profiling features that you will see in Visual Studio

  • Anonymous
    September 17, 2007
    [Original post] Debugging and Profiling Features in VS 2008 [Original post date] Friday, September 14, 2007 6:48 PM I'd like to introduce some of the

  • Anonymous
    October 09, 2007
    I wanted to learn about the memory allocation profiling feature that is available in VSTS 2008. While the

  • Anonymous
    October 19, 2007
    I’ve recently been working on a series of post on my blog about many of the cool new features that we

  • Anonymous
    November 19, 2007
    What else can I say? The big day has arrived (read the official word here on Soma's blog ) and we're

  • Anonymous
    December 04, 2007
    Como muchos ya sabr&#233;is hace poco Microsoft lanz&#243; al mercado Visual Studio 2008. Si est&#225;is

  • Anonymous
    March 09, 2008
    As many of you will already know, Microsoft recently released Visual Studio 2008. If you are interested

  • Anonymous
    February 05, 2009
    [Nacsa Sándor, January 19 – February 5, 2009] This Team System edition offers an advanced set of tools

  • Anonymous
    February 05, 2009
    Is the Geeks.ms web application fast? Fast, for whom
