Troubleshooting Package Performance

Integration Services includes features and tools that you can use to troubleshoot the performance of packages. For example, logging captures run-time information about packages, and performance counters let you monitor the data flow engine. This topic provides information about these features, along with suggestions for designing packages to avoid common performance issues.

Designing the Data Flow for Better Performance

You can troubleshoot the performance of the data flow in a package by applying the following suggestions and testing their effect:

Optimize Queries

A number of data flow components use queries, either when extracting data from sources, or in lookup operations to create reference tables. The default query uses the SELECT * FROM <tableName> syntax. This type of query returns all of the columns in the source table. Having all of the columns available at design time makes it possible to choose any column as a lookup, pass-through, or source column. However, after you choose the columns you want to use, you should revise the query to include only the columns that are used. You can type the query or use Query Builder to construct the query. Removing superfluous columns makes the data flow in a package more efficient, because the smaller a row is, the more rows can fit into one buffer, and the less work it is to process all of the rows in the dataset.
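
For example, if the data flow uses only the product key and list price from the source table, the revised query might look like the following (the table and column names here are illustrative):

  SELECT ProductID, ListPrice
  FROM Production.Product;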

Note

When you run a package in Business Intelligence Development Studio, the Progress tab of SSIS Designer lists warnings, including a warning for any data column that a source makes available to the data flow, but that is not subsequently used by downstream data flow components. You can use the RunInOptimizedMode property to remove these columns automatically.

Configure the Properties of the Data Flow Task

You can configure the following properties of the Data Flow task, all of which affect performance (a sample command line for setting these properties at run time follows this list):

  • Specify the locations for temporary storage of buffer data (BufferTempStoragePath property) and of columns that contain binary large object (BLOB) data (BLOBTempStoragePath property). By default, the value of each of these properties is the value of the TEMP environment variable. You may want to specify other folders to place the temporary files on a different hard disk drive, or to spread them across multiple drives. You can specify multiple directories by delimiting the directory names with semicolons.

  • Define the default size of the buffer that the task uses, by setting the DefaultBufferSize property, and define the maximum number of rows in each buffer, by setting the DefaultBufferMaxRows property. The default buffer size is 10 megabytes, with a maximum buffer size of 100 megabytes. The default maximum number of rows is 10,000.

  • Set the number of threads that the task can use during execution, by setting the EngineThreads property. This property provides a suggestion to the data flow engine about the number of threads to use. The default is 5, with a minimum value of 2. However, the engine will not use more threads than it needs, regardless of the value of this property. The engine may also use more threads than specified in this property, if necessary to avoid concurrency issues.

  • Indicate whether the Data Flow task runs in optimized mode (RunInOptimizedMode property). Optimized mode improves performance by removing unused columns, outputs, and components from the data flow.

    Note

    A property with the same name, RunInOptimizedMode, can be set at the project level in Business Intelligence Development Studio to indicate that the Data Flow task runs in optimized mode during debugging. This project property overrides the RunInOptimizedMode property of Data Flow tasks at design time.
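
As an illustration, you can override these properties at run time with the /SET option of the dtexec utility. The following command line is a sketch only; it assumes a package file named ETLPackage.dtsx that contains a Data Flow task named DataFlowTask:

  rem 20971520 bytes = 20 megabytes
  dtexec /F "ETLPackage.dtsx" ^
    /SET \Package\DataFlowTask.Properties[DefaultBufferSize];20971520 ^
    /SET \Package\DataFlowTask.Properties[DefaultBufferMaxRows];50000

Overriding the properties in this way lets you test several buffer configurations without editing and redeploying the package.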

Understanding How the Data Flow Task Sizes the Buffers

The data flow engine begins the task of sizing its buffers by calculating the estimated size of a single row of data. Then it multiplies the estimated size of a row by the value of DefaultBufferMaxRows to obtain a preliminary working value for the buffer size.

  • If the result is more than the value of DefaultBufferSize, the engine reduces the number of rows.
  • If the result is less than the internally calculated minimum buffer size, the engine increases the number of rows.
  • If the result falls between the minimum buffer size and the value of DefaultBufferSize, the engine sizes the buffer as close as possible to the estimated row size times the value of DefaultBufferMaxRows.
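
For example, with the default settings, if the engine estimates a row at 400 bytes, the preliminary value is 400 bytes × 10,000 rows, or about 4 MB. Because 4 MB falls between the minimum buffer size and the 10-MB value of DefaultBufferSize, the engine creates buffers of about 4 MB that hold 10,000 rows each. If the estimated row size is 2,000 bytes instead, the preliminary value of 20 MB exceeds DefaultBufferSize, and the engine reduces the number of rows to approximately 5,000 per buffer.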

Adjusting the Sizing of Buffers

When you begin testing the performance of your data flow tasks, use the default values for DefaultBufferSize and DefaultBufferMaxRows. Enable logging on the data flow task, and select the BufferSizeTuning event to see how many rows are contained in each buffer.

Before you begin adjusting the sizing of the buffers, the most important improvement that you can make is to reduce the size of each row of data by removing unneeded columns and by configuring data types appropriately.

When sufficient memory is available, you should use a smaller number of large buffers, rather than a larger number of small buffers. In other words, you can improve performance by reducing the total number of buffers required to hold your data, and by fitting as many rows of data into a buffer as possible. To determine the optimum number of buffers and their size, experiment with the values of DefaultBufferSize and DefaultBufferMaxRows while monitoring performance and the information reported by the BufferSizeTuning event.

Note

The properties of data flow tasks discussed in this section must be set separately for each data flow task in a package.

Avoid Unnecessary Sorting

Sorting is inherently a slow operation, and avoiding unnecessary sorting can enhance the performance of the package data flow.

If the source data is sorted, either because the SELECT query uses an ORDER BY clause or because the data was inserted into the source in sorted order, you can provide a hint that the data is sorted and avoid using a Sort transformation to satisfy the sorting requirements of certain downstream transformations. For example, the Merge and Merge Join transformations require sorted inputs. To provide a hint, you set the IsSorted property on the output of an upstream data flow component to True and specify the sort key columns on which the data is sorted. For more information, see How to: Set Sort Attributes on an Output.
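
For example, a source query like the following returns rows that are already ordered on the join key (the table and column names are illustrative). After using such a query, set IsSorted to True on the source component's output and set the SortKeyPosition property of the CustomerID output column to 1:

  -- Rows arrive sorted on CustomerID; downstream Merge or Merge Join
  -- transformations can consume them without a Sort transformation.
  SELECT CustomerID, CustomerName, City
  FROM dbo.Customer
  ORDER BY CustomerID;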

If you have to sort the data in the data flow, you can improve performance by designing the data flow to use as few sort operations as possible. For example, if the data flow uses a Multicast transformation to copy the dataset, you can sort the dataset once before the Multicast transformation does its work, instead of sorting multiple outputs after the transformation.

For more information, see Sort Transformation, Merge Transformation, Merge Join Transformation, and Multicast Transformation.

Optimize the Slowly Changing Dimension Transformation

The Slowly Changing Dimension Wizard and the Slowly Changing Dimension transformation are general-purpose tools that meet the needs of most users. However, the data flow generated by the wizard is not optimized for performance.

In the Slowly Changing Dimension transformation, the slowest components are typically the OLE DB Command transformations that perform UPDATE statements one row at a time. You can most often improve the performance of the Slowly Changing Dimension transformation by replacing these OLE DB Command transformations with destination components that save all the rows to be updated to a staging table. You can then add an Execute SQL task that performs a single, set-based Transact-SQL UPDATE against all rows at the same time.
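
The following statement is a minimal sketch of such a set-based update; it assumes a dimension table named dbo.DimCustomer and a staging table named dbo.CustomerStaging that the destination component loads (all object names are illustrative):

  -- One set-based UPDATE from an Execute SQL task replaces the
  -- row-by-row UPDATEs issued by the OLE DB Command transformation.
  UPDATE d
  SET d.CustomerName = s.CustomerName,
      d.City = s.City
  FROM dbo.DimCustomer AS d
  INNER JOIN dbo.CustomerStaging AS s
      ON d.CustomerKey = s.CustomerKey;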

Advanced users can design a custom data flow for slowly changing dimension processing that is optimized for large dimensions. For a discussion and example of this approach, see the section, "Unique dimension scenario," in the Microsoft white paper, Project REAL: Business Intelligence ETL Design Practices.

Optimize Aggregations in the Aggregate Transformation

The Aggregate transformation includes a number of properties that you can use to improve its performance. If you know the exact or approximate number of key values in the dataset, you can set the Keys and KeysScale properties. Similarly, you can specify the exact or approximate number of distinct values that the transformation is expected to handle for a COUNT DISTINCT operation by setting the CountDistinctKeys and CountDistinctScale properties. When you set these properties, the transformation avoids reorganizing its cached totals at run time, which improves performance.

If you need to create multiple aggregations in a data flow, consider creating them in one Aggregate transformation instead of creating multiple transformations. This approach improves performance especially when the aggregations are subsets of other aggregations, because the transformation can optimize its internal storage and scan the incoming data only once. For example, if an aggregation uses a GROUP BY clause and an AVG aggregation, combining them into one transformation can improve performance. However, you should consider this approach only if memory is a constraint, because performing multiple aggregations within one Aggregate transformation serializes the aggregation operations.
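
To illustrate what "subsets of other aggregations" means, the two queries below differ only in grouping granularity (the object names are illustrative). In the data flow, a single Aggregate transformation with two outputs can produce both results while scanning the input rows only once:

  -- Fine-grained aggregation: one output of the transformation.
  SELECT OrderYear, OrderMonth, AVG(SalesAmount) AS AvgSales
  FROM dbo.FactSales
  GROUP BY OrderYear, OrderMonth;

  -- Coarser aggregation over the same input: a second output.
  SELECT OrderYear, AVG(SalesAmount) AS AvgSales
  FROM dbo.FactSales
  GROUP BY OrderYear;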

For more information, see Aggregate Transformation.

Configure Buffer Throttling in the Merge Join Transformation

The Merge Join transformation includes the MaxBuffersPerInput property, which specifies the maximum number of buffers that can be active for each input at one time. You can use this property to tune the amount of memory that the buffers consume and consequently the performance of the transformation. The larger the number of buffers, the more memory the transformation uses and the better the performance is. The default value of MaxBuffersPerInput is 5, which is the number of buffers that works well in most scenarios. To tune performance, you may want to try using a slightly different number of buffers such as 4 or 6. If possible, you should avoid using a very small number of buffers. For example, setting MaxBuffersPerInput to 1 instead of 5 has a significant impact on performance. Also, you should not set MaxBuffersPerInput to 0. This value means that no throttling occurs, and depending on the data load and the amount of memory available, the package may not complete.

To avoid a deadlock, the Merge Join transformation may temporarily increase the number of buffers that it uses beyond the value of MaxBuffersPerInput. After the deadlock condition is resolved, the number of buffers returns to the value of MaxBuffersPerInput.

For more information, see Merge Join Transformation.

Test the Performance of Destinations

You may find that saving data to destinations takes more time than expected. To identify whether the slowness is caused by the inability of the destination to process data quickly enough, you can temporarily replace the destination with a Row Count transformation. If the throughput improves significantly, it is likely that the destination that is loading the data is causing the slowdown. For more information, see Row Count Transformation.

Monitoring the Performance of the Package

Integration Services includes tools and features that you can use to monitor the performance of a package. Use the following suggestions to determine which parts of the package have the greatest effect on performance:

Review the Information on the Progress Tab

SSIS Designer provides information about both control flow and data flow when you run a package in Business Intelligence Development Studio. The Progress tab lists tasks and containers in order of execution and includes start and finish times, warnings, and error messages for each task and container, including the package itself. It also lists data flow components in order of execution and includes information about progress, displayed as percentage complete, and the number of rows processed.

Configure Logging in the Package

Integration Services includes a variety of log providers that allow packages to log information at run time to different types of files, or to SQL Server. You can enable log entries for packages and for individual package objects such as tasks and containers. Integration Services includes a wide variety of tasks and containers, and each task and container has its own set of descriptive log entries. For example, a package that includes an Execute SQL task can write a log entry that lists the SQL statement that the task executed, including parameter values for the statement.

The log entries include information such as the start and finish times of packages and package objects, making it possible to identify slow running tasks and containers. For more information, see Logging Package Execution, Implementing Logging in Packages, and Custom Messages for Logging.
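
For a quick view of log entries, including start and finish times, you can also run the package with the /ConsoleLog option of the dtexec utility, which writes log entries to the console without requiring a configured log provider (the package name is illustrative):

  dtexec /F "ETLPackage.dtsx" /ConsoleLog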

Configure Logging for Data Flow Tasks

The Data Flow task provides many custom log entries that can be used to monitor and adjust performance. For example, you can monitor components that might cause memory leaks, or keep track of how long it takes to run a particular component. For a list of these custom log entries and sample logging output, see Data Flow Task.

Monitor the Performance of the Data Flow Engine

Integration Services includes a set of performance counters for monitoring the performance of the data flow engine. For example, you can track the total amount of memory, in bytes, that all buffers use and check whether components are out of memory. A buffer is a block of memory that a component uses to store data. For more information, see Monitoring Performance of the Data Flow Engine.
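
For example, the Windows typeperf utility can sample these counters from the command line. The following sketch assumes that the performance object is named SQLServer:SSIS Pipeline, as on a default SQL Server 2005 instance, and uses two of the documented counters; verify the exact object and counter names in Performance Monitor on your system:

  typeperf "\SQLServer:SSIS Pipeline\Buffer memory" "\SQLServer:SSIS Pipeline\Buffers in use"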

See Also

Tasks

Troubleshooting Package Development

Concepts

Troubleshooting Package Execution
Troubleshooting the Integration Services Service

Help and Information

Getting SQL Server 2005 Assistance

Change History

15 September 2007

Changed content:
  • Added new section about optimizing the Slowly Changing Dimension transformation.

17 July 2006

Changed content:
  • Added new sections about tuning the performance of destinations, and ways to use logging.

5 December 2005

Changed content:
  • Added section about buffer throttling in the Merge Join transformation.