Knowing about 'Forwarded Records' can help diagnose hard-to-find performance issues.
Imagine a customer using an ISV application that stores certain product information in a varchar(200) column of a SQL Server table. When the system was first used, nobody ever entered a product description longer than 10 characters. Over time, however, new products were added to the portfolio, which required the introduction of a prefix in the description. On the SQL level this was done by running an update which added the appropriate prefix. Later the company merged with another one, the product description entries changed again, and they became on average 35 characters long. Again an update statement was used to implement the changes. Since the column was defined as varchar(200), there was of course no problem storing the changed values.
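Purely as an illustration of the kind of change described above ( table and column names here are made up and not part of the original system ), such an update could look like this:
-- hypothetical example: add a prefix to the existing product descriptions
update Products
set Description = 'NEWCO-' + Description
where Description not like 'NEWCO-%'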
Unfortunately, the customer found a significant, unexplainable slowdown of the system. Why did this happen?
- The updates hadn't changed any of the indexes on the table.
- The product table was used very heavily but an investigation of the user queries proved that the query plans were correct and hadn't changed over time.
- There were no hardware issues and the number of concurrent users hadn't increased.
- The table size difference due to new records shouldn't have had a huge impact because the queries always selected a limited number of rows using an index.
So what was left as a potential root cause? A closer look at several performance counters indicated that there was an unusually high number of logical reads on the product table.
And the reason for this:
Due to the new product descriptions, the row size increased. Updates of rows led to forward pointers because the rows didn't fit into their old page slots any more. This phenomenon leaves the data row pointers in the indexes unchanged but adds forward pointers in the data table ( heap ): if the updated row gets too big for its page, SQL Server moves it to a new page and leaves a forwarding pointer at its original spot. Therefore, looking up a row becomes more expensive afterwards. A lookup of the data row in a heap is no longer a direct access using the page and slot address; instead of getting the data row right away, the server might have to follow a forward pointer first.
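The effect is easy to reproduce on a small scale. The following is just a minimal sketch ( the table name fwdDemo is made up ): fill a heap with short variable-length values and then widen them so that many rows no longer fit on their original pages.
-- minimal sketch: provoke forwarded records in a heap
create table fwdDemo ( id int, txt varchar(200) )
declare @i int
set @i = 1
while @i <= 5000
begin
    insert into fwdDemo values ( @i, 'short' )
    set @i = @i + 1
end
-- widening update: many rows won't fit into their old page slots any more
update fwdDemo set txt = replicate( 'x', 150 )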
With SQL Server 2000 I used the dbcc showcontig command to prove the theory. But you have to use the option "with tableresults" to get the information about "ForwardedRecords"; in the default dbcc showcontig output you won't be able to recognize the issue.
SQL Server 2005 on the other hand offers a DMV ( sys.dm_db_index_physical_stats() ) which shows information regarding "ForwardedRecords" ( column "forwarded_record_count" in the result set ). In both cases the number of rows in the output can be a little misleading, as it is the sum of the "real" number of rows and the number of forwarded records. A select count(*) on the table still returns the "real" number which you expect.
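As a sketch, checking the small demo heap from above ( again, fwdDemo is just the made-up name from that sketch ) could look like this:
-- SQL Server 2005: forwarded records in the heap ( index_id = 0 )
select record_count, forwarded_record_count
from sys.dm_db_index_physical_stats( db_id(), object_id(N'fwdDemo'), 0, null, 'DETAILED' )
where index_level = 0
-- still returns the "real" number of rows
select count(*) from fwdDemo
-- SQL Server 2000: look at the "ForwardedRecords" column in the tabular output
-- dbcc showcontig ( 'fwdDemo' ) with tableresults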
Solution:
Once a data table ( heap ) includes forward pointers there is only one way to get rid of them: a table reorg. There are a few options to accomplish this:
- The simplest one would be to create a clustered index on the data table and then drop it again ( see the sketch after this list ).
- But there is also a way to avoid forward pointers entirely: create a clustered index from the beginning.
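A minimal sketch of the first option, again using the made-up demo table fwdDemo from above; creating and then dropping a clustered index rebuilds the heap and removes the forward pointers:
-- rebuild the heap by creating and dropping a clustered index
create clustered index CIX_fwdDemo_tmp on fwdDemo ( id )
drop index fwdDemo.CIX_fwdDemo_tmp
-- on SQL Server 2005 the newer syntax works as well:
-- drop index CIX_fwdDemo_tmp on fwdDemo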
A clustered index keeps the data rows in its leaf node level. Therefore the data is always sorted according to the index keys and forward pointers won't be used. It's like a continuous online reorg in this regard. The SQL script below shows the difference between using a data table ( heap ) and a clustered index as well as the difference between char() and varchar().
Result:
The SQL script below shows that the performance degradation can be massive ( depending of course on a lot of parameters like the specific query, memory, the number of forward pointers, and so on ). On an internal test system using a 64-bit CPU, the difference in this extreme case with about 1 million rows in the table was almost a factor of 10. Using a clustered index will solve the issue. Otherwise - depending on the update activity - a reorg might be necessary from time to time.
Running the repro script:
Below you will find a SQL script which can be used to reproduce the forward pointer condition on SQL Server 2005 and/or SQL Server 2000. You just have to adapt the sys.objects check in the first stored procedure and later on use either dbcc showcontig for SQL Server 2000 or the new DMV in SQL Server 2005 to check whether "Forwarded Records" exist.
I chose the update scenario as an example. I would recommend turning 'max degree of parallelism' off. If SQL Server can use enough memory I would also suggest running the test query a few times in order to avoid any disk IO. The sample below generates about 1 million rows in the test table. Depending on the hardware the whole test might take 30 minutes or even more. To reduce the runtime just reduce the number of loops which generate the data. Using only seven loops I got 128,000 rows in the test table. Even on a slower machine this should then be a matter of seconds ( on my test box about 10 seconds to generate the data and 30 seconds to run the updates ). There is a disadvantage though: the response time increase through forward pointers doesn't look so bad any more. But that's a common issue with single-user performance tests. Therefore focus on the number of logical reads; there you will still see a huge difference. That's generally good advice, particularly during the development phase where you might not run sophisticated stress tests yet.
The response time in Query Analyzer or Management Studio might look OK even if the number of logical reads is very high, because you are the only user on a test server. But later on in the production system, with much more data and many more concurrent users, it could become a severe bottleneck. It should always be a goal to keep the number of logical reads as low as possible.
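If you want to turn 'max degree of parallelism' off for the test, the usual way is via sp_configure. This is just a sketch; remember the original value and reset it after the test:
-- disable parallel query plans at the server level ( 1 = no parallelism )
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'max degree of parallelism', 1
reconfigure
The "set statistics io on" used in the test section below is what makes the logical reads visible in the "Messages" tab.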
Create the following two stored procedures and then run the SQL statements below step by step, one after the other.
-- procedure to create and fill the test table
if object_id('reset_forwardPointerTest') is not null
drop procedure reset_forwardPointerTest
go
create procedure reset_forwardPointerTest
( @heap_or_clusteredIndex int, @fix_or_var int, @init_null_or_big int, @number_loops int )
as
begin
declare @SQLInsert nvarchar(1000);
declare @ParamDefinition nvarchar(500);
declare @counter int
declare @exist int
set statistics profile off
set statistics io off
set statistics time off
set @exist = ( select count(*) from sys.objects
where name = 'forwardPointer' and type = 'U' )
-- SQL Server 2000
-- set @exist = ( select count(*) from sysobjects
-- where name = 'forwardPointer' and type = 'U' )
if( @exist = 1 )
drop table forwardPointer
if( @fix_or_var = 1 )
create table forwardPointer (
id int,
col1 varchar(50),
col2 varchar(50),
col3 varchar(50),
col4 varchar(50),
col5 varchar(50),
col6 varchar(50),
col7 varchar(50),
col8 varchar(50),
col9 varchar(50),
col10 varchar(50) )
else
create table forwardPointer (
id int,
col1 char(50),
col2 char(50),
col3 char(50),
col4 char(50),
col5 char(50),
col6 char(50),
col7 char(50),
col8 char(50),
col9 char(50),
col10 char(50) )
IF( @init_null_or_big = 1 )
SET @SQLInsert =
N'insert into forwardPointer values ( @p01,
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'',
''12345678901234567890123456789012345678901234'' )'
ELSE
SET @SQLInsert =
N'insert into forwardPointer values ( @p01,NULL,NULL,NULL,NULL,NULL,
NULL,NULL,NULL,NULL,NULL )'
SET @ParamDefinition = N'@p01 integer'
-- insert 1000 rows into the test table
set @counter = 1000
while @counter > 0
begin
EXECUTE sp_executesql @SQLInsert, @ParamDefinition, @p01 = @counter
set @counter = @counter - 1
end
-- replicate the 1000 rows <n> times to get more test data
if( @number_loops > 10 )
set @number_loops = 10
if( @number_loops < 1 )
set @number_loops = 1
set @counter = @number_loops
while @counter > 0
begin
insert into forwardPointer select * from forwardPointer
set @counter = @counter - 1
end
-- two test cases -> heap or clustered index
if ( @heap_or_clusteredIndex = 0 )
create index fPindex01 on forwardPointer ( id )
else
create clustered index CfPindex01 on forwardPointer ( id )
end;
go
-- procedure to run a set of updates on the test table
if object_id('run_updates') is not null
drop procedure run_updates
go
create procedure run_updates
as
begin
set statistics profile off
set statistics io off
set statistics time off
update forwardPointer set
col1 = '1234',
col2 = '1234',
col3 = '1234',
col4 = '1234',
col5 = '1234',
col6 = '1234',
col7 = '1234',
col8 = '1234',
col9 = '1234',
col10 = '1234'
update forwardPointer set
col1 = '1234567890123456',
col2 = '1234567890123456',
col3 = '1234567890123456',
col4 = '1234567890123456',
col5 = '1234567890123456',
col6 = '1234567890123456',
col7 = '1234567890123456',
col8 = '1234567890123456',
col9 = '1234567890123456',
col10 = '1234567890123456'
update forwardPointer set
col1 = '1234567890123456789012345678',
col2 = '1234567890123456789012345678',
col3 = '1234567890123456789012345678',
col4 = '1234567890123456789012345678',
col5 = '1234567890123456789012345678',
col6 = '1234567890123456789012345678',
col7 = '1234567890123456789012345678',
col8 = '1234567890123456789012345678',
col9 = '1234567890123456789012345678',
col10 = '1234567890123456789012345678'
update forwardPointer set
col1 = '12345678901234567890123456789012345678901234',
col2 = '12345678901234567890123456789012345678901234',
col3 = '12345678901234567890123456789012345678901234',
col4 = '12345678901234567890123456789012345678901234',
col5 = '12345678901234567890123456789012345678901234',
col6 = '12345678901234567890123456789012345678901234',
col7 = '12345678901234567890123456789012345678901234',
col8 = '12345678901234567890123456789012345678901234',
col9 = '12345678901234567890123456789012345678901234',
col10 = '12345678901234567890123456789012345678901234'
end
go
-- run test
-- ========
-- turn on statistics output
set statistics profile on
set statistics io on
set statistics time on
-- Create and fill test table as a heap ( no clustered index ) and
-- use data type 'char(50)' to get fixed-length. Default value = 'NULL'.
-- Replicate 1000 test rows 10 times -> 1 million rows
exec reset_forwardPointerTest 0,0,0,10
-- check number rows in test table
select count(*) from forwardPointer
-- run a set of updates on the test table
exec run_updates
-- run the test query a few times to make sure that everything
-- is cached in the buffer pool and look at the response time.
-- To verify that there is no disk IO look at the # of physical
-- reads and read-ahead reads under the "Messages" tab. Both values
-- should be '0'. But the number of logical reads should be pretty
-- high - depending on the number of rows in the table -> as it has
-- to scan every data row.
-- The query plan should show a table scan as there is no index
-- on 'col1'. There is no result set as the value in the where
-- clause doesn't exist in the table.
select max(id) from forwardPointer where
col1 = '1234567890123456'
-- Create and fill test table as a heap ( no clustered index ) and
-- use data type 'varchar(50)' to get variable-length.
-- Default value = '12345678901234567890123456789012345678901234'.
-- Replicate 1000 test rows 10 times -> 1 million rows
exec reset_forwardPointerTest 0,1,1,10
-- check number rows in test table
select count(*) from forwardPointer
-- as the table was already initialized with 'big' values we
-- won't run any updates.
-- run the test query a few times to make sure that everything
-- is cached in the buffer pool and look at the response time.
-- To verify that there is no disk IO look at the # of physical
-- reads and read-ahead reads under the "Messages" tab. Both values
-- should be '0'. But the number of logical reads should be pretty
-- high - depending on the number of rows in the table -> as it has
-- to scan every data row.
-- The query plan should show a table scan as there is no index
-- on 'col1'. There is no result set as the value in the where
-- clause doesn't exist in the table.
-- Important -> the response time should be similar to the fixed-length
-- case above. Also the # of logical reads should be similar. It will be
-- a little bit less as the values are not exactly 50 bytes long.
-- Therefore it will take less space with varchar(50) than with char(50).
select max(id) from forwardPointer where
col1 = '1234567890123456'
-- Create and fill test table as a heap ( no clustered index ) and
-- use data type 'varchar(50)' to get variable-length.
-- Default value = 'NULL'.
-- Replicate 1000 test rows 10 times -> 1 million rows
exec reset_forwardPointerTest 0,1,0,10
-- check number rows in test table
select count(*) from forwardPointer
-- run a set of updates on the test table. This should force
-- forward pointers now because the default value 'Null' will
-- be updated to 'bigger' values ( numbers with <n> digits ).
-- The final values are the same as they were used as the default
-- in the test before. So you wouldn't see a difference in the
-- result of a select * from forwardPointer.
-- WARNING : this is the longest running step of the test.
-- Depending on the hardware it might take some minutes.
-- On an internal test system with a 64bit CPU it
-- took 5 minutes. Workaround - reduce the number of
-- loops for generating the data. This would reduce
-- the # of rows.
exec run_updates
-- run the test query a few times to make sure that everything
-- is cached in the buffer pool and look at the response time.
-- To verify that there is no disk IO look at the # of physical
-- reads and read-ahead reads under the "Messages" tab. Both values
-- should be '0'. But the number of logical reads should be pretty
-- high - depending on the number of rows in the table - as it has
-- to scan every data row.
-- The query plan should show a table scan as there is no index
-- on 'col1'. There is no result set as the value in the where
-- clause doesn't exist in the table.
-- IMPORTANT -> the table still has the same # of rows after the
-- updates as it had before and in the other tests. But now you
-- should see a significant difference in response time and # of
-- logical reads. It got much slower. On an internal test system
-- using a 64bit CPU it was almost factor 10. The root cause for
-- this symptom are forwarded rows due to the updates.
select max(id) from forwardPointer where
col1 = '1234567890123456'
-- check the forwarded_record_count column.
-- It should show a number > 0 ( depends on the # rows in the table )
SELECT forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'forwardPointer'), 0, NULL , 'DETAILED')
WHERE INDEX_LEVEL = 0;
-- SQL Server 2000
-- look for the column "ForwardedRecords"
dbcc showcontig ( 'forwardPointer' ) with tableresults
-- Create and fill test table with a clustered index and
-- use data type 'varchar(50)' to get variable-length.
-- Default value = 'NULL'.
-- Replicate 1000 test rows 10 times -> 1 million rows.
exec reset_forwardPointerTest 1,1,0,10
-- run a set of updates on the test table. Due to the clustered
-- index we won't get forward pointers now. It's also the reason
-- why the updates run much faster than in the test before. On
-- an internal test system with a 64bit CPU it was about five
-- times faster than with a heap and a "normal" index.
exec run_updates
-- run the test query a few times to make sure that everything
-- is cached in the buffer pool and look at the response time.
-- To verify that there is no disk IO look at the # of physical
-- reads and read-ahead reads under the "Messages" tab. Both values
-- should be '0'. But the number of logical reads should be pretty
-- high - depending on the number of rows in the table - as it has
-- to scan every row.
-- The query plan should show a "Clustered Index Scan" as col1 is
-- not part of the index key. There is no result set as the value
-- in the where clause doesn't exist in the table.
-- IMPORTANT -> the response time and the # of logical reads will
-- look a little bit worse than the fixed-length test and the
-- varchar test without updates. But it should be a huge difference
-- to the forward pointer case.
select max(id) from forwardPointer where
col1 = '1234567890123456'
-- check the forwarded_record_count column
-- it won't show any numbers now because of
-- the usage of a clustered index
SELECT forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'forwardPointer'), 1, NULL , 'DETAILED')
WHERE INDEX_LEVEL = 0;
-- SQL Server 2000
-- look for the column "ForwardedRecords"
dbcc showcontig ( 'forwardPointer' ) with tableresults
Comments
- Anonymous
May 27, 2008
Table A had 2 indexes, neither of which was a clustered index. In a 22 million row table with address1, address2, address3 each varchar(50) and data1, data2 each varchar(100) - all 5 columns initially NULL - it took over 8 hours just to "select * into" another table, TableB, from TableA after all 5 columns had been updated. However, "select into TableC" from TableB took less than 5 minutes. TableA had 7,548,908 forwarded records while TableB had none. Thanks for the article.