Purging Data

This post is about deleting a lot of rows, as you might do in a data archiving or purging task. By “a lot of rows”, I mean anything from a few thousand rows up to billions. This may seem elementary, but there are some surprising problems that can appear if you use a simple DELETE to delete a large volume of data. Below are some best practices to consider. (Note that some, but not all, of these suggestions also apply to large UPDATE or INSERT operations.)

TRUNCATE When You Can

The first, and simplest, best practice is to use TRUNCATE instead of DELETE when possible. TRUNCATE will only log extent or page deallocation operations, while a DELETE is a row-by-row operation and must log the pre-delete image of every deleted row. TRUNCATE runs nearly instantaneously, even on very large tables that might take hours or days to clear out with a DELETE statement. But of course TRUNCATE can only be used when you need to delete all of the data in a table.
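As a minimal illustration (the staging table name below is hypothetical), both statements remove every row, but the TRUNCATE finishes almost immediately because it only logs the deallocations:

    -- Removes every row; logs only the extent/page deallocations, so it is fast.
    TRUNCATE TABLE big_table_staging;

    -- Removes every row as well, but logs each row and can run for hours:
    -- DELETE FROM big_table_staging;

Keep in mind that TRUNCATE cannot be used on a table that is referenced by a foreign key, and it requires at least ALTER permission on the table.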

Consider Partitioning

If you have Enterprise or Datacenter Edition and are running SQL Server 2005 or later, you may be able to partition your data. With an appropriately-selected partitioning scheme, you can simply switch out the old data. The cost of this is generally similar to a TRUNCATE or DROP TABLE – in other words, nearly instantaneous. You can learn more about SQL Server partitioning in this whitepaper by Kim Tripp. 
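As a rough sketch of what "switching out" looks like (the table name, partition number, and staging table below are hypothetical), you switch the old partition into an empty table with an identical structure and then discard or archive it:

    -- big_table_switch_target must have the same columns, indexes, and
    -- filegroup placement as the partition being switched out.
    ALTER TABLE big_table
        SWITCH PARTITION 1 TO big_table_switch_target;

    -- The switch is a metadata-only operation, so it completes almost
    -- instantly; the old rows can now be truncated, dropped, or archived.
    TRUNCATE TABLE big_table_switch_target;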

Delete in Chunks

Most of the time you only want to delete a subset of the rows in a table. If you can’t use partitioning, you must use DELETE. Suppose that you needed to delete a lot of data with a straightforward DELETE statement like this one: 

    DELETE FROM big_table
    WHERE ready_to_archive = 1;

This could cause two problems. The first is that the DELETE will acquire exclusive locks on all deleted rows, and those locks will be held until the DELETE completes. If the DELETE runs for hours, other users trying to query or modify the table may be blocked for hours. And if the statement acquires more than a few thousand locks (roughly 5,000 is the escalation threshold), SQL may escalate to an exclusive (X) table lock that prevents anyone else from modifying any row in the table, even rows that will not be deleted.
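If you want to verify whether escalation has actually occurred while a large DELETE is running, one option (a sketch; run from a separate session, with big_table standing in for your table) is to look for an object-level exclusive lock in sys.dm_tran_locks:

    -- Run from another session while the DELETE is executing.
    -- An OBJECT-level row with request_mode = 'X' means the locks have been
    -- escalated to a table lock.
    SELECT request_session_id,
           resource_type,
           request_mode,
           request_status
    FROM sys.dm_tran_locks
    WHERE resource_type = 'OBJECT'
      AND resource_database_id = DB_ID()
      AND resource_associated_entity_id = OBJECT_ID('big_table');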

The second issue that you might see is transaction log growth. If your database uses the simple recovery model, under normal circumstances a relatively small .LDF file will be sufficient because the space in the log file is frequently reused. But SQL can only reuse log space that precedes the record of the oldest uncommitted transaction, so a long-running DELETE may cause your .LDF file(s) to grow. If the file grows too large, or if you have disabled autogrow, you will eventually run out of log space, bringing all activity in the database to a halt.
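To keep an eye on this while a purge runs, a quick check (a sketch using standard commands) shows how full each log is and why the current database's log cannot be truncated; a long-running DELETE typically shows up as ACTIVE_TRANSACTION:

    -- Size and percent-used of every database's transaction log.
    DBCC SQLPERF (LOGSPACE);

    -- Reason the current database's log space cannot be reused right now.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = DB_NAME();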

You can avoid both of these problems by deleting the rows in chunks, like this:

    DECLARE @rows_affected BIGINT;
    DECLARE @delete_batch_size INT;
    SET @delete_batch_size = 1000;
    SET @rows_affected = @delete_batch_size;

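    -- Loop until a batch deletes fewer rows than the batch size,
    -- which means there are no qualifying rows left.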
    WHILE (@rows_affected = @delete_batch_size)
    BEGIN
        DELETE TOP (@delete_batch_size) FROM big_table
        WHERE ready_to_archive = 1;
        SET @rows_affected = @@ROWCOUNT;
    END;

This will delete 1000 rows at a time, continuing until there are no more rows to delete. Each batch of 1000 rows is deleted in its own transaction, so the overall purge can run for a long time without holding any individual lock for long or causing transaction log growth. You can experiment with different batch sizes, but in my experience a batch size above 1000 rows doesn't improve performance much.

If you want to avoid transaction log growth but don’t care about reduced concurrency (for example, if the DELETE is running during a maintenance window when no one else will need to query the table), you may see a small performance boost by deleting in chunks but forcing a table lock on each DELETE. The cost of lock management and other housekeeping tasks is reduced when SQL doesn’t have to worry about concurrency. A TABLOCK hint also allows SQL to reclaim empty pages that would otherwise remain allocated to the table. 

    ...
    WHILE (@rows_affected = @delete_batch_size)
    BEGIN
        DELETE TOP (@delete_batch_size) FROM big_table WITH (TABLOCK)
        WHERE ready_to_archive = 1;
        SET @rows_affected = @@ROWCOUNT;
    END;

On the other hand, if you don’t care about transaction log growth but you do want to prevent an unnecessary table lock escalation, you can use trace flag 1224:  

    DBCC TRACEON (1224);

    DELETE FROM big_table
    WHERE ready_to_archive = 1;

    DBCC TRACEOFF (1224);

Trace flag 1224 is documented and supported. It prevents SQL from escalating to table locks based on lock count; escalation can still occur if the server is running low on lock memory. This can also be a good compromise if you want to maximize concurrency but require that the entire delete operation run as a single transaction.

Consider Indexes and Foreign Keys

If your DELETE statement is running slower than you expect, it might make sense to drop indexes on the table and/or any foreign keys referring to the table, do the DELETE, then recreate the objects you dropped. Indexes on a table can dramatically slow down a large DELETE, even to the point where it is cheaper to rebuild the index from scratch after the DELETE than it would be to maintain the index in place. (Note that sometimes you will clearly see the extra index maintenance operators in the query plan, and other times the plan won't show this work. But just because you don't see any Index Delete operators in the plan doesn't mean that the DELETE's performance isn't affected by index maintenance overhead. For details, see my earlier post on Wide vs. Narrow Plans.) Foreign keys on other tables that refer to the table that you're deleting from can also slow down the DELETE, because SQL must verify that you're not deleting any rows that are referenced by rows in the child table.
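A sketch of the drop-and-recreate pattern is below. The index, constraint, and column names are hypothetical, and in practice you would script out the real definitions first so you can recreate them exactly as they were:

    -- Drop a referencing foreign key and a nonclustered index before the purge.
    ALTER TABLE child_table DROP CONSTRAINT FK_child_table_big_table;
    DROP INDEX IX_big_table_some_column ON big_table;

    DELETE FROM big_table
    WHERE ready_to_archive = 1;

    -- Recreate the dropped objects afterwards; WITH CHECK revalidates the
    -- foreign key so the optimizer continues to trust it.
    CREATE INDEX IX_big_table_some_column ON big_table (some_column);
    ALTER TABLE child_table WITH CHECK
        ADD CONSTRAINT FK_child_table_big_table
        FOREIGN KEY (big_table_id) REFERENCES big_table (big_table_id);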

It’s difficult to generalize about when it might be cost effective to drop indexes or FKs. The effect on the overall runtime of the delete job depends on how many indexes are on the table, the size of the indexes, whether any FK columns in other tables are indexed, the portion of the table you’re deleting, whether there is correlation between nonclustered and clustered index key values, etc. Personally, I’d consider testing out the approach and quantifying the benefits when all of the following are true:

  1. Your DELETE query runs slower than you would like
  2. Temporarily dropping indexes or FKs won’t hurt anything (for example, if the DELETE runs in a maintenance window when no one else is using the database)
  3. You are deleting a fairly large % of the rows in the table (rule of thumb: > 10% of the rows)

Comments

  • Anonymous
    June 03, 2010
    Great post (just added your RSS feed to my Google Reader's Elite category). I definitely have to look into the QUERYTRACEON 1224. I just posted something on michaeljswart.com that touches on a lot of the methods you mention here, specifically the batching technique you use and the struggle for maximized concurrency. Also, a previous post I wrote at michaeljswart.com mentions indexing for foreign keys (wrt deletes). Cheers,
  • Anonymous
    June 19, 2010
    Hi, I do see a Connect item noting that QUERYTRACEON is undocumented. Are there any plans to document it any time soon?
  • Anonymous
    June 20, 2010
    @Mohammedu - Thank you for pointing this out.  I'm not aware of any plans to document QUERYTRACEON.  (I've changed the example to use DBCC TRACEON.)
  • Anonymous
    July 12, 2011
    It's a nice one, I like all the suggestions!