Maintaining Unique Indexes with IGNORE_DUP_KEY

A few months ago, I wrote a post describing how SQL Server maintains unique indexes while avoiding false uniqueness violations.  In this post, I'm going to look at how SQL Server maintains unique indexes that were created with the WITH IGNORE_DUP_KEY clause.  Normally, if we attempt to insert a duplicate key into a unique index, SQL Server fails the entire statement, rolls back any changes made by that statement, and returns an error.  However, if we attempt to insert a duplicate key into a unique index that was created with the WITH IGNORE_DUP_KEY clause, SQL Server merely discards the "bad" row, generates a warning message, and continues executing the remainder of the statement.  Note that SQL Server only generates the warning for actual insert statements.  Update statements that try to create a duplicate key still fail with an error.
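
For example, a minimal repro of this behavior might look like the following (the table and index names here are illustrative and are not used in the examples below):

CREATE TABLE T_DEMO (A INT)
CREATE UNIQUE CLUSTERED INDEX T_DEMO_A ON T_DEMO(A) WITH IGNORE_DUP_KEY

INSERT T_DEMO VALUES (1)
INSERT T_DEMO VALUES (1)    -- "Duplicate key was ignored."; the row is discarded

INSERT T_DEMO VALUES (2)
UPDATE T_DEMO SET A = 1 WHERE A = 2    -- fails with a duplicate key error; updates are not covered by IGNORE_DUP_KEY

DROP TABLE T_DEMO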

Let's begin with a trivial insert into a unique clustered index:

CREATE TABLE T_SRC (A INT, B INT)

CREATE TABLE T_DST (A INT, B INT)
CREATE UNIQUE CLUSTERED INDEX T_DST_A ON T_DST(A) WITH IGNORE_DUP_KEY

INSERT T_DST SELECT * FROM T_SRC

This insert yields the following rather mundane query plan:

  |--Clustered Index Insert(OBJECT:([T_DST].[T_DST_A]), SET:([T_DST].[A] = [T_SRC].[A],[T_DST].[B] = [T_SRC].[B]))
       |--Top(ROWCOUNT est 0)
            |--Table Scan(OBJECT:([T_SRC]))

If you were expecting something more exciting, I'm sorry for the disappointment.  So what's going on here?  SQL Server simply tries to insert each row.  If any insertion fails due to a duplicate key, SQL Server just converts the error into a warning, discards the failed row, and continues.  Since we only have the one clustered index and no non-clustered indexes to maintain, SQL Server either inserts or does not insert each row into the single clustered index.  Either way, the integrity of the table is preserved.
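
To see the warning in action, we could load T_SRC with a couple of rows that collide on column A and rerun the insert.  The sample values below are mine; the TRUNCATE statements at the end put both tables back to the empty state that the remaining examples assume:

INSERT T_SRC VALUES (1, 10)
INSERT T_SRC VALUES (1, 11)    -- collides with the previous row on A
INSERT T_SRC VALUES (2, 20)

INSERT T_DST SELECT * FROM T_SRC
-- "Duplicate key was ignored."; one of the two A = 1 rows is discarded

SELECT * FROM T_DST    -- two rows: one of the A = 1 rows (which one is not guaranteed) and the A = 2 row

-- Reset both tables so the examples that follow start from empty tables again
TRUNCATE TABLE T_SRC
TRUNCATE TABLE T_DST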

Now let's create a non-clustered index and see what happens:

CREATE UNIQUE INDEX T_DST_B ON T_DST(B)

INSERT T_DST SELECT * FROM T_SRC

As it turns out, aside from adding the extra index to the clustered index insert, nothing changes:

  |--Clustered Index Insert(OBJECT:([T_DST].[T_DST_A]), OBJECT:([T_DST].[T_DST_B]), SET:([T_DST].[A] = [T_SRC].[A],[T_DST].[B] = [T_SRC].[B]))
       |--Top(ROWCOUNT est 0)
            |--Table Scan(OBJECT:([T_SRC]))

Even with two indexes (one clustered and one non-clustered) to maintain, SQL Server continues to use the same strategy of inserting rows and waiting to see whether any of the inserts fail due to duplicate keys.  This strategy works because SQL Server always maintains the clustered index first.  If it successfully inserts the row into the clustered index, it can safely proceed to insert the row into the non-clustered index.  If it cannot insert the row into the clustered index, it can discard the row without any risk of corrupting the indexes.  Once again, either way, the integrity of the table is preserved.
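
To see the difference between the two indexes, consider a source set that is unique on A but not on B (again, illustrative data followed by a reset):

INSERT T_SRC VALUES (1, 10)
INSERT T_SRC VALUES (2, 10)    -- unique on A, but collides with the previous row on B

INSERT T_DST SELECT * FROM T_SRC
-- Fails with a duplicate key error on T_DST_B and inserts nothing:
-- only the clustered index has IGNORE_DUP_KEY, so a collision on B is
-- still a hard error that rolls back the entire statement.

TRUNCATE TABLE T_SRC
TRUNCATE TABLE T_DST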

It turns out that this strategy works perfectly so long as only the clustered index was created with the WITH IGNORE_DUP_KEY clause.  However, if we have even one non-clustered index that was created with the WITH IGNORE_DUP_KEY clause, this strategy no longer works.  Imagine what would happen if SQL Server were to insert a row into the clustered index only to encounter a duplicate key violation when trying to insert the same row into the non-clustered index.  At this point, SQL Server could not simply issue a warning and discard the row as this would leave the clustered and non-clustered indexes inconsistent.  Let's create a non-clustered index with the WITH IGNORE_DUP_KEY clause and see how SQL Server handles this case:

DROP INDEX T_DST.T_DST_B
CREATE UNIQUE INDEX T_DST_B ON T_DST(B) WITH IGNORE_DUP_KEY

INSERT T_DST SELECT * FROM T_SRC

Finally, an interesting query plan:

  |--Clustered Index Insert(OBJECT:([T_DST].[T_DST_A]), OBJECT:([T_DST].[T_DST_B]), SET:([T_DST].[A] = [T_SRC].[A],[T_DST].[B] = [T_SRC].[B]))
       |--Top(TOP EXPRESSION:((1)))
            |--Segment
                 |--Sort(ORDER BY:([T_SRC].[B] ASC))
                      |--Assert(WHERE:(CASE WHEN [Expr1012] IS NOT NULL THEN (0) ELSE NULL END))
                           |--Nested Loops(Left Semi Join, OUTER REFERENCES:([T_SRC].[B]), DEFINE:([Expr1012] = [PROBE VALUE]))
                                |--Top(ROWCOUNT est 0)
                                |    |--Table Scan(OBJECT:([T_SRC]))
                                |--Index Seek(OBJECT:([T_DST].[T_DST_B]), SEEK:([T_DST].[B]=[T_SRC].[B]) ORDERED FORWARD)

There are two parts of this plan that handle duplicate keys.  The first part includes the left semi-join, index seek, and assert operators.  The second part includes the sort, segment, and top operators.

For each row that we intend to insert, the left semi-join, index seek, and assert operators determine whether a matching row with the same key already exists in the non-clustered index.  If they find a matching row, they issue the duplicate key warning, discard the row, and continue.  Let's explore exactly how these operators accomplish this task.  First, the table scan returns a row to insert.  The semi-join uses the index seek on the non-clustered index to determine whether a row with the same key already exists in the index.  Recall that a left semi-join ordinarily returns rows from the left (or first) input that join with at least one row from the right (or second) input.  This semi-join is special.  It returns all rows, whether or not they join, and outputs a PROBE column that indicates whether or not each row in fact joined.  (For two other examples of a semi-join with a probe column, see my subquery posts here and here.)  Finally, the assert operator checks the value of the PROBE column.  If the index seek does not return a matching row, the new row is not a duplicate and the assert operator returns it normally.  If the index seek does return a matching row, the assert operator issues the warning and discards the row.
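
In other words, the semi-join, index seek, and assert together act much like a filter that removes source rows whose key already exists in the target index.  A rough logical equivalent, though not what the engine literally executes, would be:

SELECT S.A, S.B
FROM T_SRC AS S
WHERE NOT EXISTS (SELECT * FROM T_DST AS D WHERE D.B = S.B)
-- Any source row whose B already exists in T_DST_B is the kind of row the
-- assert operator reports with the duplicate key warning and discards.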

The sort, segment, and top operators check for duplicates in the stream of input rows.  If they find a duplicate input row, they issue the duplicate key warning, discard the duplicate row, and continue.  Again, let's explore exactly how these operators execute this task.  The sort simply ensures that any duplicate input rows are clustered one after the other in the input stream.  The segment operator compares adjacent rows checking for duplicates.  It performs two functions in this plan.  First, if it finds a duplicate row, it issues the warning.  Second, it "flags" groups of duplicate rows.  Unlike the assert operator, the segment operator does not discard any rows.  Instead, the top operator discards the duplicates.  The top operator in this plan is a special type of top known as a segment top.   A normal top returns the first N rows and then stops.  A segment top returns the first N rows from each group of duplicate rows flagged by the segment operator.  In this case, the segment top returns only the first row from each group and, thus, eliminates the duplicates.
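
The net effect of the sort, segment, and segment top on the input stream is roughly that of keeping one arbitrary row per distinct value of B.  A logical sketch of the same idea (again, not the plan the engine actually produces) is:

WITH NumberedSrc AS
(
    SELECT A, B,
           ROW_NUMBER() OVER (PARTITION BY B ORDER BY B) AS RowNum    -- group the input rows by B
    FROM T_SRC
)
SELECT A, B
FROM NumberedSrc
WHERE RowNum = 1    -- keep only the first row of each group; the rest trigger the warning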

Any rows that are not discarded by the time they reach the insert operator must be unique and can safely be inserted without triggering a uniqueness violation.

If we create multiple non-clustered indexes with the WITH IGNORE_DUP_KEY clause, one set of these extra operators is inserted for each such index.  Note also that the plan needs these additional operators only for non-clustered indexes created with the WITH IGNORE_DUP_KEY clause.  The clustered index is handled as described at the beginning of this post.

Finally, I want to comment briefly on the performance implications of the WITH IGNORE_DUP_KEY clause.  On a clustered index, there is no plan change and the performance implications are negligible.  Unfortunately, on a non-clustered index, all of the additional operators discussed above are costly and do result in a measurable performance reduction.
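
One rough way to observe both the plan change and the timing difference on your own data is to compare the two index definitions side by side.  This sketch assumes T_SRC has already been loaded with a reasonably large set of rows that are unique on both columns:

SET STATISTICS TIME ON
SET STATISTICS PROFILE ON    -- show each statement's plan operators along with the results

TRUNCATE TABLE T_DST
DROP INDEX T_DST.T_DST_B
CREATE UNIQUE INDEX T_DST_B ON T_DST(B)                        -- plain unique index
INSERT T_DST SELECT * FROM T_SRC                               -- simple insert plan

TRUNCATE TABLE T_DST
DROP INDEX T_DST.T_DST_B
CREATE UNIQUE INDEX T_DST_B ON T_DST(B) WITH IGNORE_DUP_KEY    -- IGNORE_DUP_KEY variant
INSERT T_DST SELECT * FROM T_SRC                               -- extra sort, segment, top, and semi-join appear

SET STATISTICS PROFILE OFF
SET STATISTICS TIME OFF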

Comments

  • Anonymous
    February 05, 2008
    I tried to understand why my little simple algorithm fails and read your blog post.  This is the code:

    CREATE TABLE #Sample
        (
            SendID TINYINT,
            RecvID TINYINT
        )

    CREATE UNIQUE NONCLUSTERED INDEX IX_SendID ON #Sample (SendID) WITH (IGNORE_DUP_KEY = ON)
    CREATE UNIQUE NONCLUSTERED INDEX IX_RecvID ON #Sample (RecvID) WITH (IGNORE_DUP_KEY = ON)

    INSERT #Sample
        (
            SendID,
            RecvID
        )
    SELECT v1.Number,
           v2.Number
    FROM master..spt_values AS v1
    INNER JOIN master..spt_values AS v2 ON v2.Type = 'p'
    WHERE v1.Type = 'p'
        AND v1.Number BETWEEN 1 AND 3
        AND v2.Number BETWEEN 1 AND 3
        AND v1.Number <> v2.Number
    ORDER BY NEWID()

    SELECT SendID,
           RecvID
    FROM #Sample

    And this is a result I often get:

    SendID RecvID
    1      2
    2      3

    The mystery is why I also don't get SendID 3 and RecvID 1...
  • Anonymous
    February 06, 2008
    The result depends on the order in which the rows are considered for insertion.  Since this example has two unique indexes, the server independently checks for and eliminates duplicates from both unique columns.  Since the target table starts out empty, the only duplicates it finds are those in the initial input set.  Recall that to find duplicates in the input set, the server sorts the rows and selects only the first row from each group of rows with the same key.

    Suppose that after the sort on the first column, the input set has the following order:

    (1 2) (1 3) (2 3) (2 1) (3 1) (3 2)

    The server discards every other row leaving:

    (1 2) (2 3) (3 1)

    Now the server sorts on the second column:

    (3 1) (1 2) (2 3)

    And since there are no duplicates on the second column, it inserts all three rows.

    Now suppose that after the sort on the first column, the last two rows are reversed:

    (1 2) (1 3) (2 3) (2 1) (3 2) (3 1)

    This is still a valid sort order, but the server discards a different row leaving this set instead:

    (1 2) (2 3) (3 2)

    Again, the server sorts on the second column:

    (1 2) (3 2) (2 3)

    This time there is a duplicate on the second column, so the server discards it and inserts only these two rows:

    (1 2) (2 3)

    You can try out different input set orders manually by using the following syntax:

    INSERT #Sample SELECT 1, 2
       UNION ALL SELECT 1, 3
       UNION ALL SELECT 2, 3
       UNION ALL SELECT 2, 1
       UNION ALL SELECT 3, 2
       UNION ALL SELECT 3, 1
  • Anonymous
    February 06, 2008
    It seems that the query engine randomly selects which unique index to begin with.  Fair enough.  I have now made a blog post (http://weblogs.sqlteam.com/peterl/archive/2008/02/06/Curiosity-found.aspx) about this mystery where I have added more code to demonstrate the original conundrum.  For me the correlation between RowID and z is as expected, but I can't understand why RowID 1 is missing.
  • Anonymous
    May 09, 2015
    great post :-)