SERIALIZABLE isolation level (serialized transactions) and duplicate key insertion attempts

I ran into an interesting scenario with a customer this past week that at first behaved a bit differently than I expected (not that I had ever verified the behavior before, or even considered it, I guess). After spending some time thinking about it, I think I arrived at a reasonable explanation for the behavior...

In this scenario, we're going to be working with 2 concurrent transactions running at the SERIALIZABLE isolation level. If you haven't worked with serialized transactions before, they differ from the other isolation levels primarily in that they introduce RANGE-level locks that aren't used at the other levels of isolation. For example, when using SERIALIZABLE isolation and querying a table with an integer column called 'id' for records between 10 and 20, the read will in effect 'lock' the entire range of keys in that column from 10 thru 20 inclusive and not allow any modification of data in that range. Even when using the REPEATABLE READ isolation level, the entire range of records isn't locked; only the keys that already exist within that range are locked.
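
If you want to see these range locks directly, here's a minimal sketch (assuming SQL Server 2005 or later, where the sys.dm_tran_locks DMV is available) you can run from a separate connection while a serialized range query holds its transaction open:

select request_session_id,
       resource_type,     -- key-range locks show up as 'KEY' resources
       request_mode,      -- 'RangeS-S' = shared key-range lock from the read
       request_status
from sys.dm_tran_locks
where resource_database_id = db_id('tempdb')
  and resource_type = 'KEY';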

For example, assume the table has records with 'id' values of 10, 11, 14, 18, and 20... the query in question would return these 5 records to the client. If running at the REPEATABLE READ level of isolation, you wouldn't be able to modify any of the records with the key values of 10, 11, 14, 18, or 20 (via any write-based operation); however, when running at the SERIALIZABLE level of isolation, not only would you be unable to modify those key values, you also wouldn't be able to modify (again, via any write-based operation) any key value in the RANGE of 10 thru 20 inclusive, including values like 15 that don't exist yet. You ARE still allowed to read those values, however...
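
To make that concrete, here's a minimal sketch of the difference (using a hypothetical dbo.rangeDemo table containing only those 5 key values, not the dbo.t1 table from the demo script below):

-- Setup (hypothetical table)...
create table dbo.rangeDemo (id int not null primary key);
insert dbo.rangeDemo (id) values (10); insert dbo.rangeDemo (id) values (11);
insert dbo.rangeDemo (id) values (14); insert dbo.rangeDemo (id) values (18);
insert dbo.rangeDemo (id) values (20);

-- CONNECTION 1
set transaction isolation level repeatable read; -- or serializable
begin tran;
select * from dbo.rangeDemo where id between 10 and 20;

-- CONNECTION 2
insert dbo.rangeDemo (id) values (15);
-- REPEATABLE READ: succeeds immediately (15 is a brand new 'phantom' key)...
-- SERIALIZABLE: blocks until CONNECTION 1 commits/rolls back...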

Ok, so now that we've gotten through that simple explanation (which is demonstrated in the script below), what about the behavior that surprised me? Well, given that you can read values within a protected serialized range of data from a separate concurrent serialized transaction, I originally expected the following: if from the separate connection you tried to modify the protected range of keys (via an insert in this case, for example) in a manner that would throw a duplicate key error (or some other form of data-integrity related exception), you wouldn't be blocked, but would immediately receive the exception. In reality, the modification statement blocks immediately, and the duplicate key error isn't thrown until the range locks are released via a commit/rollback on the blocking connection.
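
One quick way to convince yourself that the block happens before the duplicate check is to cap the lock wait on the second connection (a sketch using the dbo.t1 demo table from the script below; SET LOCK_TIMEOUT is standard T-SQL):

-- CONNECTION 2 (while CONNECTION 1 holds the serialized range)...
set lock_timeout 2000; -- wait at most 2 seconds for locks
insert dbo.t1 (pkid, colA, fillerCol)
select 3, '3 - dup test', 'blah';
-- Fails with error 1222 (lock request time out period exceeded) rather than
-- a duplicate key error, i.e. the statement is stuck waiting on the
-- key-range lock before it ever reaches the uniqueness check...
set lock_timeout -1; -- back to waiting indefinitely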

I've got a script below to reproduce/demonstrate the behavior, but how about the explanation of the behavior first... well, suppose a transaction performed multiple data modification statements against the protected range, where some of the statements would succeed and some would fail (due to a duplicate key failure, for example). Unless you have the XACT_ABORT option on, a duplicate key error terminates only the failing statement, not the transaction, so the successful modifications are STILL applied (assuming you commit the transaction). In other words, the engine can't assume that a modification attempt which would hit a duplicate key will have no effect; some of the data modifications could still affect the protected range, and hence the attempt shouldn't be allowed to proceed until the range is released.
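
Here's a minimal sketch of that statement-vs-transaction behavior (run the setup portion of the demo script below first; recall that pkid 3 exists in dbo.t1 while 4 and 8 were deliberately skipped):

set xact_abort off; -- a duplicate key error kills the statement, not the xact
begin tran;
insert dbo.t1 (pkid, colA, fillerCol) values (4, '4 - ok', 'blah');  -- succeeds
insert dbo.t1 (pkid, colA, fillerCol) values (3, '3 - dup', 'blah'); -- fails
insert dbo.t1 (pkid, colA, fillerCol) values (8, '8 - ok', 'blah');  -- succeeds
commit tran;
-- pkid 4 and 8 are committed even though the insert of pkid 3 failed...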

Anyhow, if anyone has additional thoughts/experiences, by all means send them along...demo code to repro is below...

Chad Boyd ~~~ This posting is provided "AS IS" with no warranties, and confers no rights. Use of any included script samples is subject to the terms specified at https://www.microsoft.com/info/cpyright.htm.

----------------------------------------------------------------------
------------------ CODE ONLY BELOW ------------------
----------------------------------------------------------------------

use tempdb;
set nocount on;
go

-- Create test objects...
if object_id('dbo.t1') is not null
 drop table dbo.t1;
go
create table dbo.t1 (pkid int not null, colA varchar(100), fillerCol char(50));
go
create unique clustered index ix_t1_c_pkid on dbo.t1 (pkid);
create nonclustered index ix_t1_nc_colA on dbo.t1 (colA);
go

-- Add some data...
with cte as (
 select top 102
  row_number() over (order by a.object_id) as id,
  cast(row_number() over (order by a.object_id) as varchar(10)) + ' - ' + cast(newid() as varchar(50)) as colA,
  'blah' as fillerCol
 from sys.columns a
 cross join sys.columns b
)
insert dbo.t1 (pkid, colA, fillerCol)
select *
from cte
where id not in(4,8) -- Don't insert id's 4 or 8...
go

-- CONNECTION 1
set transaction isolation level serializable;
begin tran;
select *
from dbo.t1 a
where a.pkid between 1 and 10;
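
-- CONNECTION 1 now holds shared key-range (RangeS-S) locks protecting the
-- entire range of pkid 1 thru 10 (including the missing keys 4 and 8), plus
-- the gap up to the next existing key...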

-- CONNECTION 2
set transaction isolation level serializable;
begin tran;
select *
from dbo.t1 a
where a.pkid between 1 and 10;
go

-- Reads still work fine as expected (i.e. no blocking)...

-- Now try writing in CONNECTION 2 (keep original xact open)...
-- CONNECTION 2
update dbo.t1
set  fillerCol = 'blah1'
where pkid = 5;

-- Blocks (as expected)...
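
-- A sketch for confirming what the blocked session is waiting on (assumes
-- the SQL 2005+ DMVs; run from a third connection):
select session_id,
       blocking_session_id, -- should point at CONNECTION 1's session
       wait_type,           -- e.g. a LCK_M_* lock wait for the blocked write
       wait_resource        -- the KEY resource covered by the range lock
from sys.dm_exec_requests
where blocking_session_id <> 0;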

-- Stop the update above and instead try the following in CONNECTION 2:
-- CONNECTION 2
update dbo.t1
set  fillerCol = 'blah1'
where pkid = 15;

-- Works as expected...not within the protected range...
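
-- Note (an aside worth verifying with sys.dm_tran_locks): the protected
-- range actually extends past 10 to the NEXT existing key (pkid 11 here),
-- since that key's range lock guards the gap above 10, so an update of
-- pkid = 11 would also block even though it's outside the BETWEEN predicate...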

-- Try inserting data with a duplicate key...
-- CONNECTION 2...
insert dbo.t1 (pkid, colA, fillerCol)
select 3, '3 - test_repeatableReadInsert', 'blah3';

-- Blocks, despite the fact that it would fail with a duplicate key error if it weren't blocked...
-- This is the same blocking behavior you'd see if the pkid value weren't a duplicate...
-- Commit the transaction on CONNECTION 1 and watch CONNECTION 2 throw a duplicate key failure...
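
-- Expected outcome on CONNECTION 2 once CONNECTION 1 commits (error 2601 is
-- what's raised for a unique INDEX violation; exact message text varies a
-- bit by version):
-- Msg 2601, Level 14, State 1
-- Cannot insert duplicate key row in object 'dbo.t1' with unique index 'ix_t1_c_pkid'.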

-- CONNECTION 1
commit transaction;

-- Once done, cleanup...
-- BOTH CONNECTIONS
while @@trancount > 0
 rollback tran
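
-- Optionally drop the test table as well...
if object_id('dbo.t1') is not null
 drop table dbo.t1;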