SQL 2005: Making sense of the new Optimistic Concurrency Controls and the readers that block writers

For a long time now Oracle enthusiasts have bashed SQL Server for not having Optimistic Concurrency Controls.  They would demonstrate a SQL Server reader blocking a writer, and possibly a writer blocking a reader.  Having appeared to demonstrate that SQL Server can't handle these two operations concurrently, they would conclude that it can't scale - certainly not to enterprise-level workloads, where lots of people are writing to a database at the same time as lots of people want to read from it.

 

This, of course, is a rather dirty trick; you only have to look at https://www.microsoft.com/sql/evaluation/compare/benchmarks.mspx to see how SQL Server 2000 performs against various typical 3rd party LOB application workloads that involve both reading and writing.  So how do they perform these mischievous tricks?  Here's how:

 

Open connection 1

 

USE AdventureWorks
GO
BEGIN TRANSACTION
            UPDATE Production.Product
            SET StandardCost = 5.0,
                        ListPrice = 11.50
            WHERE Name LIKE '%sock%'

 

Open connection 2

 

USE AdventureWorks
GO
            SELECT  Name,
                        StandardCost,
                        ListPrice
            FROM Production.Product
            WHERE Name LIKE '%Sock%'

 

You will now see that connection 2 does not return anything - instead it just sits there doing nothing.  Here the writer on connection 1 blocks the reader on connection 2.  This blocking will continue until connection 1 either commits or rolls back the transaction it started with BEGIN TRANSACTION.
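Incidentally, the moment you go back to connection 1 and end the transaction, connection 2's SELECT completes.  A minimal way to tidy the demo up (run on connection 1):

            -- Back on connection 1: ending the transaction releases the locks, and the
            -- blocked SELECT on connection 2 returns immediately.
            ROLLBACK TRANSACTION   -- or COMMIT TRANSACTION to keep the price change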

 

Well, it would be very easy, at this stage, to get into a religious slanging match - and you know I don't do religion very easily.  Luckily, with SQL 2005, I don't even have to get religious about this situation, because SQL Server 2005 now supports the isolation levels needed to stop it happening.

 

At this point I'm reminded of an old joke told by Tommy Cooper:  Bloke goes to see a Doctor, he lifts his arm in the air and says, "Doctor, every time I do this it hurts."  And the Doctor says, "Well don't do it then."

 

I'm reminded of this joke because if you begin a transaction, update a value and then find it causes problems for other people - I would suggest you don't do it.  Why begin a transaction, do some work and then walk off without committing it?

 

Okay - so you might have a 'really big update' that takes ages to run and can't be batched up into smaller transactions, and you need to run some very long running reports that have to have a transactionally consistent view of the data.  Indeed; you might also be very unlucky and get struck by lightning!

 

In SQL 2000 there are various ways to obviate the encumbrance of this 'really big update' and nasty report - but I don't want to go down that road; like I said, I want to avoid a religious debate.  Now, in SQL 2005, we have exactly the same capability as Oracle to make such an operation very easy for the developer.  Like Oracle, this capability employs the concept of 'versioning', which means each connection in the above scenario works with its own version of the data, thus avoiding contention.

 

However, as with Oracle, versioning consumes both processing and I/O at the server - resources that are potentially very expensive and not to be wasted if you don't have to.  Does Oracle run any benchmarks with this feature turned on?  I don't think so - and nor would SQL Server, for that matter.

 

I will argue that versioning is frequently used as a way to make developers' jobs easier, not because it's actually necessary.  By that I also mean it makes it easier for developers to write sloppy code - and from that you can happily infer I believe it makes it easier to employ cheaper developers.

 

So how do we go about employing these cheaper developers, sorry, reducing contention on the database?  To do this we now have two new isolation levels:

 

1. SNAPSHOT ISOLATION (Transaction-level Snapshot)

 

Administrators must set the new ALLOW_SNAPSHOT_ISOLATION database option to allow Snapshot Isolation.

 

In order to start a transaction that uses Snapshot Isolation, a developer must SET TRANSACTION ISOLATION LEVEL SNAPSHOT - there's a sketch of this below.

 

RESULT: Every statement within a Snapshot Isolation transaction sees the same version of the data, comprised only of committed changes which occurred before the start of the transaction.  Other statements inside other transactions do not see the changes made inside this Snapshot Isolation transaction.
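To see what that means against the earlier demo, here's a minimal sketch - assuming the AdventureWorks sample database and that connection 1 still has its uncommitted UPDATE open:

            -- One-off setup, run by an administrator:
            ALTER DATABASE AdventureWorks
            SET ALLOW_SNAPSHOT_ISOLATION ON
            GO

            -- Connection 2: the reader now runs under Snapshot Isolation and, instead of
            -- blocking behind connection 1's open UPDATE, returns the last committed
            -- StandardCost and ListPrice values.
            USE AdventureWorks
            GO
            SET TRANSACTION ISOLATION LEVEL SNAPSHOT
            BEGIN TRANSACTION
                        SELECT  Name,
                                    StandardCost,
                                    ListPrice
                        FROM Production.Product
                        WHERE Name LIKE '%Sock%'
                        -- any further statements in this transaction see the same version of the data
            COMMIT TRANSACTION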

 

2. READ COMMITTED WITH SNAPSHOT ISOLATION (Statement-level Snapshot)

 

Administrators must set the new READ_COMMITTED_SNAPSHOT database option to allow Read Committed Snapshot Isolation.

 

No further application-level changes are required; statements running at the default READ COMMITTED isolation level automatically pick up the new behaviour - there's a sketch of this below.

 

RESULT: Each statement sees a version of the data that was committed just before the statement began, rather than whatever happens to be committed at the moment each row is read.  This is merely a new implementation of read committed that is non-locking and non-blocking; the data is accurate only as at the start of the statement.
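Again, a minimal sketch against the earlier demo - note that switching READ_COMMITTED_SNAPSHOT on requires that there are no other active connections to the database at the time:

            -- One-off setup, run by an administrator while no one else is connected
            -- to the database:
            ALTER DATABASE AdventureWorks
            SET READ_COMMITTED_SNAPSHOT ON
            GO

            -- Connection 2: exactly the same statement as the original demo - no code
            -- changes at all.  Running at the default READ COMMITTED level, it now reads
            -- the last committed version of each row instead of waiting on connection 1's locks.
            USE AdventureWorks
            GO
            SELECT  Name,
                        StandardCost,
                        ListPrice
            FROM Production.Product
            WHERE Name LIKE '%Sock%'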