SQL Read Committed Snapshot Isolation and Dynamics CRM

Hello everyone,

Today, I would like to talk about something that has been generating a lot of buzz lately; just yesterday I was reading some interesting material on it. Let's begin with concurrency effects in SQL Server. Say a lot of users access the same set of data on a regular basis; if concurrency is not managed, those users could face consequences such as:

Lost updates – occurs when two or more transactions select the same row and then update the row on the basis of the value originally selected. Each transaction is unaware of the other transactions, so the last update overwrites updates made by the other transactions and data is lost.

Uncommitted dependency (dirty read) – occurs when a second transaction selects a row that is being updated by another transaction. The second transaction is reading data that has not been committed yet and may be changed by the transaction updating the row (a small two-session sketch of this appears after the list).

Inconsistent analysis (nonrepeatable read) – occurs when a second transaction accesses the same row several times and reads different data each time.

Phantom reads – occurs when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction.

Missing and double reads – occurs when a transaction misses an updated row or sees an updated row multiple times, or misses one or more rows that were not the target of the update.

Extract from: https://msdn.microsoft.com/en-us/library/ms190805.aspx 
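To make the dirty-read case above concrete, here is a small two-session sketch. The table dbo.Account, the AccountId value, and the CreditLimit column are purely illustrative placeholders, not actual CRM schema.

-- Session 1: modify a row but do not commit yet (illustrative table and values).
BEGIN TRANSACTION;
UPDATE dbo.Account SET CreditLimit = 100000 WHERE AccountId = 42;
-- ...leave the transaction open...

-- Session 2: under READ UNCOMMITTED the uncommitted value is visible (a dirty read).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT CreditLimit FROM dbo.Account WHERE AccountId = 42;   -- returns 100000

-- Session 1: roll back; session 2 has read a value that was never committed.
ROLLBACK TRANSACTION;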

READ COMMITTED snapshot isolation (RCSI) can be used to minimize locking contention while still protecting transactions from "dirty" reads of uncommitted data modifications. As under regular read committed, data can still be modified by other transactions between individual statements within the current transaction, which may result in non-repeatable reads or phantom rows. In simple terms, READ COMMITTED snapshot isolation still guarantees that statements cannot read data that has been modified but not committed by other transactions, but it satisfies reads from row versions instead of shared locks, so readers do not block writers.
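For reference, here is a minimal sketch of how RCSI is typically enabled and verified for a database. The database name Contoso_MSCRM is only an illustrative placeholder for your organization database; plan such a change with your DBA in a maintenance window.

-- Enabling READ_COMMITTED_SNAPSHOT needs (near-)exclusive access to the database;
-- WITH ROLLBACK IMMEDIATE rolls back other open transactions, so schedule this carefully.
ALTER DATABASE [Contoso_MSCRM]
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Verify the setting.
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'Contoso_MSCRM';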

These days, production databases are quickly growing in size, and data-retention periods are increasing with changing business and regulatory requirements. Additionally, with drive capacities doubling and storage costs falling, the amount of data users want to keep "online" keeps increasing. In this scenario we could think of separating transactional data from analytical workloads, but that would be somewhat expensive and still leaves the question: should we…? Remember that, unless told otherwise, SQL Server runs under the read committed isolation level and takes shared locks while reading; an active SELECT statement releases each shared lock as soon as the resource has been read, but readers and writers can still block one another while those locks are held. The sketch below shows what this looks like in practice.
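Here is a simple two-session sketch of what readers experience before and after RCSI is enabled; again, dbo.Account and the values are illustrative placeholders.

-- Session 1: hold an exclusive row lock by updating without committing.
BEGIN TRANSACTION;
UPDATE dbo.Account SET Name = 'Contoso (updated)' WHERE AccountId = 42;

-- Session 2, default locking READ COMMITTED:
SELECT Name FROM dbo.Account WHERE AccountId = 42;   -- blocks until session 1 commits or rolls back

-- Session 2, after READ_COMMITTED_SNAPSHOT is ON for the database:
SELECT Name FROM dbo.Account WHERE AccountId = 42;   -- returns the last committed value from the version store, no blocking

-- Session 1: release the lock.
COMMIT TRANSACTION;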

Let me also share some information on the isolation levels available in SQL Server, in this case SQL Server 2012; a short sketch of requesting these levels per session follows the list:

Isolation level – best suited for an application when:

Read uncommitted – The application does not require absolute accuracy of data (and could get a larger or smaller number than the final value) and wants performance of OLTP operations above all else. There is no version store to maintain, and no locks are acquired or honored when reading data. Queries at this isolation level may see uncommitted changes, so data accuracy is not guaranteed.

Read committed – The application does not require point-in-time consistency for long-running aggregations or long-running queries, yet wants data values that are read to be only transactionally consistent. The application does not want the overhead of the version store when reading data and accepts the trade-off of potential incorrectness for long-running queries because of non-repeatable reads. This isolation level is ideally suited to transactions that rely upon the blocking behavior of locks to implement queuing applications or other ordered access to data.

Read committed using row versioning – The application requires absolute point-in-time consistency for long-running aggregations or queries. All data values must be transactionally consistent at the point in time where the query begins. The database administrator chooses the overhead of the version store for the benefit of increased throughput because of reduced lock contention. Additionally, the application wants transactional consistency for large queries, not transactions.

Repeatable read – The application requires absolute accuracy for long-running multi-statement transactions and must hold all requested data from other modifications until the transaction completes. The application requires consistency for all data that is read repeatedly within this transaction and requires that no other modifications be allowed. This can affect concurrency in a multiuser system if other transactions are trying to update data that has been locked by the reader. This is best when the application is relying on consistent data and plans to modify it later within the same transaction.

Snapshot – The application requires absolute accuracy for long-running multi-statement transactions but does not plan to modify the data. The application requires consistency for all data that is read repeatedly within this transaction but plans only to read data. Shared locks are not necessary to prevent modifications by other transactions because the changes will not be seen until after the data modification transactions commit or roll back and the snapshot transaction finishes. Data can be modified within this transaction level at the risk of conflicts with transactions that have updated the same data after the snapshot transaction started. This conflict must be handled by each updating transaction. A system that has multiple readers and a single writer (such as the replicated reporting system described in the scenario section of the whitepaper cited below) will not encounter conflicts.

Serializable – The application requires absolute accuracy for long-running multi-statement transactions and must hold all requested data from other modifications until the transaction finishes. Additionally, the transactions request sets of data and not just singleton rows. Each set must produce the same output at each request within the transaction; other users cannot modify the data that has been read and cannot insert new rows into the set. This is best when the application is relying on consistent data, plans to modify it later within the same transaction, and requires absolute accuracy and data consistency up to the end of the transaction.

Extract from: https://msdn.microsoft.com/en-us/library/ms345124.aspx#sql2k5snapshotisol_topic05
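For completeness, here is a brief sketch of how a session can request the isolation levels listed above. Note that the SNAPSHOT level additionally requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database; Contoso_MSCRM is again an illustrative placeholder.

-- Per-session isolation level examples (run in the context of the database you are testing).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;      -- locking or versioned, depending on READ_COMMITTED_SNAPSHOT
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- SNAPSHOT must first be allowed at the database level.
ALTER DATABASE [Contoso_MSCRM] SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;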

In a nutshell, my advice would be to analyze your scenario and work with your DBA to plan it further. Also, please keep in mind that RCSI uses TempDB (for the version store), so enabling it could hinder the performance of your environment if your TempDB configuration is not optimal. For TempDB, making sure you have enough disk space and disk I/O, the correct number of TempDB data files, and that the TempDB files sit on proper physical storage (such as SSDs, separate from the CRM database files) will help reduce the risk associated with TempDB performance.
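Since the row-version store lives in TempDB, it is also worth keeping an eye on how much space it consumes once RCSI is enabled. A minimal sketch using the sys.dm_db_file_space_usage DMV:

-- Approximate version store size in TempDB (pages are 8 KB each).
USE tempdb;
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_reserved_mb
FROM sys.dm_db_file_space_usage;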

I hope I was able to explain this well.

Stay tuned and enjoy reading!!

For more understanding, you can also refer to:

Row Versioning-based Isolation Levels in the Database Engine: https://technet.microsoft.com/en-us/library/ms177404(v=sql.105).aspx

Understanding Row Versioning-Based Isolation Levels: https://technet.microsoft.com/en-us/library/ms189050(v=sql.105).aspx

Row Versioning-based Isolation Level Example: https://technet.microsoft.com/en-us/library/ms191190(v=sql.105).aspx

Enabling Row Versioning-Based Isolation Levels: https://technet.microsoft.com/en-us/library/ms175095(v=sql.105).aspx