CHECKDB (Part 1): How does CHECKDB get a consistent view of the database?

As you can guess from the title, I'm planning a long series over the summer to go into the guts of how CHECKDB works (both the consistency checks part and the repair part). And as you can guess from 'CHECKDB', I'm already bored with putting DBCC in front of it all the time and changing the font to Courier New to make it stand out. I don't do that in real life so why on the blog?

I have no idea how long it'll be - easily more than 20, probably less than 50, but there's a ton of info in my head that's desperate to get out (ever seen the scene in the movie Scanners where that guy's head explodes? Well, it's not quite that bad, but remembering that scene was fun). As part of this I'll need to go into some of the low-level structural details of the database, which will hopefully be interesting too. (How do I know all this stuff? Read my bio.) If there's something I need to explain, post a comment with a question in it and I'll do a post to answer it. I may even do a post on how to use DBCC PAGE...

So here's the problem statement: CHECKDB needs a consistent view of the database.

Why? Well, usually it's running on a live database with all kinds of stuff going on. It needs to read and analyze the whole database, but it can't do it instantaneously (this isn't Star Trek), so it has to take steps to ensure that what it reads is transactionally consistent.

Here's an example. Consider a transaction to insert a record into a table that is a heap and has a non-clustered index, with a concurrent CHECKDB that doesn't enforce a consistent view (there's a T-SQL sketch of this setup after the list).

  • The table record is inserted first, and then the non-clustered index record is inserted (that's just the way the operations are split up in the database engine)
  • Because this hypothetical CHECKDB doesn't have a consistent view, it could see the record in the table but not the one in the index, conclude that the non-clustered index is out of sync with the table, and flag an 8951 error (missing index row).
  • How could this happen? It depends on the order in which the pages are read: the page on which the new non-clustered index record should go could be read before the page on which the new heap record should go. (I use record and row somewhat interchangeably to mean the physically-stored contents of a table or index row.) If the index page is read just before the records are inserted, and the table page is read just after, we see the inconsistent state.
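
To make that concrete, here's a minimal T-SQL sketch of the setup in the example. The table and index names are made up - any heap with a non-clustered index behaves the same way:

    -- Hypothetical names: a heap (no clustered index) plus one non-clustered index.
    CREATE TABLE dbo.t (c1 INT, c2 CHAR (10));
    CREATE NONCLUSTERED INDEX t_c1 ON dbo.t (c1);

    -- Internally this single statement is two operations: insert the heap
    -- record first, then insert the matching non-clustered index record.
    INSERT INTO dbo.t (c1, c2) VALUES (1, 'a');

A CHECKDB without a consistent view that reads the index page between those two internal operations, and the heap page after them, sees exactly the mismatch described above.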

The easy way to get the consistent state is through locking, which is what SQL Server 7.0 did. You can still do that in SQL Server 2000 and 2005 using the TABLOCK option. Another way to do it is to put the database into single-user or read-only mode.
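
Both of those options still work. A quick sketch, with 'MyDatabase' as a placeholder name:

    -- Old-style: take locks rather than use a consistent internal view.
    DBCC CHECKDB (N'MyDatabase') WITH TABLOCK;

    -- Or take the database essentially offline to other users first:
    ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB (N'MyDatabase');
    ALTER DATABASE MyDatabase SET MULTI_USER;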

However, excessive locking is a drag, and taking the database essentially offline tends to irritate users, so in SQL Server 2000 we came up with a neat way to get the consistent view and still be able to run CHECKDB online - log analysis. In a nutshell, after we've read through all of the database, we read the transaction log to make sure we didn't miss anything. Sounds simple, right? Dream on. Here's how it works (and there's a way to peek at the log yourself after the list):

  • The log is read from the LSN of the 'begin tran' log record of the oldest transaction that is active at the time the database scan started, to the LSN at the time the database scan stops.
  • Log records from transactions that commit during that time are used to generate REDO facts. (We have a chicken-and-egg situation here - it's difficult to explain how CHECKDB works without referencing some mechanisms that haven't been explained yet - I'll get to what facts are in part <single-digits-I-promise>.) The REDO facts either reinforce something we've already seen (in which case we ignore them) or provide information on something we haven't seen. For example:
    • a page allocation log record would produce a REDO fact of 'page X is allocated to IAM chain Y' (yes, I'm throwing around unexplained terms again - unavoidable I'm afraid and I'll explain them later - or read Kalen's books)
    • a row insertion record (such as from the index example above) would produce a REDO fact of 'a row with these index keys was inserted into page A of table B, index C at slot position S'
  • Log records from transactions that rollback or don't commit during that time are used to generate UNDO facts. The UNDO facts either cancel something that we've already seen (e.g. the first half of the index example above, if it didn't commit while CHECKDB was doing the database scan) or reference something we haven't seen (in which case we ignore them). For example:
    • a page allocation log record would produce an UNDO fact of 'page X was deallocated from IAM chain Y'
    • a row insert record would produce an UNDO fact of 'a row with these index keys was removed from page A of table B, index C at slot position S'
  • As you may have realized, what we're essentially doing is our own log recovery, inside CHECKDB, but without actually affecting the database.
  • This can get excruciatingly complicated (e.g. having to generate UNDO facts from the compensation log records that wrap sections of a cancelled index rebuild transaction...). I spent too many days in 2000 working out what was going on in the log and making tweaks to this code. However, it worked really well and we finally had online CHECKDB. The kudos for writing most of this stuff goes to Steve Lindell - while he was busy writing the online code I was up to my eyes writing DBCC INDEXDEFRAG (another post).
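
Incidentally, if you want to see the kind of log records this mechanism had to analyze, the fn_dblog function will dump the active portion of the current database's log. It's undocumented and unsupported, so treat it as a curiosity, not something to build on:

    -- Undocumented/unsupported: shows log records from the active log.
    -- The two NULLs mean no start LSN and no end LSN.
    SELECT [Current LSN], Operation, Context, [Transaction ID]
    FROM fn_dblog (NULL, NULL);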

Back in late 2000, it became apparent that with all the new features we were planning for 'Yukon' (we had no idea it would be called SQL Server 2005 back then), including some changes to the transaction log to allow for fast recovery and deferred transactions and stuff like versioning and online index build, the transaction log analysis was a non-starter. While it had given us the holy-grail of online consistency checks, with all the added complications of Yukon it would become impossible to maintain and get right.

But what to use instead? Who would come to my rescue? Turns out that database snapshots would be my savior. (That is their eventual name. I preferred their first name COW databases - Copy-On-Write databases - and my TechEd slide deck has a nice animated cow in homage). The in-depth details of database snapshots are beyond the scope of this post. To put it simply:

  • they use NTFS sparse-file technology
  • database recovery is run when the snapshot is created, but the recovered database is stored in the snapshot, not the source database
  • they only hold pages from the source database that have been changed since the database snapshot was created (either by the recovery process, or as part of normal operations on the source database) - there's a query after this list that shows how much space that actually takes
  • Books Online has a bunch more info about their use by DBCC - look in the 'DBCC Statements' section.
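
As promised above, you can watch that copy-on-write growth happen. This sketch assumes a snapshot called 'MyDatabase_Snap' (a placeholder name) and asks about file ID 1, the primary data file; on SQL Server 2005 the sys.dm_io_virtual_file_stats DMF reports the real sparse-file allocation rather than the nominal file size:

    -- size_on_disk_bytes is the actual on-disk (sparse) allocation
    -- of the snapshot's data file, which grows as pages are pushed in.
    SELECT size_on_disk_bytes
    FROM sys.dm_io_virtual_file_stats (DB_ID (N'MyDatabase_Snap'), 1);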

By moving to database snapshots we changed to using mainline server code to get our transactionally consistent view. This vastly reduced the complexity of the code and meant that someone else was responsible for finding and fixing its bugs during development :-)

So when CHECKDB starts, the first thing we do is work out whether we'd like to run online - if so, we create a hidden database snapshot of the source database (i.e. CHECKDB's target database). That could cause you a problem - depending on the transaction load concurrent with the CHECKDB, the database snapshot can grow in size. As we create a hidden one, you have no control over where we place the files - we just place them as alternate streams of the files comprising the source database. If you don't have room for this, just create your own database snapshot and check that instead - see the sketch below.
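
Creating and checking your own snapshot is straightforward. The names and file path here are placeholders - the logical NAME must match the source database's data file (check sys.database_files if you're not sure):

    -- Create a snapshot whose location you control...
    CREATE DATABASE MyDatabase_Snap
    ON (NAME = MyDatabase_Data, FILENAME = N'D:\Snapshots\MyDatabase_Snap.ss')
    AS SNAPSHOT OF MyDatabase;

    -- ...and check that; it reflects MyDatabase as of snapshot creation.
    DBCC CHECKDB (N'MyDatabase_Snap');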

Once the database snapshot is created, we're guaranteed a transactionally consistent view of the database and can merrily run our check algorithms against the database snapshot. Ah, you might say, but that means CHECKDB is checking the database as it was at some point in the past! Yes, I'd say, but that point is the start time of the CHECKDB, just as it was (essentially) with the log analysis mechanism in SQL Server 2000.

There are a few slight gotchas (all documented) with this approach:

  • sparse files are only available with NTFS so online checks can't be run on databases stored on FAT or FAT32 volumes
  • recovery cannot be run on TEMPDB, so online checks can't be run on TEMPDB (CHECKDB automatically switches to locking in that case)

And that's that. Now it's time for breakfast - no oatmeal hopefully...

Comments

  • Anonymous
    June 09, 2006
    Great start. I look forward to the entire series, but don't fall into blapathy ;-). It's easy to start something and then forget about it. I mean, do whatever you want, but we'll keep prodding you for more. No matter what you're talking about or learning, the biggest problem (for me) is that there is always a level deeper, and it's easy to get caught up going down the stack to the deepest levels and forget where you were in the original train of thought. That is fun though, because we get to know stuff like this. I like it. But I'm a reductionist by nature and will probably now need to go deeper into some of the items you discuss in the series, but that's alright - it's all part of learning. Once I have it all down to the quarks, anti-quarks, and leptons I'll be good.
