Transaction Log Backup Size in Bulk Logged Recovery Model

Assume that I have a database in the Bulk Logged Recovery Model, and I perform a bulk operation that is minimally logged. If I now take a Log Backup, should my Log Backup file not be smaller than if I had performed the same operation in the Full Recovery Model?

I have seen many people confused by this question. When I say, “No, please do not expect that”, I am thrown back another question: what, then, is the purpose of having the Bulk Logged Recovery Model?

Let’s first have a look at an extract from the Books Online (https://msdn.microsoft.com/en-us/library/ms189275(v=SQL.90).aspx). Note in particular the last two sentences:

This recovery model bulk logs most bulk operations. It is intended solely as an adjunct to the full recovery model. For certain large-scale bulk operations such as bulk import or index creation, switching temporarily to the bulk-logged recovery model increases performance and reduces log space consumption. Log backups are still required. Like the full recovery model, the bulk-logged recovery model retains transaction log records until after they are backed up. The trade-offs are bigger log backups and increased work-loss exposure because the bulk-logged recovery model does not support point-in-time recovery.

Hence, in short, the sole purpose of the Bulk Logged Recovery Model is to reduce Log Space Consumption by minimally logging certain bulk activities, thereby improving the overall performance of the bulk operations. However, this recovery model does not result in log backups that are smaller than they would be in the Full Recovery Model.

Before going into further details, let’s do a small hands-on exercise. We will create two databases, set their Recovery Models, and create two tables in each database:

CREATE DATABASE FullRecoveryTest
GO
CREATE DATABASE BulkLoggedRecoveryTest
GO

ALTER DATABASE FullRecoveryTest SET RECOVERY FULL
GO
ALTER DATABASE BulkLoggedRecoveryTest SET RECOVERY BULK_LOGGED
GO

USE FullRecoveryTest
GO
CREATE TABLE Tab1 (Col1 INT NOT NULL, Col2 CHAR(5) NOT NULL, Col3 INT NOT NULL, Col4 CHAR(7) NOT NULL)
GO
CREATE TABLE Tab2 (Col1 INT NOT NULL, Col2 BIGINT NOT NULL, Col3 CHAR(200) NOT NULL, Col4 CHAR(1000) NOT NULL, Col5 CHAR(4000) NOT NULL)
GO

USE BulkLoggedRecoveryTest
GO
CREATE TABLE Tab1 (Col1 INT NOT NULL, Col2 CHAR(5) NOT NULL, Col3 INT NOT NULL, Col4 CHAR(7) NOT NULL)
GO
CREATE TABLE Tab2 (Col1 INT NOT NULL, Col2 BIGINT NOT NULL, Col3 CHAR(200) NOT NULL, Col4 CHAR(1000) NOT NULL, Col5 CHAR(4000) NOT NULL)
GO

Now, we will take a Full Backup of both databases, followed by a Log Backup of each. After the Log Backups, we will examine the Log File Size and the Log Space Used for both databases:

USE master
GO
BACKUP DATABASE FullRecoveryTest TO DISK = 'C:\FullRecoveryTest_Full.bak'
GO
BACKUP LOG FullRecoveryTest TO DISK = 'C:\FullRecoveryTest_Log_Before.trn'
GO
BACKUP DATABASE BulkLoggedRecoveryTest TO DISK = 'C:\BulkLoggedRecoveryTest_Full.bak'
GO
BACKUP LOG BulkLoggedRecoveryTest TO DISK = 'C:\BulkLoggedRecoveryTest_Log_Before.trn'
GO
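
The log space figures shown below can be read with DBCC SQLPERF, which returns one row per database. This is a sketch; the “Log Space Used (MB)” column in the tables below is derived from the two values it reports:

```sql
-- Report the log file size and the percentage of log space used for
-- every database on the instance. The derived column in the tables
-- below is: Log Size (MB) * Log Space Used (%) / 100.
DBCC SQLPERF(LOGSPACE);
GO
```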

Here is the result that we get:

Database Name            Log Size (MB)   Log Space Used (%)   Log Space Used (MB)
FullRecoveryTest         0.5546875       51.49648             0.2856445375
BulkLoggedRecoveryTest   0.5546875       51.84859             0.2875976476

Ok. Now, we will BCP rows into the tables in both databases. We will BCP in the same data set in both databases, so that after the BCP in, both databases contain identical data. I used the attached files and the following commands:

bcp FullRecoveryTest.dbo.Tab1 in D:\Tab1.txt -SSQLServerName -T -c -h "TABLOCK"
bcp FullRecoveryTest.dbo.Tab2 in D:\Tab2.txt -SSQLServerName -T -c -h "TABLOCK"
bcp BulkLoggedRecoveryTest.dbo.Tab1 in D:\Tab1.txt -SSQLServerName -T -c -h "TABLOCK"
bcp BulkLoggedRecoveryTest.dbo.Tab2 in D:\Tab2.txt -SSQLServerName -T -c -h "TABLOCK"

BCP In File For Table1 : BCPInFileForTab1
BCP In File For Table2 : BCPInFileForTab2
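
If bcp is not convenient, the same load can be sketched in T-SQL with BULK INSERT. The file paths below assume the same attached files; note that the TABLOCK hint is one of the prerequisites for the operation to qualify as minimally logged:

```sql
-- Minimally logged alternative to the bcp commands above.
-- TABLOCK is required for minimal logging; the target table must
-- also be a heap (or an empty B-tree) for the load to qualify.
USE BulkLoggedRecoveryTest
GO
BULK INSERT dbo.Tab1
FROM 'D:\Tab1.txt'
WITH (DATAFILETYPE = 'char', TABLOCK);
GO
BULK INSERT dbo.Tab2
FROM 'D:\Tab2.txt'
WITH (DATAFILETYPE = 'char', TABLOCK);
GO
```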

Now, before we take another set of Log Backups, let’s examine the Log Space Usage:

Database Name            Log Size (MB)   Log Space Used (%)   Log Space Used (MB)
FullRecoveryTest         2.054688        88.26046             1.8134770803648
BulkLoggedRecoveryTest   0.5546875       72.53521             0.40234374296875

We can clearly see that under the Bulk Logged Recovery Model, only 0.11474609536875 MB of Log Space was used for the entire operation, whereas under the Full Recovery Model, 1.5278325428648 MB of Log Space was used (more than 13 times the Log Space used in the Bulk Logged Recovery Model).

Does this mean that the size of the Log Backup of the BulkLoggedRecoveryTest database will be smaller than that of the FullRecoveryTest database? Let’s see:

USE master
GO
BACKUP LOG FullRecoveryTest TO DISK = 'C:\FullRecoveryTest_Log_After.trn'
GO
BACKUP LOG BulkLoggedRecoveryTest TO DISK = 'C:\BulkLoggedRecoveryTest_Log_After.trn'
GO

FullRecoveryTest_Log_After.trn : 1,667,584 bytes
BulkLoggedRecoveryTest_Log_After.trn : 1,856,000 bytes

Strange, isn’t it? The answer is again in the Books Online (https://msdn.microsoft.com/en-us/library/ms190692(v=SQL.90).aspx):

Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large. Additionally, backing up the log requires access to the data files that contain the bulk-logged transactions. If any affected database file is inaccessible, the transaction log cannot be backed up and all operations committed in that log are lost.

To track the data pages, a log backup operation relies on a bulk-changes bitmap page that contains a bit for every extent. For each extent updated by a bulk-logged operation since the last log backup, the bit is set to 1 in the bitmap. The data extents are copied into the log followed by the log data. The following illustration shows how a log backup is constructed.
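
The bulk-changes bitmap described above (also known as the BCM, or ML map) lives on page 7 of each data file. If you are curious, it can be inspected with DBCC PAGE; this is an undocumented command, so the sketch below is for exploration only, not for production use:

```sql
-- Inspect the bulk-changes (BCM/ML) bitmap of the primary data file.
-- Page 7 of each data file is the first ML map page; trace flag 3604
-- redirects DBCC PAGE output to the client session.
DBCC TRACEON(3604);
DBCC PAGE('BulkLoggedRecoveryTest', 1, 7, 3);
DBCC TRACEOFF(3604);
```

Extents flagged in this bitmap are exactly the ones that the next log backup will copy into the backup file alongside the log records.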

[Illustration: how a log backup is constructed under the bulk-logged recovery model]

The proof of the above theory is in the output for the BACKUP LOG commands. For the FullRecoveryTest database, the output of the BACKUP LOG command is:

Processed 191 pages for database 'FullRecoveryTest', file 'FullRecoveryTest_log' on file 1.
BACKUP LOG successfully processed 191 pages in 0.941 seconds (1.579 MB/sec).

However, for the BulkLoggedRecoveryTest database, the output of the command is:

Processed 208 pages for database 'BulkLoggedRecoveryTest', file 'BulkLoggedRecoveryTest' on file 1.
Processed 15 pages for database 'BulkLoggedRecoveryTest', file 'BulkLoggedRecoveryTest_log' on file 1.
BACKUP LOG successfully processed 223 pages in 0.830 seconds (2.095 MB/sec).

Notice that 208 pages from the data file of the BulkLoggedRecoveryTest database have been backed up as part of the Transaction Log Backup.

Lesson learnt: we cannot expect the Transaction Log Backup to be smaller just because the database is in the Bulk Logged Recovery Model. This recovery model is for better Transaction Log management, not for smaller Transaction Log backups. If you have Log Shipping configured and hope to benefit from the Bulk Logged Recovery Model while performing maintenance tasks, this is not good news for you.
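
To compare log backup sizes over time without checking the files on disk, the backup history recorded in msdb can be queried. This is a sketch against the documented backupset table:

```sql
-- Compare transaction log backup sizes for the two test databases
-- using the backup history that SQL Server records in msdb.
SELECT  database_name,
        backup_start_date,
        backup_size / 1048576.0 AS backup_size_mb
FROM    msdb.dbo.backupset
WHERE   type = 'L'   -- 'L' = transaction log backup
  AND   database_name IN ('FullRecoveryTest', 'BulkLoggedRecoveryTest')
ORDER BY backup_start_date;
```

Run after the experiment above, this should show the BulkLoggedRecoveryTest log backup as the larger of the two, matching the file sizes reported earlier.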

Disclaimer: All information provided here is my personal opinion and is neither verified nor approved by Microsoft before it is published. All information, and code samples, if any, is provided "AS IS" with no warranties and confers no rights.

Comments

  • Anonymous
    December 30, 2010
    Thanks Suhas writing this post, As always, explanation is so easily understandable....it's a my wish to see more such posts in 2010. Now, as for me...I plan to write a sequel on this post......to elaborate more on other common bulk ops like 'Alter Index/Re-Index'... "Happy New Year" VarunD

  • Anonymous
    October 03, 2011
    Hi Suhas, can you please be clear on the extents part  in the both the models. how bulk operations are minimally logged in bulk recovery model....

  • Anonymous
    February 16, 2013
    Thanks Suhas writing such a nice post

  • Anonymous
    February 20, 2016
    Wow, Beautiful post!!.. Really appreciate the clarity of the content.