Transaction Log size does not match the size of the data being loaded.
I was working with one of our ISV partners recently and they mentioned some very unusual SQL Server transaction log behavior. The transaction log size did not correspond to the actual size of the data being loaded into the database.
The ISV created an empty database, set up a backup device, set the database recovery model to FULL, and then started loading data (the setup is sketched below). They also intended to back up the transaction log occasionally to keep the log a manageable size. However, at the end of the load process, although the database contained 92 GB of data, the log was only 1.2 GB. With FULL recovery, one would have expected the transaction log to be approximately the same size as the data.
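In outline, the setup was along these lines; LoadTest and the device and file names are placeholders for illustration, not the partner's actual names:

CREATE DATABASE LoadTest;
EXEC sp_addumpdevice 'disk', 'LoadTestBackupDevice', N'D:\Backups\LoadTest.bak';
ALTER DATABASE LoadTest SET RECOVERY FULL;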
We double-checked the recovery model by issuing the following query (which confirmed the database was set to FULL):
select * from sys.databases
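The recovery model can also be read directly from the recovery_model_desc column; for example (LoadTest again being a placeholder name):

select name, recovery_model_desc from sys.databases where name = N'LoadTest';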
We even tried switching the database to the BULK_LOGGED recovery model and then back to FULL, and still got the same results!
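Those switches were along these lines (again with a placeholder database name):

ALTER DATABASE LoadTest SET RECOVERY BULK_LOGGED;
-- ... reload the data ...
ALTER DATABASE LoadTest SET RECOVERY FULL;

Can you guess why this was happening?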
If you guessed that the database never had an initial FULL backup, you would be correct.
The answer is in the SQL Server Books Online topic ‘Working with Transaction Log Backups’ (https://msdn.microsoft.com/en-us/library/ms190440.aspx):
‘Before you can create the first log backup, you must create a full backup, such as a database backup or the first in a set of file backups. Restoring a database by using only file backups can become complex. Therefore, we recommend that you start with a full database backup when you can. Thereafter, backing up the transaction log regularly is necessary. This not only minimizes work-loss exposure but also enables truncation of the transaction log. Typically, the transaction log is truncated after every conventional log backup.’
Because there was no initial FULL database backup, the database ‘effectively’ remained in SIMPLE recovery mode; SQL Server was truncating the transaction log automatically.
So, the moral of the story is: if you expect to be able to recover your database transactions, you must create a full database backup first. This is true for both the FULL and BULK_LOGGED recovery models. The backup can be created right after the database itself is created; in this particular case the backup would be ‘empty’ because the database contains no data yet. However, this first backup establishes the ‘marker’ that starts the log chain, so that transactions written to the transaction log are retained until they are backed up.
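A minimal sketch of that order of operations, with illustrative names and paths:

-- Take the initial full backup right after creating the database; this starts the log chain.
BACKUP DATABASE LoadTest TO DISK = N'D:\Backups\LoadTest_full.bak';
-- ... load the data, backing up the log periodically to keep it a manageable size ...
BACKUP LOG LoadTest TO DISK = N'D:\Backups\LoadTest_log.trn';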
Comments
Anonymous
November 07, 2010
It's not just the trn log size that should be the worry, but the ability to RESTORE! It would make sense that you have to start the backup chain with a FULL BKP, otherwise why bother doing a trn log backup? You'd have nothing to roll into!
Anonymous
November 07, 2010
Greetings, I am implementing a search application with .NET 2005 and SQL Server 2005. I would like to know how NULL values impact performance. In my table, there is a column called SearchKey varchar(100), default NULL. A non-clustered index is created on the SearchKey column. About 1/3 of the data in this column is NULL and the remaining 2/3 are numeric digits. My question is whether the NULL values make the non-clustered index useless. As a simplified version, my select statement is similar to the following:
select ..... from <mytable> where SearchKey like '1234%'
I know that I could change those NULL values in the column to an empty string or some other characters, but I would like to hear from you, experts, whether the NULL values really affect the non-clustered index. Thanks in advance for your time; I am looking forward to your opinion.
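For context, the scenario described in this question boils down to something like the following; the table, index, and column definitions here are hypothetical reconstructions, not the commenter's actual objects:

-- Hypothetical reconstruction of the described setup.
CREATE TABLE dbo.SearchItems (Id int IDENTITY(1,1) PRIMARY KEY, SearchKey varchar(100) NULL DEFAULT NULL);
CREATE NONCLUSTERED INDEX IX_SearchItems_SearchKey ON dbo.SearchItems (SearchKey);
-- Prefix LIKE search that can seek on the non-clustered index.
SELECT Id, SearchKey FROM dbo.SearchItems WHERE SearchKey LIKE '1234%';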