Tibor Karaszi

Bulk logged recovery model and log backups

This post is inspired by a discussion about the bulk logged recovery model and log backups. Some basics:

In the bulk logged recovery model, some operations can be minimally logged (see the sketch after this list), like:

  • SELECT INTO
  • Index create, rebuild and drop
  • Bulk loading of data
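
For reference, you can check and set the recovery model with T-SQL. A quick sketch, using a hypothetical database name MyDb (sys.databases is available as of SQL Server 2005):

SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDb' --FULL, BULK_LOGGED or SIMPLE
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED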

The interesting thing is that you can do a log backup after such a minimally logged operation. This log backup will contain both the log records and the data and index pages affected by the minimally logged operation.
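
If you want to see whether a certain log backup contains such data pages, RESTORE HEADERONLY exposes a HasBulkLoggedData column. A sketch, with a hypothetical backup file name:

RESTORE HEADERONLY FROM DISK = 'C:\MyDb_log.trn'
--HasBulkLoggedData = 1 means the backup contains data/index pages from minimally logged operations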

You cannot do a point in time restore using STOPAT or similar for such a log backup (a log backup containing both log records and data pages), for obvious reasons. You can, however, do a point in time restore of subsequent log backups, assuming such a log backup doesn't contain data pages (no bulk logged operation was done during the time span that the log backup covers).
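
To illustrate, here's a sketch of such a restore sequence, with hypothetical backup file names and times:

RESTORE DATABASE MyDb FROM DISK = 'C:\MyDb_full.bak' WITH NORECOVERY
--This log backup covers a bulk logged operation; a STOPAT restore of it is refused:
RESTORE LOG MyDb FROM DISK = 'C:\MyDb_log1.trn' WITH NORECOVERY, STOPAT = '2007-08-19 10:00'
--You can restore it in full, though, and stop at a point in time within the next log backup:
RESTORE LOG MyDb FROM DISK = 'C:\MyDb_log1.trn' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'C:\MyDb_log2.trn' WITH RECOVERY, STOPAT = '2007-08-19 11:00'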

You cannot do a log backup while in bulk logged mode if a bulk logged operation has occurred and the data files are lost. This is also pretty obvious, since SQL Server cannot go and grab the data pages it needs - the data files are lost!

But what if we have several filegroups, we are in bulk logged recovery model and we performed a bulk logged operation, but the bulk logged operation doesn't affect the data file which is lost? Theoretically, SQL Server could allow such a log backup. So, let's give it a try:

In the script below, we have two filegroups: PRIMARY and SEC. We do a bulk logged operation which affects only PRIMARY, and then delete the file for the secondary filegroup. Then we try a log backup. This fails, so the answer is that all data files need to be accessible in order to perform a log backup (in bulk logged recovery model and after a minimally logged operation). The reason becomes more obvious if we first look at the message from a successful log backup:

Processed 72 pages for database 'BulkBackup', file 'BulkBackup' on file 1.
Processed 8 pages for database 'BulkBackup', file 'BulkBackup2' on file 1.
Processed 58 pages for database 'BulkBackup', file 'BulkBackup_log' on file 1.
BACKUP LOG successfully processed 138 pages in 0.098 seconds (11.530 MB/sec).

The file BulkBackup2 is a file which is unaffected by the bulk logged operation. But SQL Server still goes and grabs 8 pages from that file. The number 8 makes me suspect that this is the file header (the first 8 pages of each file), needed for some reason. (I get the same error with both NO_TRUNCATE and CONTINUE_AFTER_ERROR.)
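
The per-file page counts can also be found afterwards in the backup history tables in msdb. A sketch (the backed_up_page_count column is available as of SQL Server 2005):

SELECT bs.backup_finish_date, bf.logical_name, bf.backed_up_page_count
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupfile AS bf ON bf.backup_set_id = bs.backup_set_id
WHERE bs.database_name = 'BulkBackup' AND bs.type = 'L' --log backups only
ORDER BY bs.backup_finish_date DESC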

The findings are also implied by Books Online. Here's a quote:

"If the database is damaged, for example, if the database does not start, a tail-log backup succeeds only if the log files are undamaged, the database is in a state that supports tail-log backups, and the database does not contain any bulk-logged changes."

The above doesn't talk about individual files; it addresses the whole database.

Finally, here's the script. Read it so you understand it before you execute it. It will drop a database named BulkBackup, if one exists. You can change the recovery model to FULL if you want to verify that a log backup is indeed possible for a damaged database (in general terms...).

USE master
IF DB_ID('BulkBackup') IS NOT NULL DROP DATABASE BulkBackup
GO
CREATE DATABASE BulkBackup
ON
PRIMARY
(NAME = N'BulkBackup', FILENAME = N'C:\BulkBackup.mdf' , SIZE = 20MB , FILEGROWTH = 10MB),
FILEGROUP SEC
(NAME = N'BulkBackup2', FILENAME = N'C:\BulkBackup2.ndf' , SIZE = 20MB , FILEGROWTH = 10MB)
LOG ON
(NAME = N'BulkBackup_log', FILENAME = N'C:\BulkBackup_log.ldf' , SIZE = 20MB , FILEGROWTH = 10MB)
GO
ALTER DATABASE BulkBackup SET RECOVERY BULK_LOGGED
--AUTO_CLOSE makes SQL Server release the file handles when no one uses the database,
--so that we can delete the secondary data file later in the script
ALTER DATABASE BulkBackup SET AUTO_CLOSE ON

USE BulkBackup
CREATE TABLE t(c1 int identity, c2 char(5)) ON SEC --this table lives on the SEC filegroup
INSERT INTO t SELECT TOP 10000 'hello' FROM syscolumns a CROSS JOIN syscolumns b
CREATE TABLE tSec(c1 int identity, c2 char(5)) ON "PRIMARY" --note: despite the name, this table lives on PRIMARY
INSERT INTO tSec SELECT TOP 10000 'hello' FROM syscolumns a CROSS JOIN syscolumns b
--Full backup to establish the log backup chain ('nul' discards the backup data)
BACKUP DATABASE BulkBackup TO DISK = 'nul'

--Bulk logged operation (SELECT INTO creates tSec2 on the default filegroup, PRIMARY)
SELECT * INTO tSec2 FROM tSec

--Verify log backup possible
--after bulk logged operation
BACKUP LOG BulkBackup TO DISK = 'nul'

--Bulk logged operation again
SELECT * INTO tSec3 FROM tSec

--Crash the secondary database file
USE master
GO
--Give AUTO_CLOSE time to close the database and release the file handles
WAITFOR DELAY '00:00:05'
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
EXEC xp_cmdshell 'DEL C:\BulkBackup2.ndf'
EXEC sp_configure 'xp_cmdshell', 0
RECONFIGURE

--Can we now do a log backup?
BACKUP LOG BulkBackup TO DISK = 'nul' WITH NO_TRUNCATE
--Apparently not. The error message is:

--BACKUP LOG is not possible because bulk logged changes exist in
--the database and one or more filegroups are unavailable.

--Cleanup:
GO
IF DB_ID('BulkBackup') IS NOT NULL DROP DATABASE BulkBackup

 

Published Sunday, August 19, 2007 11:32 AM by TiborKaraszi

Comments

 

BlackHawk said:

Tibor,

Is there any reason to use anything besides Bulk Logged if performing Log Shipping?

I would assume transaction log backups were being performed at a reasonable rate.

July 10, 2009 3:29 PM
 

TiborKaraszi said:

<<Is there any reason to use anything besides Bulk Logged if performing Log Shipping? >>

I prefer to only use bulk_logged when necessary. If you have the db in bulk_logged all the time, then any old SELECT INTO, small index creation or data load can result in you losing the ability to:

1) restore the following log backup to a point in time.

2) produce the following log backup in case a data file is somehow corrupt.

I prefer to stay on top of the above points and only switch down to bulk_logged when I really want to.
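
For example, a sketch of the pattern I have in mind, with hypothetical names, bracketing the bulk operation:

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED
SELECT * INTO MyWorkTable FROM MyBigTable --the minimally logged operation
ALTER DATABASE MyDb SET RECOVERY FULL
--Log backup right away, to minimize the window where the above two points apply:
BACKUP LOG MyDb TO DISK = 'C:\MyDb_after_bulk.trn'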

July 11, 2009 4:49 AM
 

Blackhawk said:

Thank you.

Clearest answer I have had to this question so far.

The index creation component really brought the whole thing to light.

Regards;

BlackHawk

July 13, 2009 9:06 AM
