
Peter Larsson

Thinking outside the box

Lesson learned by Trial and Error

Yesterday, I decided to install Microsoft SQL Server 2008 R2 (November CTP) on my local machine. I already had SQL Server Express 2008 installed so the first step was to uninstall Express, since there seemed to be no upgrade path.
Uninstalling went great.
I then installed 2008 R2 and rebooted my machine. I attached my sample databases and my test database. To my surprise, the Test database was now corrupted!

File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test.ldf"
may be incorrect. The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint
occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost
due to a hardware or environment failure.

I tried sp_attach_single_file_db, to no avail. I tried every method known to either Google or the MVPs I emailed for help. All of them said the only option was to restore the database. That was no big deal, because it is in fact just a test database.
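For readers who want to see what those attempts look like, here is a rough sketch. The file path is an assumption based on the default 2008 R2 data folder, not necessarily the exact one I used, and both statements fail when the database was not shut down cleanly, which is exactly the situation described by the error above.

-- Deprecated attach procedure for a single data file (path is assumed)
EXEC sp_attach_single_file_db
    @dbname   = N'Test',
    @physname = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Test.mdf'
GO

-- Its CREATE DATABASE replacement; it can rebuild a missing log, but only after a clean shutdown
CREATE DATABASE Test
ON ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Test.mdf' )
FOR ATTACH_REBUILD_LOG
GO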

Well, I had some time left so I decided to go Trial and Error!

Since I couldn't attach the database file at all, the first step had to be to create a new database named Test with the same size. I decided to create the log file at the same location as the original file. The data file I placed at the new location.

CREATE DATABASE Test
ON PRIMARY
( NAME = Test,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Test.mdf',
    SIZE = 1024,
    MAXSIZE = 2048,
    FILEGROWTH = 15% )
LOG ON
( NAME = Test_log,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_log.ldf',
    SIZE = 128MB,
    MAXSIZE = 2048MB,
    FILEGROWTH = 5MB ) ;
GO

I thought it didn't matter what size the log file was, so I just used a small number. However, I thought it was important to make the data file the same size as the corrupt one.
I then stopped the SQL Server service, replaced the empty data file with the corrupt data file, and started the SQL Server service again. The database Test was now visible in the list of databases, but of course it was not accessible. A number of different error messages were returned to me, depending on how I tried to access the database.
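A quick way to see what state the swapped-in database ended up in is to query sys.databases (a generic check, not the exact statements I ran):

SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = 'Test'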

Ad-hoc updates to system tables are not allowed in SQL Server 2005 and later (even when using sp_configure), so I issued

ALTER DATABASE Test SET EMERGENCY

This is the equivalent of the old "UPDATE sysdatabases SET status = -32768" trick from earlier versions.
I then decided to set the database to single-user mode with

ALTER DATABASE Test SET SINGLE_USER
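For completeness: if other connections happen to be holding the database, the SINGLE_USER switch can be combined with a termination option. This is just the common variant, not necessarily needed here:

ALTER DATABASE Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE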

Now the real work of checking the database could begin! Since it is a test database, I didn't care if there was data loss, so I went for

DBCC CHECKDB (Test, REPAIR_ALLOW_DATA_LOSS)

It took some two minutes to complete, and there were errors and warnings. I ran the CHECKDB command again, and this time there were only informational messages.
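To keep the output of such a verification pass readable, the informational messages can be suppressed. A minimal example of a second pass, without any repair option:

DBCC CHECKDB (Test) WITH NO_INFOMSGS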
After that, I set the database to multi-user mode and brought it online again with

ALTER DATABASE Test SET MULTI_USER
ALTER DATABASE Test SET ONLINE

And now I have access to my database again. It also seems there was no data loss at all.

Published Wednesday, November 18, 2009 11:44 AM by Peso


Comments

 

David said:

And you know there was no data loss how?

November 18, 2009 11:52 AM
 

Peso said:

Yes. The database has a few stored procedures and functions, and two tables with 2 million records each.

They both still have the same checksum value.
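Something along these lines, with a placeholder table name, is one way to compute such a per-table checksum:

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS TableChecksum
FROM dbo.SomeTable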

November 18, 2009 3:21 PM
 

TiborKaraszi said:

Hi Peso!

Just to make it perfectly clear for the benefit of other readers: SQL Server insisted on an ldf file for good reasons. The database wasn't cleanly shut down and recovery work was needed to make it consistent again. Basically, the work performed allowed you access to whatever inconsistent stuff is in there. In some cases that might be fine (exporting a couple of procedures and some data). But in a production situation, this can be really, really bad (potentially both logically and physically inconsistent data). I.e., a restore is in most cases preferred (unless we can visually validate the data in the database and export it to a healthy database).

November 21, 2009 1:58 PM
 

Peso said:

I agree with the MVPs I emailed with, and with Tibor. If this were a production database, I would only use this method to export some vital stuff. This was, however, a test database, and it was a learning experience.

The real question is why an uninstall and an install could crash the database...

November 21, 2009 2:43 PM
