SQLblog.com - The SQL Server blog spot on the web

Linchi Shea

Checking out SQL Server via empirical data points

Why does my tempdb log keep growing?

In an earlier post, I showed that you can run out of log space, or force the log file to autogrow, in a user database even when you are not running any large transactions and the database is in the simple recovery model.

 

Whether or not you see that behavior depends on which gets there first. A checkpoint can be triggered when any of several thresholds is crossed--including the log becoming 70% full--and if the checkpoint truncates the log first, the log may never grow or fill up. However, if the small transactions are issued fast enough, they can fill up the log before it can be truncated. It's a kind of race condition.
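One way to watch this race while a test runs is to poll how full the log is. DBCC SQLPERF(LOGSPACE) reports the log size and the percentage of log space in use for every database. The polling loop below is my own addition, not part of the test scripts; run it in a separate connection:

```sql
-- Poll log fullness once a second in a separate session while the
-- update loop runs. DBCC SQLPERF(LOGSPACE) returns one row per
-- database with Log Size (MB) and Log Space Used (%).
while 1 = 1
begin
    dbcc sqlperf(logspace)
    waitfor delay '00:00:01'
end
```

Watching the Log Space Used (%) column for tempdb shows whether checkpoints are winning the race or the log is steadily climbing toward full.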

 

A question was then raised: is this also true of tempdb, given that tempdb handles transaction logging quite differently from a user database?

 

The answer is that you may still see the race condition with tempdb.

 

I ran the same scripts used in the earlier post. Initially, I could not fill up the tempdb log with small transactions; the log was always truncated before it became full. But by adjusting the parameters in the script, I was able to fill up a 1GB tempdb log with a series of individually committed 8000-byte updates. With the update size changed to 7000 bytes, all the transactions went through fine without blowing up the log.

 

The script to create the test table is as follows:

 

set nocount on
go
if object_id('tempdb..#test') is not null
   drop table #test
go
create table #test(i int, c char(8000))
go
declare @i int
set @i = 1
while @i < 40000
begin
      insert #test values(@i, 'abc')
      set @i = @i + 1
end

 

The following script is run to see whether the tempdb log fills up or is forced to autogrow. You may have to adjust the value of the parameter @change_size several times to find the threshold above which the script fills up the tempdb log and below which the script runs to completion without issue.

 

 

set nocount on
go

declare @change_size int
set @change_size = 7000  -- adjust this value
declare @i int

set @i = 1

while @i < 200
begin
     if @i % 2 = 0
       update #test set c = replicate('a', @change_size)
     else
       update #test set c = replicate('b', @change_size)
     select @i = @i + 1
end

 

Compared with a regular user database, you indeed have to try harder to fill up the transaction log of tempdb with small transactions.
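If the tempdb log has already filled up or grown, you can force truncation by hand: tempdb always uses the simple recovery model, so a checkpoint is what makes inactive log space reusable. A minimal sketch follows; the shrink step is optional, 'templog' is the default logical name of the tempdb log file, and the 100 MB target is just an illustrative number:

```sql
use tempdb
go
-- Force a checkpoint so inactive log records become reusable.
checkpoint
go
-- Optionally shrink the log file back down. 'templog' is the default
-- logical name of the tempdb log file; 100 is a target size in MB.
dbcc shrinkfile (templog, 100)
go
```

Note that a shrink only helps after the checkpoint has freed the space; if an open transaction is still pinning the log, neither statement will reclaim it.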

 

Regardless of whether it's a user database or tempdb, when the transaction log fills up or is forced to autogrow, large long-running transactions are not the only potential culprit to track down.

Published Thursday, May 21, 2009 1:39 AM by Linchi Shea


Comments

 

Richb said:

An aside to this, I find once tempdb (at least on 2000) grows quite large, the automated checkpointing only seems to kick in at 80% full or so... so that 45GB log that's grown overnight for some reason will end up using a lot of its space for no apparently good reason.

A regular checkpoint seems to fix it... or is there a better solution?

May 21, 2009 6:24 AM
