THE SQL Server Blog Spot on the Web


Linchi Shea

Checking out SQL Server via empirical data points

Performance Impact: Frequent Manual Checkpoints

Recently, I was asked why the following script took a very long time to insert 24GB of data into a single table in SQL Server 2005 (the database was in the simple recovery mode):

-- Name: Script 1

-- Reconstructed: the prose below refers to a CHECKPOINT statement in this
-- script and to 16 INSERTs per commit, so both are restored here.
DECLARE @i int
SET @i = 1
BEGIN TRAN
WHILE @i <= 3145728  -- (1024*1024*3)
BEGIN
    INSERT tbTest(i, filler)
    VALUES (@i, ' ')

    IF (@i % 16 = 0)
    BEGIN
        COMMIT TRAN
        CHECKPOINT
        BEGIN TRAN
    END

    SET @i = @i + 1
END
IF @@TRANCOUNT > 0 COMMIT TRAN

The table used in the script (referred to as Script 1 in the rest of this post) is defined as follows:

CREATE TABLE tbTest (
    i      int NOT NULL,
    filler char(8000) NOT NULL
)
My response was to comment out the CHECKPOINT statement in Script 1, i.e. to rely on automatic checkpoints instead. The result was a dramatic reduction in the script run time--from 211 minutes (3.5 hours) with CHECKPOINT to 14 minutes without CHECKPOINT--to insert the same 24GB of data (the recovery interval was the default value of 0). The following chart illustrates the impact:

Why such a huge difference in performance depending on whether we checkpoint after inserting every 16 pages? Among other things, one significant factor is that issuing checkpoints at a high frequency with a small number of dirty pages doesn't give SQL Server a sufficient opportunity to perform large disk writes. Disk writes in small I/O sizes cannot take full advantage of the throughput potential that the storage on which the data files reside may offer. For the details of how SQL Server flushes its data pages, I highly recommend reading Bob Dorr's "SQL Server 2000 I/O Basics".
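One rough way to see this effect for yourself (a sketch, not part of the original experiment) is to compare the average write size against the data files between the two runs using the sys.dm_io_virtual_file_stats DMV, which is available from SQL Server 2005 onward:

-- Name: Script 2 (illustrative sketch, not from the original post)
-- Average write size per file of the current database. With frequent
-- manual checkpoints on a small number of dirty pages, expect the
-- data-file average to stay near 8KB; with automatic checkpoints,
-- expect larger gathered writes.
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       num_of_writes,
       num_of_bytes_written,
       num_of_bytes_written / NULLIF(num_of_writes, 0) AS avg_write_bytes
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL)

Run it before and after each load (or snapshot and diff the counters), since the DMV reports cumulative numbers since the files were opened.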

So, be careful out there with frequent manual checkpoints.

Note that relying on automatic checkpoints is not the only way to speed up the script. In fact, it's not even the optimal way. I'll look at this issue in more detail in two follow-up posts here.
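For instance, here is a sketch of one such alternative (my own assumed variant, not necessarily what the follow-up posts cover; the batch size of 512 is arbitrary): drop the manual CHECKPOINT entirely and commit a larger batch of inserts per transaction, which also reduces the number of small log writes:

-- Name: sketch of an alternative (assumed variant, not from the original post)
-- No manual CHECKPOINT; commit every 512 inserts and let automatic
-- checkpoints flush dirty pages in larger, gathered writes.
DECLARE @i int
SET @i = 1
BEGIN TRAN
WHILE @i <= 3145728
BEGIN
    INSERT tbTest(i, filler) VALUES (@i, ' ')
    IF (@i % 512 = 0)
    BEGIN
        COMMIT TRAN
        BEGIN TRAN
    END
    SET @i = @i + 1
END
IF @@TRANCOUNT > 0 COMMIT TRAN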

Published Monday, August 13, 2007 1:21 AM by Linchi Shea

Attachment(s): checkpoints.gif



Tony Rogerson said:

Same is true for the transaction log - try wrapping the INSERTs in a transaction so that 8 of them, i.e. 8 x 8KB = 64KB of data (the maximum write size to the log), go in one commit, and you should see a big speed-up, especially if it's on a SAN.

Good post Linchi!


August 13, 2007 8:56 AM

Linchi Shea said:


In the script, the INSERTs are wrapped inside a transaction. There are 16 INSERTs per commit in this case.

Man, what can I do to sway your opinion on SANs? :-)

August 13, 2007 11:08 AM

Tony Rogerson said:

:) - Need to put my glasses on (rather, get my eyes tested) - sorry Linchi.

SANs :) - put me in front of one that can do more than 20 MB per second for data chunks of 16KB - 64KB with write-through cache (no outstanding writes), basically doing what SQL Server does.

August 14, 2007 3:45 AM

Linchi Shea said:


This 'no-outstanding-writes' requirement is unusual. I should blog about it some other day lest we get into a long drawn-out exchange of comments here.

August 14, 2007 11:30 AM

