THE SQL Server Blog Spot on the Web


Linchi Shea

Checking out SQL Server via empirical data points

Performance Impact: Frequent Manual Checkpoints and Their I/O Behavior

In my previous blog post on the performance impact of frequent manual checkpoints, I highlighted the performance peril of going overboard with manual checkpoints, and I suggested that a major contributing factor was the failure of frequent manual checkpoints to take advantage of the throughput potential of the underlying storage. But I didn't offer any proof for that statement.

In this post, I'll give you some empirical evidence in support of that statement.

Using Script 1 from the previous post, I tested two configurations that differed in how often transactions were committed and checkpoints were issued. The section of the script that controlled the frequency of committing transactions and issuing checkpoints is reproduced below for each of the two tested configurations:

Config 1: Committing a transaction and issuing a checkpoint once every 16 inserts

if @i % 16 = 0
begin
    commit tran
    checkpoint
end

Config 2: Committing a transaction once every 16 inserts and using automatic checkpoints

if @i % 16 = 0
    commit tran
--  checkpoint

Note that since a row in the test table is 8008 bytes in size, 16 inserts would add about 128KB of data to the table.
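The arithmetic behind that figure is easy to check. A quick sketch (the 8008-byte row size comes from the test table described in the previous post):

```python
ROW_SIZE_BYTES = 8008   # size of one row in the test table
ROWS_PER_BATCH = 16     # inserts per commit (and per manual checkpoint)

batch_bytes = ROW_SIZE_BYTES * ROWS_PER_BATCH
print(batch_bytes)          # 128128 bytes
print(batch_bytes / 1000)   # 128.128, i.e. ~128KB per batch
```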

For each configuration, the following four I/O metrics were read off the Windows performance monitor:

  • Data write throughput -- MB per second for the data file,
  • Log write throughput  -- MB per second for the log file,
  • Data write block size  -- KB per write for the data file, and
  • Log write block size    -- KB per write for the log file

The following four charts show the above I/O metrics for each of the test configurations:

Fig. 1 shows that the test script was able to insert significantly more data per second with automatic checkpoints than with a manual checkpoint once every 16 inserts: ~190MB/sec vs. ~5MB/sec. In fact, 190MB/sec was near the maximum throughput of the I/O path on which the data file was placed. Fig. 3 shows that automatic checkpoints issued much larger I/Os (~260KB/write in this case) than did frequent manual checkpoints (~24KB/write). What is not shown in the charts is that when configured to use automatic checkpoints, the test script issued very few checkpoints during the test run, whereas with manual checkpoints it issued a checkpoint roughly every one to two seconds. In short, automatic checkpoints were able to write to the data file much more efficiently than were frequent manual checkpoints.

It is rather interesting to note in Fig. 2 and Fig. 4 that checkpoint frequency had a huge impact on the performance of transaction logging, even though the same number of inserts, and thus the same amount of transaction data, was committed in both test configurations. With automatic checkpoints, SQL Server was able to perform larger log writes: 64KB/write as opposed to ~24KB/write in the case of frequent manual checkpoints. In addition, with automatic checkpoints, SQL Server was able to do more log writes per second: ~740 writes/sec vs. ~166 writes/sec.
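Those two log metrics combine into an effective log write throughput. A back-of-the-envelope check with the numbers above (a sketch; KB is taken as 1024 bytes):

```python
# Effective log throughput = write block size x writes per second
auto_kb_per_write, auto_writes_per_sec = 64, 740
manual_kb_per_write, manual_writes_per_sec = 24, 166

auto_mb_per_sec = auto_kb_per_write * auto_writes_per_sec / 1024
manual_mb_per_sec = manual_kb_per_write * manual_writes_per_sec / 1024

print(round(auto_mb_per_sec, 1))    # 46.2 MB/sec with automatic checkpoints
print(round(manual_mb_per_sec, 1))  # 3.9 MB/sec with frequent manual checkpoints
```

That is roughly a 12x difference in log throughput from checkpoint frequency alone.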

So in a way, committing a transaction and issuing a checkpoint once every 16 inserts was dealt a double whammy: significantly poorer write performance to the data file and, at the same time, significantly poorer write performance to the log file. No wonder the overall performance was so much worse than with automatic checkpoints.

Now, it should be reiterated that relying on automatic checkpoints is not the only approach to better performance, nor is it necessarily the best one. I'll discuss other solutions in a future post.


Published Friday, August 17, 2007 11:57 PM by Linchi Shea

Attachment(s): checkpoints2.gif




Brent Strange said:

Good post. I look forward to your post on other solutions. In the past I performance-tested an enterprise application that was bottlenecked by checkpoints. In the end it was what it was (we couldn't rewrite or tweak the code any more), so we focused on beefing up the disk I/O (a better SCSI backplane, faster disks). Doing so allowed our app to handle 180 TPS instead of the bottlenecked 120 TPS.

August 20, 2007 2:04 PM

Linchi Shea said:

In my two previous posts on the performance impact of frequent manual checkpoints and the I/O behavior

August 20, 2007 2:37 PM

said:

Nice findings and useful information, but I would like to know what hardware you used as the baseline for your testing. I believe this may not be an issue on higher-end hardware.


August 23, 2007 4:40 AM

Linchi Shea said:

The server was a DL585 with four single-core processors and 16GB of physical memory. But in this particular case the server specs are not all that relevant compared to the specs of the storage. Here are some stats on the performance capacity of the storage used in the tests (the test file was ~20GB):

  • 8K random reads: ~10,000 IOps
  • 8K random writes: ~5,000 IOps
  • 8K sequential reads: ~14,000 IOps
  • 8K sequential writes: ~7,000 IOps
  • 128K sequential reads: ~200MBps
  • 128K sequential writes: ~200MBps
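For readers who want to relate the IOps figures above to throughput, each one converts to MB/sec by multiplying by the block size. A quick sketch (8K taken as 8KB, i.e. 8/1024 MB):

```python
def iops_to_mbps(iops, block_kb):
    """Convert an IOps figure at a given block size to MB/sec."""
    return iops * block_kb / 1024

print(iops_to_mbps(10_000, 8))  # 78.125 -> 8K random reads are ~78 MB/sec
print(iops_to_mbps(7_000, 8))   # 54.6875 -> 8K sequential writes are ~55 MB/sec
```

This is also why the ~190MB/sec observed with automatic checkpoints makes sense: large (~260KB) writes put the workload near the storage's ~200MBps large-block sequential limit rather than its much lower small-block ceiling.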

Let me know if this gives you enough info.

August 23, 2007 11:31 AM

Linchi Shea said:

In my previous posts ( 1 , 2 , 3 ), I focused on the performance behavior of setting the checkpoints

August 23, 2007 3:47 PM
