"Should I keep the database page checksum and the default trace enabled on SQL Server 2005?" That is a frequently asked best-practice question. Out of the box, both options are enabled. Although you can argue the pros and cons of keeping them on or turning them off, it is mostly an empirical question that should be answered with solid test data.
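For reference, here is a quick T-SQL sketch of how you can check or change the two settings. The database name MyDatabase is just a placeholder for your own database.

```sql
-- Default trace: a server-wide setting exposed through sp_configure.
-- run_value = 1 means the default trace is enabled (the out-of-box state).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled';

-- Page checksum: a per-database PAGE_VERIFY setting.
-- CHECKSUM is the SQL Server 2005 default; NONE turns page verification off.
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;
```

You can see the current page-verify option for each database in the `page_verify_option_desc` column of `sys.databases`.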
My instinct is to leave these two options enabled because of the potential benefits they provide. But when I was conducting some load tests recently, I noticed that at some load levels, SQLTrace_Buffer_Flush accounted for a significant percentage of the total resource waits. Since the SQL Server 2005 default trace was the only trace running, it had to be responsible for the observed waits. Naturally, that got me a bit worried! So I decided to find out its performance impact through systematic tests, and while I was at it, I threw the database page checksum into the mix to provide more data points.
So, I wanted to test the following four configuration settings with everything else being equal and with identical workloads:
- Neither the database page checksum nor the default trace was enabled,
- Only the database page checksum was enabled,
- Only the default trace was enabled, and
- Both the database page checksum and the default trace were enabled
For the test workloads, I used a TPC-C benchmark kit (no surprise if you read some of my earlier blog entries), but calibrated it to create the following two test scenarios:
- I/O bound workloads at various load levels where the load level was controlled by the number of simulated users,
- Balanced workloads for 100 simulated users. These workloads were balanced (or more balanced) in terms of their pressure on CPU, I/O, and memory. They were created by adding a separate 30-user CPU-bound workload to the 100-user I/O-bound workload; the CPU-bound workload ran read-only transactions against a subset of the database that was smaller than the amount of physical memory given to the SQL Server 2005 instance.
The second test scenario was included because I didn't expect the two options to have much performance impact with an I/O bound workload.
The chart on the left summarizes the results from the first test scenario:
As you can see, with the I/O-bound workloads, I didn't see any significant difference in the measured transaction throughput among the four configurations, and this was the case regardless of the load level.
Okay, how about a more balanced workload? The chart on the right shows the results of the second workload scenario where CPU, memory, and I/O were all stressed, though the load level was carefully controlled so that none was overly stressed (e.g. the four processors were kept at about 80% utilization).
Again, there was no significant performance difference in any of the four configurations.
This is very good news. Unless these results are misleading or you have a specific reason to do otherwise, I'd suggest you leave the two options turned on.
I should mention that, in addition to the tests behind the charts shown here, many more tests were performed, with similar results. I tried hard to find a workload that might be sensitive to these two configuration options, and I have so far failed. That is not to say that you won't find one.
So what's the story with the SQLTrace_Buffer_Flush waits I was worried about? It turns out that when SQL Server 2005 isn't stressed, there aren't many other resource waits, so the SQLTrace_Buffer_Flush waits stand out. It is reassuring to know that as the load level increases, the percentage of resource waits that SQLTrace_Buffer_Flush accounts for decreases.
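If you want to check this on your own system, a query along these lines against the wait-stats DMV shows each wait type's share of the total accumulated wait time. Note that the DMV reports the type in uppercase as SQLTRACE_BUFFER_FLUSH.

```sql
-- Each wait type's cumulative wait time since the last server restart
-- (or since the stats were cleared), and its share of all waits.
SELECT wait_type,
       wait_time_ms,
       100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total_waits
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

Because the counters are cumulative, a more careful approach is to snapshot the DMV before and after a test run and look at the deltas, so that idle-time waits don't dominate the percentages.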