In a comment on my previous blog post--Performance Impact: Setting a Database to Read Only--Shailesh Khanal mentioned that he observed significant performance degradation when changing READ_COMMITTED_SNAPSHOT from OFF to ON for a read-only workload. This is counterintuitive: if only SELECT statements are running, nothing is placed in the version store, and the overhead of turning READ_COMMITTED_SNAPSHOT ON without maintaining any row versions shouldn't be that high.
So I decided to conduct some tests myself with a different read-only workload on SQL Server 2005 SP2 (9.00.3042) Enterprise Edition. The read-only workload was the same as that described in the previous blog post--Performance Impact: Setting a Database to Read Only.
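For reference, the option in question is set per database. Here is a minimal sketch of toggling it and of confirming that a pure SELECT workload leaves the version store empty (the database name `TestDB` is a placeholder for your test database):

```sql
-- Turn read committed snapshot on for the database. The ALTER requires
-- exclusive access; WITH ROLLBACK IMMEDIATE kicks off other connections.
ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Under a read-only workload, the version store should remain empty:
SELECT COUNT(*) AS version_store_rows
FROM sys.dm_tran_version_store;

-- To revert for the OFF test runs:
ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT OFF
    WITH ROLLBACK IMMEDIATE;
```

Checking `sys.dm_tran_version_store` between runs is a quick sanity check that the workload really generated no row versions.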
My results were mixed. On one hand, I confirmed Shailesh Khanal's observation that turning READ_COMMITTED_SNAPSHOT ON caused significant performance degradation. It's interesting to note that the degradation was observed only on 64-bit 16-way machines. On the other hand, I didn't always see a heavy performance penalty from turning READ_COMMITTED_SNAPSHOT ON. On a 32-bit 4-way machine, I saw no degradation at all. The following two charts summarize my results:
In Fig 1, the performance penalty of READ_COMMITTED_SNAPSHOT for a read-only workload was astounding: just by turning READ_COMMITTED_SNAPSHOT OFF, transaction throughput gained a whopping ~60%. This was observed in repeated test runs, and on two different platforms. The common characteristics of these two platforms were: (1) the SQL Server 2005 instances were both x64 running on Windows 2003 x64, and (2) the servers both had 16 cores.
That's bad news. But there was good news. On a different platform, repeating the same test led to very different results as summarized in Fig 2 above. If there is any difference between the two lines in Fig 2, turning READ_COMMITTED_SNAPSHOT ON appears to have produced slightly better throughput, though the difference should probably be considered insignificant and within the margin of error. This test platform was 32-bit SQL Server 2005 running on 32-bit Windows 2003, and the server had 4 cores.
Now, I don't know the test environment that Shailesh Khanal used to obtain his results. But from my own tests, I'd venture a conjecture that the performance difference could be attributed to the difference between 64-bit and 32-bit, rather than the hardware difference between 4 cores and 16 cores. And I'd further venture to suggest that the performance degradation from READ_COMMITTED_SNAPSHOT OFF to ON for a read-only workload on 64-bit was most likely due to a bug, because the degradation was simply too large to be by design.
So is there any practical significance to all this?
Clearly, you need to use READ_COMMITTED_SNAPSHOT with caution in performance tuning. Best practice recommends considering it when reader/writer contention is costing you throughput. For read-only workloads, there is no reader/writer contention for READ_COMMITTED_SNAPSHOT to help avoid, and no statement-level data consistency problem for it to prevent.
The other lesson is that you really can't assume anything, even if it intuitively makes sense. You should always confirm it through testing, if possible.
Finally, there could be a scenario where you have a read/write workload, and READ_COMMITTED_SNAPSHOT improves its performance by resolving your reader/writer contention problem, but at the same time degrades the performance of its read-only queries. Overall, you may or may not see a net gain. Again, you'll have to run tests to be sure one way or the other.