In my previous post, I looked at how a typical OLTP workload may be affected by various controller cache configurations. The conclusion was that allocating too much cache (say, all 512MB) to reads hurt OLTP performance, primarily because the writes from the OLTP workload were starved of cache. Now, let's take a look at how the controller cache configurations may impact the performance of read queries whose execution plans are dominated by index seeks (both clustered and nonclustered). The TPC-C read-only transactions happen to be queries dominated by index seeks.
As you'll see below, the index-seek dominated reads are not sensitive to controller cache configurations, at least for the test configurations described below.
The test configuration was exactly the same as what was described in the previous post except that the transaction mix was changed to the following to include only the two read-only transactions (i.e. Order Status and Stock Level).
New Order: 0%
Order Status: 50%
Stock Level: 50%
To measure the transaction throughput, I had to use a different metric because the New Order transaction was no longer executed in the tests. Instead, the transaction throughput was measured as the number of Stock Level transactions executed per second.
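As a sketch, the throughput metric amounts to dividing the count of completed Stock Level transactions by the length of the steady-state measurement window. The function and sample numbers below are illustrative, not taken from the actual test harness:

```python
def stock_level_throughput(completed_count: int, interval_seconds: float) -> float:
    """Stock Level transactions per second over a steady-state interval."""
    if interval_seconds <= 0:
        raise ValueError("measurement interval must be positive")
    return completed_count / interval_seconds

# Hypothetical example: 27,000 Stock Level transactions over a 300-second window
print(stock_level_throughput(27_000, 300))  # 90.0 transactions/sec
```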
Again, the three test scenarios differed only in how the controller cache was configured:
0 MB was allocated to reads and 512MB was allocated to writes
128MB was allocated to reads and 384MB was allocated to writes
512MB was allocated to reads and 0 MB was allocated to writes
The test results are shown in the following chart:
The chart shows little difference in the transaction throughput among the three test scenarios. You could argue that the scenario where 128MB was allocated to reads performed the best, but the throughput difference was not significant enough to justify that conclusion, although the pattern was consistent across the repeated test runs.
How was the I/O subsystem exercised by the read-only workloads? Let me again focus on the tests with 100 users. From a disk I/O perspective, the read-only workloads during the steady state can be characterized as follows:
On the F drive with the data file, the key LogicalDisk perfmon counter values were as follows:
* Avg. Disk Bytes/Read = ~8366 bytes
* Avg. Disk sec/Read = 0.347
* Current Disk Queue Length = 122
* Disk Reads/sec = 446
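As a back-of-the-envelope sanity check on these counters (my own arithmetic, not part of the original measurements), Little's Law says the average number of outstanding I/Os should be roughly the read rate times the read latency. That lands in the same ballpark as the sampled queue length, which suggests the counters are internally consistent:

```python
# Little's Law: outstanding I/Os ~ arrival rate x service latency
disk_reads_per_sec = 446     # Disk Reads/sec
avg_sec_per_read = 0.347     # Avg. Disk sec/Read
expected_outstanding = disk_reads_per_sec * avg_sec_per_read
print(round(expected_outstanding))  # ~155, same order as the sampled queue length of 122
```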
The key Buffer Manager counter values were as follows:
* Page life expectancy = ~60 seconds
* Page reads/sec = 478
* Readahead pages/sec = 236
* Target pages = 193176
* Total pages = 193176
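For context (again my own arithmetic, not from the original post), SQL Server buffer pool pages are 8KB each, so the Total pages counter translates to a buffer pool of roughly 1.5GB, and Target pages equal to Total pages indicates the pool had reached its target size:

```python
# SQL Server buffer pool pages are 8KB each
total_pages = 193_176            # the Total pages counter value above
pool_mb = total_pages * 8 / 1024
print(round(pool_mb))  # ~1509 MB, i.e. roughly a 1.5GB buffer pool
```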
The Wait Statistics were heavily concentrated on Page IO latch waits.