In my previous post on the performance impact of controller cache configurations, I presented some empirical results showing the performance impact of configuring the controller cache to various read/write settings on large sequential I/Os.
Why did I single out large sequential I/Os? That's because large sequential I/Os are heavily used by SQL Server in performing such critical operations as bulk loads, table scans, and checkpoints, to name a few. Since you are most likely a SQL Server professional, a natural question is: do the results in the previous post really translate into anything relevant to SQL Server performance? Conceptually, the answer is of course yes.
But nothing beats seeing it directly. So I ran the following four SQL Server tests:
Bulk load
A C# program was written to load data into the customer table using the System.Data.SqlClient.SqlBulkCopy class. The batch size for the bulk load was set to 50,000, and the data was generated in the program per the TPC-C specification for the customer table.
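The loader program itself isn't shown in the post. A minimal sketch of that approach might look like the following, where the connection string and the pre-built DataTable of generated customer rows are placeholders, not the author's actual code:

```csharp
using System.Data;
using System.Data.SqlClient;

class Loader
{
    // Bulk-load a pre-built DataTable into the customer table.
    // The connection string and the source DataTable are placeholders --
    // the original program is not shown in the post.
    static void Load(DataTable customerRows)
    {
        const string connStr = "Server=.;Database=tpcc;Integrated Security=true";
        using (var bulk = new SqlBulkCopy(connStr))
        {
            bulk.DestinationTableName = "customer";
            bulk.BatchSize = 50000;    // batch size used in the test
            bulk.BulkCopyTimeout = 0;  // no timeout for a large load
            bulk.WriteToServer(customerRows);
        }
    }
}
```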
Full table scan -- SELECT
After the clean buffer pages were dropped, the following query was run against the customer table:
SELECT avg(c_balance) FROM customer;
Note that there is no index on the column c_balance.
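The post doesn't show how the clean buffer pages were dropped. The standard way to force a cold buffer pool between runs, and presumably what was done here, is:

```sql
CHECKPOINT;             -- flush dirty pages first, so all pages are clean
DBCC DROPCLEANBUFFERS;  -- evict clean pages from the buffer pool,
                        -- forcing the next scan to read from disk
```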
Full table scan -- UPDATE
After the clean buffer pages were dropped, the following update was run against the customer table:
UPDATE customer
SET c_data = 'a' + substring(c_data, 2, len(c_data));
There is no index on the c_data column.
Checkpoint
The 'recovery interval' sp_configure option was set to 32767 for all the tests mentioned in this post, effectively disabling automatic checkpoints. A manual checkpoint was issued after the UPDATE statement described above, and this test measures the duration of that checkpoint.
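For reference, the server-side setup for this test looks like the following ('recovery interval' is an advanced option, so it must be exposed first):

```sql
-- Expose advanced options, then effectively disable automatic checkpoints.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 32767;  -- in minutes
RECONFIGURE;

-- ... run the UPDATE described above, then time the manual checkpoint:
CHECKPOINT;
```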
All the tests were performed on the customer table from the TPC-C benchmark, whose DDL is included in the appendix. The customer table was populated with 3,000,000 rows, which took up about 2GB of storage space.
The tests were repeated multiple times for each of the following three test scenarios:
Out of the 512MB total cache in the disk controller, 0MB was allocated for reads and 512MB for writes.
Out of the 512MB total cache in the disk controller, 128MB was allocated for reads and 384MB for writes.
Out of the 512MB total cache in the disk controller, 512MB was allocated for reads and 0MB for writes.
The test results are shown in the following four charts.
Figure 1 shows that the bulk load benefited dramatically from the controller write cache. When 512MB or 384MB was allocated to the write cache, the bulk load took about 53% of the time it took to load the same data when no write cache was allocated. From the perfmon counters, the bulk load was observed to issue ~64KB writes to both the data file and the log file of the test database.
Figure 2 shows that the full table scan query ran faster without any read cache in the controller. Per the perfmon counters, the table scan was doing >300KB reads.
Figure 3 shows that the full table scan update ran faster with write cache. Per the perfmon counters, the update was issuing ~64KB writes to the log file.
Figure 4 again shows that large sequential writes performed better with write cache; the checkpoint was observed to be issuing >128KB writes.
Overall, the results from these SQL Server operations are consistent with the results from the pure I/O tests described in the previous post.
Appendix: DDL for the test table customer
CREATE TABLE customer
CREATE UNIQUE CLUSTERED INDEX ci_customer
on customer(c_w_id, c_d_id, c_id);
CREATE UNIQUE NONCLUSTERED INDEX nc1_customer
on customer(c_w_id, c_d_id, c_last, c_first, c_id);