
Linchi Shea

Checking out SQL Server via empirical data points

Performance impact of controller cache: SQL Server OLTP workloads

Through the empirical data presented in my two previous posts on the performance impact of controller cache configurations on sequential disk I/O workloads and large SQL workloads, we observed that allocating the controller cache to writes resulted in better performance, at least with the described workloads. Let me now turn my attention to SQL Server OLTP workloads, and check how the controller cache configurations may impact the transaction throughput of an archetypal OLTP workload--TPC-C in its standard transaction mix.

 

As you'll see below, the same observation still holds.

 

The best way to understand TPC-C is to read its specification at http://www.tpc.org/. But briefly, it models the data processing of a business that sells items to customers in multiple districts. The customers place orders for items, and the business maintains the items in stock in multiple warehouses. The database keeps track of these business activities, so you'll find a customer table, an order table, an item table, a stock table, and so on. And as in any real business, there are certain constraints among these tables, so you can't just scale one table without also scaling some other tables. In fact, the number of rows in most tables--and thus the size of the database--is ultimately determined by the number of warehouses. So, if you want to test a larger database, you specify a larger number of warehouses.
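To make the scaling concrete, here is a minimal T-SQL sketch of the initial-population row counts, using the per-warehouse ratios as I read them from the TPC-C specification (verify against the spec at tpc.org before relying on them):

    -- Approximate initial-population cardinalities per the TPC-C spec
    DECLARE @warehouses int;
    SET @warehouses = 100;

    SELECT @warehouses              AS warehouse_rows,
           @warehouses * 10         AS district_rows,  -- 10 districts per warehouse
           @warehouses * 10 * 3000  AS customer_rows,  -- 3,000 customers per district
           @warehouses * 100000     AS stock_rows,     -- one stock row per item per warehouse
           100000                   AS item_rows;      -- the item table is fixed, independent of scale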

 

For my tests, I sized the database to 100 warehouses, which resulted in about 9GB of allocated storage space. And to ensure that disk I/O would be heavily exercised, I set the SQL Server buffer pool to 2GB.
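In case you want to reproduce the memory cap, the buffer pool can be limited with sp_configure; a minimal sketch:

    -- Cap SQL Server's buffer pool at ~2GB
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;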

 

The data file was placed on the F drive and the log file was placed on the G drive. F and G were two individual 72GB 10,000rpm SAS disk drives that shared the same HP Smart Array P400 controller with 512MB of cache.
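This layout corresponds to a database created along these lines. Note this is a sketch only; the database name, file names, and sizes below are illustrative, not the ones I used:

    -- Data on F, log on G; names and sizes are illustrative
    CREATE DATABASE tpcc
    ON PRIMARY ( NAME = tpcc_data, FILENAME = 'F:\tpcc_data.mdf', SIZE = 9GB )
    LOG ON     ( NAME = tpcc_log,  FILENAME = 'G:\tpcc_log.ldf',  SIZE = 2GB );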

 

The stored procedures executed against these tables model five typical business activities: (1) placing a new order, (2) making a payment, (3) checking the order status, (4) making a delivery (i.e. fulfilling an order), and (5) checking the stock level. Among these stored procedures, order status and stock level are read-only; the others are read/write. The transaction throughput metric is the number of new orders processed.

 

The standard transaction mix is as follows:

 

 

  • New Order:    45%
  • Payment:      43%
  • Order Status:  4%
  • Delivery:      4%
  • Stock Level:   4%

This simply means that, for instance, 45% of the time the client program submits the new order transaction (i.e. calls the new order stored procedure). Strictly speaking, what I ran can't be called TPC-C because not all the stringent TPC-C rules were followed. For instance, to simplify the tests, I did not leave any think time or wait time between consecutive database calls. But that works fine for this post, since I just wanted to apply an OLTP workload to the test system and I'm not here to publish any official TPC-C results.

     
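For illustration, implementing the mix in a driver amounts to a weighted random pick. Here is a minimal sketch in T-SQL; the stored procedure names are hypothetical placeholders (a real TPC-C driver would issue these calls from the client program):

    -- Pick one transaction per the standard mix; proc names are hypothetical
    DECLARE @r float;
    SET @r = RAND() * 100.0;

    IF @r < 45.0       EXEC usp_NewOrder     -- 45%
    ELSE IF @r < 88.0  EXEC usp_Payment      -- 43% (45 + 43 = 88)
    ELSE IF @r < 92.0  EXEC usp_OrderStatus  --  4%
    ELSE IF @r < 96.0  EXEC usp_Delivery     --  4%
    ELSE               EXEC usp_StockLevel;  --  4%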

The test results are shown in the following chart:

[Chart: sql_oltp.gif -- transaction throughput under the three controller cache configurations]

Note that what's important in this case is the relative throughput among the three test scenarios, which differed only in how the controller cache was configured. The absolute throughput numbers do not mean anything by themselves.

     

So, as suspected, giving all 512MB of the controller cache to reads led to a significantly lower throughput than the scenarios where a large portion of the cache (384MB or 512MB) was given to writes.

     

How was the I/O subsystem exercised by these workloads? Let me focus on the tests with 100 users. From a disk I/O perspective, the OLTP workload during the steady state can be characterized as follows (a T-SQL cross-check sketch follows the list):

• On the F drive with the data file, I recorded the following perfmon counters:

  * Avg. Disk Bytes/Read = ~8KB, and Avg. Disk Bytes/Write = ~8KB
  * Avg. Disk sec/Read = ~275ms, and Avg. Disk sec/Write = ~320ms

  So 8KB pages were being read from the data file into the buffer pool randomly, and the lazy writer was writing 8KB pages out to the data file randomly. These I/O requests saturated the F drive. Note that during the 3-minute test duration, one checkpoint was issued manually.

• On the G drive with the log file, I recorded the following perfmon counters:

  * Avg. Disk Bytes/Write = ~4KB
  * Avg. Disk sec/Write = ~28ms

  So SQL Server was writing to the log in ~4KB blocks, and the G drive was pushed hard.
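If you want to cross-check these perfmon averages from inside SQL Server, sys.dm_io_virtual_file_stats exposes cumulative per-file counts since instance startup; a minimal sketch:

    -- Average I/O size and stall per file, from cumulative file stats
    SELECT DB_NAME(vfs.database_id)                                AS database_name,
           mf.physical_name,
           vfs.num_of_bytes_read    / NULLIF(vfs.num_of_reads, 0)  AS avg_bytes_per_read,
           vfs.num_of_bytes_written / NULLIF(vfs.num_of_writes, 0) AS avg_bytes_per_write,
           vfs.io_stall_read_ms     / NULLIF(vfs.num_of_reads, 0)  AS avg_ms_per_read,
           vfs.io_stall_write_ms    / NULLIF(vfs.num_of_writes, 0) AS avg_ms_per_write
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;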

If we look at the wait stats obtained during the test run, not surprisingly the waits were heavily concentrated on page I/O latch waits.
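For instance, a quick snapshot of those waits (cumulative since instance startup) can be taken as follows:

    -- Page I/O latch waits accumulated since startup
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGEIOLATCH%'
    ORDER BY wait_time_ms DESC;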

Published Saturday, March 29, 2008 9:39 PM by Linchi Shea

Attachment(s): sql_oltp.gif


Comments

     

jchang said:

What were the data reads/sec and writes/sec, both away from checkpoints and during checkpoints?

I could only get SQL to drive 400 IOPS per 15K disk at high queue depth; from IOMeter, I could drive the 15K drive to 600 random IOPS at 5% space usage.

Was the G drive on the same controller as F?

I am thinking 70 tps is not much of a log load, but that the delays could be congestion on the controller if it is shared with data.

March 29, 2008 8:53 PM
     

Linchi Shea said:

Both drives shared the same controller, whose cache was configured for these tests. I'll answer your other questions later, as I need to check my notes and/or run some more tests just to see the reads/sec and writes/sec numbers again.

March 29, 2008 9:10 PM
     

Jason said:

Linchi,

Do you have any reason to suspect the numbers would be different with a higher read load, particularly if there is a memory bottleneck?

I ask because I have always left the HP default of 50/50 if there was memory pressure, although I never had any data to back this up. Did you happen to get page life expectancy and batch requests/sec?

JM

March 30, 2008 1:50 PM
     

Linchi Shea said:

The numbers would be different, but I don't think the overall trend would be different as long as the bottleneck doesn't shift. If the bottleneck shifts to memory because of changes in the workload characteristics, then all bets are off. In that case, changing the disk controller configuration wouldn't matter (unless it helps to shift the bottleneck).

I'll post what the typical perfmon counter values look like with 100 concurrent users.

March 30, 2008 8:34 PM
     

Linchi Shea said:

I re-ran the 100-user test, and jotted down the following perfmon counter values:

On the F drive (the data drive):

Disk Reads/sec = 327 and Disk Writes/sec = 140 (disk writes are mainly caused by lazy writes)

On the G drive (the log drive):

Disk Writes/sec = 125

Page Life Expectancy = 116 ~ 200

Batches/sec = 32 ~ 450

Now, during this test run, I did see Disk Reads/sec go over 400, but it was not sustained.

March 31, 2008 12:08 AM
     

Linchi Shea said:

Just to be clear: the perfmon counter values I posted above were not averages or even meant to be representative. The values went up and down, and I made no effort to capture the averages. I simply froze the perfmon screen after all 100 users had been running for a while, and read these values off the frozen perfmon screen.

March 31, 2008 12:12 AM
     

Linchi Shea said:

> I could only get SQL to drive 400 IOPS per 15K disk
> at high queue depth

Joe;

Here's the output from sqlio.exe running 8K random reads at a queue depth of 256 on the 72GB 10,000rpm SAS drive. It reached 491 IOPS.

E:\SQLIO>call sqlIO.exe -kR -s30 -o256 -frandom -b8 -BH -LS -Fparam_F.txt

sqlio v1.5.SG

using system counter for latency timings, -1894817296 counts per second

parameter file used: param_F.txt

file f:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)

2 threads reading for 30 secs from file f:\testfile.dat

using 8KB random IOs

enabling multiple I/Os per thread with 256 outstanding

buffering set to use hardware disk cache (but not file cache)

using specified size: 10000 MB for file: f:\testfile.dat

initialization done

CUMULATIVE DATA:

throughput metrics:

IOs/sec:   491.20

MBs/sec:     3.83

latency metrics:

Min_Latency(ms): 4

Avg_Latency(ms): 1021

Max_Latency(ms): 2750

histogram:

ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+

%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100
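(For context: 2 threads with 256 outstanding I/Os each means 512 requests in flight, and by Little's Law the expected latency at this throughput is about 512 / 491.2 ≈ 1.04 seconds, which matches the reported ~1021ms average latency.)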

March 31, 2008 1:06 AM
     

Linchi Shea said:

Joe;

What was the test file size you used in the tests where you could not reach 400 IOPS?

March 31, 2008 1:07 AM
     

Linchi Shea said:

In my previous post, I looked at how a typical OLTP workload may be affected by various controller cache ...

March 31, 2008 11:49 PM
