
Linchi Shea

Checking out SQL Server via empirical data points

Performance impact: synchronous audit can trash your database performance

The SQL Server Audit feature introduced in SQL Server 2008 is useful in many scenarios. Although it still has much room for improvement, it has made SQL Server auditing easier to manage, and in particular, it has a smaller performance impact than a similar audit setup using SQL Trace.

However, this does not mean that you should ignore performance considerations when configuring SQL Server Audit. In this post, I give you some empirical evidence to show that if you configure it incorrectly, you can still trash your database performance. In particular, I’ll focus on the impact of configuring the audit property called “queue delay”.

When you configure an audit object in SSMS Object Explorer, you can set this property in the textbox labeled "Queue delay (in milliseconds)". If you use the CREATE SERVER AUDIT statement, the corresponding option in the WITH clause is QUEUE_DELAY. By default, it is set to 1000ms. Generally speaking, the default setting is good enough and you should not change it; at least, I have not found a case where a different value offers a visible advantage.
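For reference, a minimal audit definition using the default queue delay might look like the following (the audit name and file path are illustrative, not from my test setup):

```sql
-- Create a server audit writing to a file target.
-- QUEUE_DELAY = 1000 (milliseconds) is the default,
-- and is generally the value you should keep.
CREATE SERVER AUDIT Audit_Select_Tracking
TO FILE ( FILEPATH = N'C:\SQLAudit\' )
WITH ( QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE );

-- An audit is created in the disabled state; turn it on explicitly.
ALTER SERVER AUDIT Audit_Select_Tracking WITH ( STATE = ON );
```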

But you can do serious damage if you set the queue delay property to 0, i.e. configure the audit to log data synchronously without any wait.

To demonstrate the damage, I ran a series of tests on a small server with two processors running SQL Server 2008 R2. The workload was OLTP, primarily on three user tables: customer, orders, and stock. I’ll skip the details about the workload as they do not change the general message of this post. If you are interested, I can give you all the details over email.

I tested three cases, all with 10 simulated users submitting transactions non-stop (i.e. without any wait between two consecutive transaction submissions for each user):

1. Without any audit setup;

2. With an audit tracking all the SELECT statements on the three tables, and the queue delay set to 1000ms (the default value);

3. With an audit tracking all the SELECT statements on the three tables, and the queue delay set to 0ms.
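A sketch of the audit setup for cases 2 and 3 is shown below; the audit name, database name, and schema are illustrative, and only the QUEUE_DELAY value differed between the two cases:

```sql
-- Server audit: QUEUE_DELAY = 1000 for case 2, 0 for case 3
CREATE SERVER AUDIT Audit_OLTP_Select
TO FILE ( FILEPATH = N'C:\SQLAudit\' )
WITH ( QUEUE_DELAY = 1000 );

USE testdb;
GO
-- Database audit specification tracking SELECTs on the three tables
CREATE DATABASE AUDIT SPECIFICATION AuditSpec_OLTP_Select
FOR SERVER AUDIT Audit_OLTP_Select
ADD ( SELECT ON OBJECT::dbo.customer BY public ),
ADD ( SELECT ON OBJECT::dbo.orders   BY public ),
ADD ( SELECT ON OBJECT::dbo.stock    BY public )
WITH ( STATE = ON );
GO
ALTER SERVER AUDIT Audit_OLTP_Select WITH ( STATE = ON );
```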

The following table summarizes the test results:

Test Case                                    Throughput (transactions/second)   Total CPU usage

1. No audit                                  ~3,500                             ~97%

2. Audit of SELECTs, queue delay = 1000ms    ~3,100                             —

3. Audit of SELECTs, queue delay = 0ms       ~160                               ~8%
The first thing to note is that when the audit was configured and enabled with no queue delay at all, the negative performance impact was dramatic: throughput dropped from ~3,500 transactions/second (with no audit) to a mere ~160 transactions/second. In fact, when I first ran the test with no audit, the throughput held steady at ~3,500 transactions/second; the moment I enabled the no-delay audit, I saw the throughput drop like a stone on the chart.

When there was no audit, the system was completely CPU bound, with total CPU usage pegged at ~97% and a sustained processor queue. When the no-queue-delay audit was enabled, total CPU usage dropped to ~8%, and the waits were observed to be primarily on the wait type XE_SERVICES_RWLOCK; the system spent most of its time waiting on extended event locks.
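If you want to check for this yourself, the accumulated XE-related waits can be read directly from the wait stats DMV while the audit is enabled:

```sql
-- Check accumulated waits for extended-event related wait types,
-- including XE_SERVICES_RWLOCK
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'XE%'
ORDER BY wait_time_ms DESC;
```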

By the way, when I checked the SQL Server performance counters exposed under SQLServer:Wait Statistics, all the waits were 0. In other words, as far as the perfmon counters were concerned, there was no wait at all. Apparently, the XE waits were not reported under SQLServer:Wait Statistics in SQL Server 2008 R2.

From the result table above, it's clear that when an audit with QUEUE_DELAY set to 1000ms was enabled, transaction throughput dropped from ~3,500 to ~3,100, about an 11% decrease. That can be considered significant. However, it's worth noting that I was testing a rather extreme scenario in which total CPU usage was already driven to almost 100% and there was a clear and sustained processor bottleneck.

In real-world scenarios, you probably should not turn on an audit to track busy activities when the system is already severely bottlenecked on the processors. The good news is that even when the system is under extreme CPU load, enabling an audit (with QUEUE_DELAY set to 1000ms) that does not actually capture much data appears to have very little adverse performance impact. For instance, I re-configured the database audit specification to track all the DELETE statements (instead of SELECT) on the same three tables. Since the test workload performed no DELETEs, the audit did not actually record any audit data. The throughput did seem to drop slightly to around 3,450 transactions/second when this audit was enabled, an insignificant drop within the margin of observation error.
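Re-targeting the specification from SELECT to DELETE can be done along these lines (again with the illustrative names from earlier; a database audit specification must be disabled before it can be altered):

```sql
-- Disable the specification before changing it
ALTER DATABASE AUDIT SPECIFICATION AuditSpec_OLTP_Select
WITH ( STATE = OFF );

-- Swap the audited action from SELECT to DELETE
ALTER DATABASE AUDIT SPECIFICATION AuditSpec_OLTP_Select
DROP ( SELECT ON OBJECT::dbo.customer BY public ),
DROP ( SELECT ON OBJECT::dbo.orders   BY public ),
DROP ( SELECT ON OBJECT::dbo.stock    BY public );

ALTER DATABASE AUDIT SPECIFICATION AuditSpec_OLTP_Select
ADD ( DELETE ON OBJECT::dbo.customer BY public ),
ADD ( DELETE ON OBJECT::dbo.orders   BY public ),
ADD ( DELETE ON OBJECT::dbo.stock    BY public );

-- Re-enable the specification
ALTER DATABASE AUDIT SPECIFICATION AuditSpec_OLTP_Select
WITH ( STATE = ON );
```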

I’ll follow up with the test results that show the performance impact when the system under test is not bottlenecked on the processors.

I hope the message is loud and clear: do not configure an audit with the queue-delay property set to 0.
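If you want to verify that none of your existing audits is configured this way, the setting is exposed in the sys.server_audits catalog view:

```sql
-- Find any audits configured for synchronous (queue_delay = 0) logging
SELECT name, queue_delay, is_state_enabled
FROM sys.server_audits
WHERE queue_delay = 0;
```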


Published Sunday, December 11, 2011 6:26 PM by Linchi Shea



Linchi Shea said:

In my previous post on this subject, I showed the test results from a two-processor machine. Naturally,

December 12, 2011 10:14 PM

Linchi Shea said:

This is the third installment of my posts on the performance impact of enabling SQL Server 2008 R2 Audit.

December 13, 2011 9:52 PM

mycall said:

One option I've used was to add UPDATE/INSERT/DELETE triggers to all tables, then throw the INSERT/DELETE data (as XML) from within the trigger to a Service Broker event store.  With a few threads assigned to it, the event store reader will write it to an audit table (on a different computer?).  Yeah, it can get behind, although multiple readers sharding the events could keep up.

December 17, 2011 12:25 PM
