Linchi Shea

Checking out SQL Server via empirical data points

Performance Impact: Setting a Database to Read Only

It is widely quoted that if a SQL Server database is set to Read Only (using ALTER DATABASE ... SET READ_ONLY), your queries may run faster because no locking occurs in a read-only database. This all makes sense conceptually. But I have not seen any empirical data to get a feel for the extent of the impact. And it's always good to have some solid numbers to show either (1) yes, it does make a difference, or (2) no, I can't see any difference.

Now, it may be difficult to see the impact of setting a database to read only if you are looking at a single query. With many queries executing concurrently, I hope to see something conclusive, less in terms of individual query performance and more in terms of transaction throughput.

The test setup I used is as follows:

  • The SQL Server 2005 instance (build 9.00.3042) was allocated 12GB of physical memory to its buffer pool.
  • The test database was about 9GB in size.
  • The test database was populated with the generated TPC-C data (scaled for 100 warehouses).
  • Two read-only stored procedures corresponding to the two read-only TPC-C transactions (Order Status and Stock Level) were run against the database. The calls were evenly distributed between these two procedures.
  • A series of tests with different numbers of concurrent users was run. The simulated user counts were 5, 10, 20, 50, 100, 200, 400, and 600.

Note that the stored procedure calls were randomly distributed among the database pages, and after an initial ramp up, all the pages were cached in memory. This eliminated disk I/O as a performance factor.
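
As a side note, one rough way to confirm that the relevant pages are all cached is to count the test database's pages in the buffer pool. A minimal sketch (the database name TPCC100 is just a placeholder, not the actual name of the test database):

-- Approximate size (in MB) of the test database's pages currently in the buffer pool;
-- after the ramp up, this should level off near the ~9GB database size.
SELECT COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID('TPCC100');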

The same tests were repeated for the following two scenarios:

Scenario 1: Database set to READ_WRITE
Scenario 2: Database set to READ_ONLY
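
For reference, switching between the two scenarios amounts to toggling a single database option. A minimal sketch, assuming the test database is named TPCC100 (a placeholder name):

-- Scenario 1: normal read-write mode.
ALTER DATABASE TPCC100 SET READ_WRITE;

-- Scenario 2: read-only mode; the termination clause rolls back any open transactions
-- so the option change does not have to wait for them.
ALTER DATABASE TPCC100 SET READ_ONLY WITH ROLLBACK IMMEDIATE;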

Since we are interested only in the relative performance difference instead of absolute performance numbers, the other details of the test server (e.g. the server model) are less relevant, and I have left them out.

The following chart (attached as ReadOnly.gif) summarizes the test results:

These tests were repeated many times, and the result pattern was consistent in that at a sufficiently high level of load, the read-only database achieved about 8% higher transaction throughput than did the read-write database.

Three observations are worth noting. First, looking at the usual performance metrics such as perfmon counters, waits, and DMVs, I didn't find any direct way to unmistakably attribute the performance difference in the transaction throughput to the fact that no locking occurs in the read-only database.
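
If you want to repeat that kind of check yourself, one simple approach is to clear the cumulative wait statistics before each run and snapshot them afterwards. A sketch:

-- Before a test run: reset the cumulative wait statistics.
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
GO
-- After the run: list the top waits accumulated during the test.
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;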

The second observation is that although the difference in the transaction throughput is consistent and significant, it is less telling if we look at the transaction response time. For instance, at the 200-user load level, the average response time of the Stock Level transaction was 26 milliseconds with the read-write database and 24~25 milliseconds with the read-only database. I would not feel comfortable drawing any conclusion based on response time alone.

Finally, at the lower load levels (e.g. with 5~50 users), the performance difference between the read-only database and the read-write database was not significant. For instance, with 50 users, the transaction throughput was ~2921/sec with the read-write database, compared to ~3039/sec with the read-only database (a difference of roughly 4%).

Published Monday, October 01, 2007 2:50 PM by Linchi Shea

Attachment(s): ReadOnly.gif


Comments

 

Alexander Kuznetsov said:

Very interesting, Linchi! Did you benchmark with different isolation levels such as SERIALIZABLE vs. READ COMMITTED?

October 1, 2007 2:39 PM
 

Alexander Kuznetsov said:

One more question: did you notice any difference in CPU consumption?

October 1, 2007 2:45 PM
 

Linchi Shea said:

Alex;

No, I did not test different isolation levels. Note that the test queries were read only, so hopefully the isolation level shouldn't matter.

I didn't have any wait time between the calls. The calls were basically being fired at the server non-stop. So with more users than the number of cores (16 in this case), the CPUs were driven to 100% regardless of whether the database was set to read only or not. This was a bit extreme, but was done on purpose.

October 1, 2007 3:13 PM
 

Alexander Kuznetsov said:

Hi Linchi,

Even from a single connection I can observe how CPU depends on isolation level. Consider the following repro:

CREATE TABLE WideTable(i INT NOT NULL PRIMARY KEY, Filler CHAR(4090))
GO
DECLARE @i INT
SET @i = 0
WHILE @i < 10000 BEGIN
   INSERT INTO WideTable(i, Filler) VALUES(@i, 'zz')
   SET @i = @i + 1
END
GO

CREATE PROCEDURE SelectReadUncommitted10K
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
DECLARE @i INT, @c INT
SET @i = 0
WHILE @i < 10000 BEGIN
   SELECT @c = COUNT(*) FROM WideTable WHERE i BETWEEN 17 AND 53 AND Filler LIKE 'z%'
   SET @i = @i + 1
END
GO

CREATE PROCEDURE SelectReadCommitted10K
AS
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
DECLARE @i INT, @c INT
SET @i = 0
WHILE @i < 10000 BEGIN
   SELECT @c = COUNT(*) FROM WideTable WHERE i BETWEEN 17 AND 53 AND Filler LIKE 'z%'
   SET @i = @i + 1
END
GO

CREATE PROCEDURE SelectRepeatableRead10K
AS
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
DECLARE @i INT, @c INT
SET @i = 0
WHILE @i < 10000 BEGIN
   SELECT @c = COUNT(*) FROM WideTable WHERE i BETWEEN 17 AND 53 AND Filler LIKE 'z%'
   SET @i = @i + 1
END
GO

CREATE PROCEDURE SelectSerializable10K
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
DECLARE @i INT, @c INT
SET @i = 0
WHILE @i < 10000 BEGIN
   SELECT @c = COUNT(*) FROM WideTable WHERE i BETWEEN 17 AND 53 AND Filler LIKE 'z%'
   SET @i = @i + 1
END
GO

-- Turned Profiler on and ran several times to have the caches warm.
SelectReadUncommitted10K  -- CPU: 500
go
SelectReadCommitted10K    -- CPU: 719
go
SelectRepeatableRead10K   -- CPU: 797
go
SelectSerializable10K     -- CPU: 765
go

What do you think?

October 1, 2007 5:18 PM
 

Uri Dimant said:

Sasha

I used SET STATISTICS TIME ON to test your script, and it shows almost the same numbers for CPU (~540).

October 2, 2007 1:23 AM
 

Alexander Kuznetsov said:

Uri,

Because SET STATISTICS TIME ON generates a lot of text output, it ran much longer on my server. I had to add PRINT '##' so I could quickly search through the clutter. Still, I noticed some difference in CPU:

SET STATISTICS TIME ON
GO
SelectReadUncommitted10K
PRINT '##'
/*
SQL Server Execution Times:
   CPU time = 844 ms,  elapsed time = 1247 ms.
##
*/
go
SelectReadCommitted10K
PRINT '##'
/*
SQL Server Execution Times:
   CPU time = 1109 ms,  elapsed time = 1302 ms.
##
*/
go
SelectRepeatableRead10K
PRINT '##'
/*
SQL Server Execution Times:
   CPU time = 1266 ms,  elapsed time = 1347 ms.
##
*/
go
SelectSerializable10K
PRINT '##'
/*
SQL Server Execution Times:
   CPU time = 1219 ms,  elapsed time = 1317 ms.
##
*/
go

I think that Profiler is better for such benchmarking, because it is less intrusive.

October 2, 2007 8:37 AM
 

Uri Dimant said:

Yep, these numbers are always at the bottom of the output. Actually, I tried using Profiler and I do get some differences.

October 2, 2007 9:39 AM
 

Linchi Shea said:

Alex;

I was focusing on whether setting a database to Read Only would have any impact on the transaction throughput of a specific workload instead of the impact of different isolation levels. But the question you raised makes me wonder whether the impact of a Read-Only database is significantly influenced by the isolation level setting. For the tests that produced the data points in the chart, the isolation level was fixed to Read Committed. I'll check what, for instance, Repeatable Read may bring. Thanks!

October 2, 2007 12:41 PM
 

Shailesh Khanal said:

Linchi

I was doing a similar test using the Quest Benchmark Factory tool, but between the READ_COMMITTED_SNAPSHOT ON and OFF settings. My test shows that with READ_COMMITTED_SNAPSHOT ON, the transactions/sec flat-lines after 20 users, whereas with this setting OFF, the number increases linearly. I ramped up to 100 users.

The test runs a pure SELECT statement, so there is no row-version traversal involved. I am trying to find a reasonable explanation for this behavior but can't find anything.
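
For reference, the option being toggled between runs is set along these lines (the database name is just a placeholder; switching READ_COMMITTED_SNAPSHOT typically requires that no other connections are active in the database):

-- READ COMMITTED reads use row versions instead of shared locks.
ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT ON;

-- Back to the default locking behavior for READ COMMITTED.
ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT OFF;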

October 3, 2007 12:52 PM
 

Linchi Shea said:

Shailesh;

I'll see if I can reproduce the behavior you described. I don't have any immediate explanation for it, but will look for clues if I can reproduce it.

By any chance, did you check the query plan to see whether it's the same in both cases?

October 3, 2007 1:20 PM
 

Shailesh Khanal said:

Yes, it is. It's a simple SELECT run multiple times from multiple connections, called from a stored procedure with different parameters.

SELECT   Model, AVG(Price), MIN(Price), MAX(Price), COUNT(*)
FROM     SH_Product
WHERE    Project_Number = @Station
  AND    EmployeeID = 0
  AND    Type = @Match100
GROUP BY Model
ORDER BY Model

There is a unique non clustered index on Project_Number and EmployeeID.

Here is a script to create the table and indexes; I have a million rows in the table.

CREATE TABLE [dbo].[SH_Product](
    [Project_Number] [int] NOT NULL,
    [Model] [int] NOT NULL,
    [ProductID] [numeric](18, 0) NOT NULL,
    [Product_Name] [char](32) NOT NULL,
    [Price] [money] NOT NULL,
    [Class] [int] NOT NULL,
    [Evaluations] [char](20) NOT NULL,
    [Description] [char](50) NOT NULL,
    [EmployeeID] [int] NOT NULL,
    [Type] [int] NOT NULL,
    [VendorID] [int] NOT NULL
) ON [PRIMARY]
GO

CREATE UNIQUE NONCLUSTERED INDEX [SH_Product_first] ON [dbo].[SH_Product]
(
    [ProductID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

CREATE CLUSTERED INDEX [SH_Product_prime] ON [dbo].[SH_Product]
(
    [Project_Number] ASC,
    [EmployeeID] ASC,
    [Class] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

October 3, 2007 3:14 PM
 

Linchi Shea said:

In response to my previous blog post -- Performance Impact: Setting a Database to Read Only, Shailesh

October 5, 2007 12:05 AM
 


Iain said:

I wonder if any improvement would be achieved by setting the volume to readonly as well.

July 15, 2013 11:24 AM

