
Linchi Shea

Checking out SQL Server via empirical data points

T-SQL Exercise: Simplest data-loading script to produce worst query performance – May 25th Update

This is a quick update on the T-SQL exercise I posted a few days ago. The goal was to write a simple T-SQL script to generate and load 4,000,000 rows into a table so that the following query would produce the worst performance, i.e. take the longest time to finish:

 

DBCC DROPCLEANBUFFERS

go

SELECT COUNT(*) FROM dbo.test;
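
One detail worth pointing out about the test query: DBCC DROPCLEANBUFFERS only evicts clean pages from the buffer pool, so immediately after a large data load it helps to issue a CHECKPOINT first to flush the dirty pages. A minimal cold-cache sketch of the test query would therefore look like this:

CHECKPOINT   -- flush dirty pages first so DROPCLEANBUFFERS can evict everything
go
DBCC DROPCLEANBUFFERS
go
SELECT COUNT(*) FROM dbo.test;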

 

The original intent was to highlight some pitfalls in data loading that may lead to bad query performance. But then I thought, why take all the fun away with too many constraints? Why not just let it loose and see how bad it can get if one were to do it intentionally?

 

I must say that just looking at the scripts from Denis, Adam, and Tibor, I came to realize that things were not quite what I had thought they were.

 

For instance, I did not anticipate that the heap table in Tibor’s script could perform the table scan so terribly. I thought that SQL Server would apply an allocation-order scan and optimize on the physical order regardless of forwarding pointers. Adam’s script surprised me with a freshly created clustered index. I thought that even if you pushed the limit on the fillfactor when creating a clustered index, the index would defragment the data and enable SQL Server to scan the table efficiently, easily outweighing whatever performance loss you might incur from lowering the page density.
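
If you want to see for yourself why the scans behave so differently, one quick check (separate from the timing runs) is to look at the physical layout of the table after each load with sys.dm_db_index_physical_stats. A minimal sketch:

-- Fragmentation, page density, and (for a heap) forwarded records for dbo.test.
-- DETAILED mode reads every page, so it takes a while on 4,000,000 rows.
SELECT index_type_desc,
       alloc_unit_type_desc,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent,
       forwarded_record_count,
       page_count
  FROM sys.dm_db_index_physical_stats
       (DB_ID(), OBJECT_ID('dbo.test'), NULL, NULL, 'DETAILED');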

 

So predicting the query performance turned out not to be very precise at all; it’s better to actually test these scripts out. I ran Adam’s script, Tibor’s script, and my script in a database whose data file was placed on an internal RAID-1 set (i.e. two mirrored physical drives). The internal drive (which was also the system drive C:) was used to avoid the confounding factors introduced by SAN cache, storage virtualization, and so on. In addition, the test database was given 45GB for data and 20GB for log, significantly more than would be required during the tests.
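
The database was pre-sized so that file growth would never kick in during the loads. A minimal sketch of that kind of setup (the database name and file paths below are placeholders, not the actual ones used):

-- Pre-size the data and log files so autogrowth never interferes with the loads.
-- The database name and file paths are placeholders.
CREATE DATABASE testdb
ON PRIMARY
   (NAME = testdb_data, FILENAME = 'C:\SQLData\testdb_data.mdf', SIZE = 45GB)
LOG ON
   (NAME = testdb_log, FILENAME = 'C:\SQLData\testdb_log.ldf', SIZE = 20GB)
go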

 

All tests were done with SQL Server 2008 Enterprise x64 Edition (build 10.0.1600) on Windows Server 2003 Enterprise x64 Edition SP2, on an old HP ProLiant DL365 G1 with four cores and 4GB of physical memory.

 

Adam Machanic’s script

 

if exists (select * from sysobjects where name = 'test')
   drop table test
go
create table test (
   x int not null,
   y char(896) not null default (''),
   z char(120) not null default('')
)
go
-- Load 4,000,000 rows from a numbers list built by cross joining spt_values
insert test (x)
select r
  from
(
select row_number() over (order by (select 1)) r
  from master..spt_values a, master..spt_values b
) p
where r <= 4000000
go
-- Build the clustered index after the load; fillfactor 51 leaves each page roughly half full
create clustered index ix_x on test (x, y)
  with fillfactor=51
go

 

Tibor Karaszi’s script

 

if exists (select * from sysobjects where name = 'test')
   drop table test
go
CREATE TABLE test (
   x int NOT NULL
  ,x2 int NOT NULL
  ,y char(10) NOT NULL DEFAULT ('')
  ,z char(10) NOT NULL DEFAULT('')
)

DECLARE @rows int = 6666666, @toKeep int = 4000000, @diff int

INSERT test (x, x2)
SELECT TOP(@rows)
       ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS r
      ,ROW_NUMBER() OVER (ORDER BY (SELECT 1)) % 10 AS s
  FROM master..spt_values a CROSS JOIN master..spt_values b
go

-- Widen the fixed-length columns after the rows have been loaded
ALTER TABLE test ALTER COLUMN y char(892)
ALTER TABLE test ALTER COLUMN z char(100)
go

CHECKPOINT
go

-- Delete interleaved rows (x2 in 2, 4, 6, 8) to leave gaps throughout the table
DECLARE @rows int = 6666666, @toKeep int = 4000000, @diff int
DELETE TOP(@rows - @toKeep)
  FROM test WHERE x2 IN(2, 4, 6, 8)
go

 

Note that in my tests Tibor’s script generated 3,768,021 rows instead of 4,000,000, presumably because the cross join of master..spt_values on this build did not produce the full 6,666,666 rows, so the DELETE removed every qualifying row and still left fewer than 4,000,000 behind.

 

Linchi Shea’s script

 

if exists (select * from sysobjects where name = 'test')
   drop table test
go
create table test(
   c1 int,
   c2 int,
   c3 char(256) default ' ',
   c4 char(740) default ' ')
go
create clustered index cix_test on test(c2, c3)
go

set nocount on
go
declare @i int
set @i = 1
begin tran
while @i <= 4000000
begin
      -- Alternate between a low and a high c2 key value so consecutive inserts
      -- land far apart in the clustered index
      insert test(c1, c2)
      select @i,
             case when @i % 2 = 0 then @i else 4000000 - @i end
      -- Commit every 100,000 rows to keep each transaction manageable
      if @i % 100000 = 0
      begin
         commit tran
         begin tran
      end
      set @i = @i + 1
end
commit tran
go

 

To ensure that the database was in the same state each time the test table was created, the test table was dropped at the beginning of each script; at that point, the database was empty with no user data.

 

After the data was loaded by each script, the test query (including DBCC DROPCLEANBUFFERS) was run three times, and each time the query elapsed time was logged. The following table shows the recorded elapsed times.

 

Test run                        Elapsed time (seconds)
Adam’s script – test run 1      123
Adam’s script – test run 2      121
Adam’s script – test run 3      120
Tibor’s script – test run 1     205
Tibor’s script – test run 2     207
Tibor’s script – test run 3     177
Linchi’s script – test run 1    408
Linchi’s script – test run 2    354
Linchi’s script – test run 3    358
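
Capturing the elapsed time requires nothing fancy. A minimal sketch of one way to time a run (not necessarily the exact harness used here; the variable name is just for illustration):

DBCC DROPCLEANBUFFERS
go
DECLARE @start datetime = GETDATE();
SELECT COUNT(*) FROM dbo.test;
SELECT DATEDIFF(second, @start, GETDATE()) AS elapsed_seconds;
go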

 

Afterwards, for validation, the entire process above was repeated one more time. In other words, the test query was run six times in total for each data-loading script. The elapsed times were consistent with those recorded in the table above.

 

Now, in no way would I claim that my script has managed to produce the worst query performance. I’m pretty sure you can find a way to load the data that produces much worse query performance.

 

In addition, as Adam and Tibor observed, the results may depend on many factors, including the test system configuration. Although the results should be consistent on a given test system, it could be misleading to compare them across different test systems.

 

A case in point: if you disable read-ahead reads entirely, the query performance is much worse than any of the elapsed times posted above. So perhaps your data can be loaded in a way that prevents SQL Server from making effective use of read-ahead reads, and therefore produces worse query performance.
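
If you want to see how much of the table scan is served by read-ahead on your own system, SET STATISTICS IO reports the read-ahead reads next to the physical reads. A minimal sketch:

SET STATISTICS IO ON
go
DBCC DROPCLEANBUFFERS
go
SELECT COUNT(*) FROM dbo.test;
-- The Messages output includes a line similar to:
--   Table 'test'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...
go
SET STATISTICS IO OFF
go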

 

I’m looking forward to seeing more scripts with worse query performance than the three scripts referenced in this post.

Published Tuesday, May 26, 2009 9:52 AM by Linchi Shea


Comments

 

TiborKaraszi said:

I was playing with an idea to somehow make the import a bulk loading operation, and toying with the batch size. I have this vague memory that under certain circumstances you end up with one extent allocated per batch. That, perhaps combined with loading some of the data in a more normal way, could result in lousy locality (can't find the English word here - "closeness of data") while following either the IAM and forwarded records for a heap or the linked list for an index. But I realized I also have work to do... ;-)

May 26, 2009 10:15 AM
 

Brian Tkatch said:

This post is very interesting.

May 27, 2009 11:21 AM
 

Linchi Shea said:

This is another follow-up on the T-SQL exercise. So the test query below is rather simple: DBCC DROPCLEANBUFFERS

May 29, 2009 12:24 PM
 

Linchi Shea said:

All the previously posted results (May 25th and May 29th) were obtained with query parallelism disabled

June 7, 2009 9:32 PM
 

