
SELECT Hints, Tips, Tricks FROM Hugo Kornelis WHERE RDBMS = 'SQL Server'

Curious cursor optimization options

The best way to optimize performance of a cursor is, of course, to rip it out and replace it with set-based logic. But there is still a small category of problems where a cursor will outperform a set-based solution. The introduction of ranking functions in SQL Server 2005 has taken a large chunk out of that category – but some remain. For those problems, it makes sense to investigate the performance effects of the various cursor options.

 

I am currently preparing a series of blog posts on a neat set-based solution I found for a problem that screams “cursor” from all corners. But in order to level the playing field, I figured that it would be only fair to optimize the hell out of the cursor-based solution before blasting it to pieces with my set-based version. So I suddenly found myself doing something I never expected to do: finding the set of cursor options that yields the best performance.

 

That task turned out to be rather time-consuming, as there are a lot of cursor options that can all be combined in a huge number of ways. And I had to test all those combinations in various scenarios, like reading data in a variety of orders, and updating data in two separate ways. I won’t bore you with all the numbers here; instead, I intend to point out some highlights, including some very curious finds. For your reference, I have included a spreadsheet with the results of all tests as an attachment to this post.

 

Disclaimer: All results presented here are only valid for my test cases (as presented below) on my test data (a copy of the SalesOrderDetail table in the AdventureWorks sample database), on my machine (a desktop with 2GB of memory, a dual-core processor, running SQL Server 2005 SP2), and with my workload (just myself, and only the test scripts were active). If your situation is different, for instance if the table will not fit in cache, if the database is heavily accessed by competing processes, or if virtually any other variable changes, you really ought to perform your own test if you want to squeeze everything out of your cursor. And also consider that many options are included to achieve other goals than performance, so you may not be able to use all options without breaking something.

 

Reading data

 

Many cursors are used to create reports. The data read is ordered in the order required for the report, and running totals and subtotals are kept and reset as required while reading rows. Those already on SQL Server 2005 can often leverage the new ranking functions to calculate the same running totals without the overhead of a cursor, but if you are still stuck on SQL Server 2000 or if you face a problem that the ranking functions can’t solve, you may find yourself preferring a cursor over the exponentially degrading performance of the correlated subquery that the set-based alternative requires.
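To make the set-based alternative concrete: the pre-2005 running total requires a correlated subquery along these lines. This is only a sketch against the test table used later in this post, and it is exactly the construct whose performance degrades so badly as row counts grow, since the subquery has to re-sum all preceding rows for every row:

SELECT   s.SalesOrderID, s.SalesOrderDetailID, s.LineTotal,
         (SELECT SUM(s2.LineTotal)
          FROM   Sales.SalesOrderDetail AS s2
          WHERE  s2.SalesOrderID < s.SalesOrderID
             OR (s2.SalesOrderID = s.SalesOrderID
                 AND s2.SalesOrderDetailID <= s.SalesOrderDetailID)
         ) AS RunningTotal
FROM     Sales.SalesOrderDetail AS s
ORDER BY s.SalesOrderID, s.SalesOrderDetailID;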

 

Since the order of these cursors is dictated by the report requirements rather than the table and index layout, I decided to test the three variations you might encounter – you may be so lucky that the order of the report matches the clustered index, or you might find that a nonclustered index matches the order you need, or you may be so unlucky that you need to order by a column that is not indexed.

 

I used the code below for my performance tests. You can run this code as is on the AdventureWorks sample database, or you can do as I did and copy the Sales.SalesOrderDetail table, with all indexes and all data, to your own testing database.

 

-- Keep track of execution time
DECLARE @start datetime;
SET @start = CURRENT_TIMESTAMP;

-- Declare and initialize variables for cursor loop
DECLARE @SalesOrderID int,
        @SalesOrderDetailID int,
        @OrderQty smallint,
        @ProductID int,
        @LineTotal numeric(38,6),
        @SubTotal numeric(38,6);
SET @SubTotal = 0;

-- Declare and init cursor
DECLARE SalesOrderDetailCursor
  CURSOR
    LOCAL           -- LOCAL or GLOBAL
    FORWARD_ONLY    -- FORWARD_ONLY or SCROLL
    STATIC          -- STATIC, KEYSET, DYNAMIC, or FAST_FORWARD
    READ_ONLY       -- READ_ONLY, SCROLL_LOCKS, or OPTIMISTIC
    TYPE_WARNING    -- Inform me of implicit conversions
FOR SELECT   SalesOrderID, SalesOrderDetailID,
             OrderQty, ProductID, LineTotal
    FROM     Sales.SalesOrderDetail
    ORDER BY SalesOrderID, SalesOrderDetailID; -- Match clustered index
--    ORDER BY ProductID;                      -- Match nonclustered index
--    ORDER BY LineTotal;                      -- Doesn’t match an index

OPEN SalesOrderDetailCursor;

-- Fetch first row to start loop
FETCH NEXT FROM SalesOrderDetailCursor
      INTO @SalesOrderID, @SalesOrderDetailID,
           @OrderQty, @ProductID, @LineTotal;

-- Process all rows
WHILE @@FETCH_STATUS = 0
BEGIN;
  -- Accumulate total
  SET @SubTotal = @SubTotal + @LineTotal;

  -- Fetch next row
  FETCH NEXT FROM SalesOrderDetailCursor
        INTO @SalesOrderID, @SalesOrderDetailID,
             @OrderQty, @ProductID, @LineTotal;
END;

-- Done processing; close and deallocate to free up resources
CLOSE SalesOrderDetailCursor;
DEALLOCATE SalesOrderDetailCursor;

-- Display result and duration
SELECT @SubTotal;
SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);
go

 

The first surprise came right away when I set my baseline by commenting out all options of the DECLARE CURSOR statement. The execution time when ordering by the clustered index was 6.9 seconds; when ordering by a nonclustered index it was 9 seconds – but when ordering by an unindexed column, the cursor with default options turned out to be faster, at only 6.4 seconds. I later found the reason for this to be that the first two defaulted to a relatively slow dynamic cursor, whereas the latter used the faster technique of a keyset cursor.

 

Choosing LOCAL or GLOBAL had no effect on cursor performance. This was as expected, since this option only controls the scope of the cursor, nothing else. For this reason, I excluded this option from testing the variants for updating with a cursor.
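The scope difference is easy to demonstrate. In this sketch (a hypothetical ScopeDemo cursor; note that the default scope depends on the database’s CURSOR_DEFAULT setting), a GLOBAL cursor survives the end of the batch, whereas a LOCAL cursor would have been deallocated automatically:

DECLARE ScopeDemo CURSOR GLOBAL
FOR SELECT SalesOrderID FROM Sales.SalesOrderDetail;
OPEN ScopeDemo;
go
-- New batch: this FETCH works because the cursor is GLOBAL;
-- with LOCAL, the cursor would no longer exist here.
FETCH NEXT FROM ScopeDemo;
CLOSE ScopeDemo;
DEALLOCATE ScopeDemo;
go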

 

I didn’t see any difference between the FORWARD_ONLY and SCROLL options either. This came as a surprise, since FORWARD_ONLY exposes only a subset of the functionality of the SCROLL version. I really expected SQL Server to be able to do some clever optimization if it knew that I’d never read in any other direction than from the first to the last row. I’m really wondering why the FORWARD_ONLY option is not deprecated, seeing that there is no advantage at all in specifying it – but maybe the development team in Redmond knows something I don’t?

 

The static, keyset, and dynamic cursors performed exactly as expected – in all cases, the static cursor was the fastest, the keyset came second, and the dynamic cursor finished last. No surprises here – until I started my tests with the cursor that orders by an unindexed column. In these tests, SQL Server informed me (due to the TYPE_WARNING option) that the created cursor was not of the requested type. It did not tell me what type it did create, nor why it disregarded the requested options. I failed to see anything in Books Online to explain this behavior, so I filed a bug for this. This did explain why the “hardest” sort option was the fastest when running with default options – since a dynamic cursor was not available, this one had to use a keyset cursor instead.
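If you want to see for yourself which type SQL Server actually created, the sp_describe_cursor system procedure reports it. The sketch below assumes a GLOBAL cursor named SalesOrderDetailCursor (for a LOCAL cursor such as the one in my test script, pass N'local' and run it in the same scope); it returns a one-row result set whose model column shows the actual type (as I read the documentation: 1 = static, 2 = keyset, 3 = dynamic, 4 = fast forward):

DECLARE @report CURSOR;
EXEC sp_describe_cursor
     @cursor_return   = @report OUTPUT,
     @cursor_source   = N'global',
     @cursor_identity = N'SalesOrderDetailCursor';
FETCH NEXT FROM @report;   -- inspect the model and concurrency columns
CLOSE @report;
DEALLOCATE @report;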

 

My biggest surprise came when I tested the FAST_FORWARD option. According to Books Online, this option “specifies a FORWARD_ONLY, READ_ONLY cursor with performance optimizations enabled”, so I expected performance to be at least on par with, and probably better than that of a STATIC FORWARD_ONLY READ_ONLY cursor – but instead, the FAST_FORWARD option turned out to be consistently slower, in some cases even by 15%!

 

The last set of options, the ones specifying the locking behavior, turned out to depend on the chosen cursor type. For a static cursor, the two available options made no difference. For other cursors, READ_ONLY was best – but SCROLL_LOCKS was second for keyset cursors and third for dynamic cursors, and OPTIMISTIC was second for dynamic and third for keyset. Go figure.

 

Based on all tests, it turns out that the best performance is achieved by specifying a STATIC cursor. I would add the LOCAL, FORWARD_ONLY, and READ_ONLY options for documentation purposes, but they make no performance difference. With these options, execution time went down from between 6.4 and 9 seconds (depending on the ORDER BY) to between 3.3 and 3.4 seconds. Of course, none of those come even close to the 0.2 seconds of the set-based equivalent for this test case:

 

-- Keep track of execution time
DECLARE @start datetime;
SET @start = CURRENT_TIMESTAMP;

-- Calculate and display result
SELECT SUM(LineTotal)
FROM   Sales.SalesOrderDetail;

-- Display duration
SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);
go

 

Modifying data

 

Another scenario in which cursors are used is when data has to be updated, and the calculation to determine the new data is thought to be too complicated for a set-based approach. In those cases, a cursor is used to process the rows one by one, calculate the new data, and update the data with the calculation results.

 

If you specify the FOR UPDATE clause in the cursor declaration, you can use the WHERE CURRENT OF clause of the UPDATE command to update the last row fetched. Of course, you can also omit the FOR UPDATE clause and use a regular UPDATE statement, using the primary key values of the row just read to find the row to update.

 

Since I expected a FOR UPDATE cursor to be optimized for updating the last row fetched, I first tested its performance, by using this code:

 

-- Enclose in transaction so we can roll back changes for the next test
BEGIN TRANSACTION;
go

-- Keep track of execution time
DECLARE @start datetime;
SET @start = CURRENT_TIMESTAMP;

-- Declare and initialize variables for cursor loop
DECLARE @SalesOrderID int,
        @SalesOrderDetailID int,
        @OrderQty smallint,
        @ProductID int,
        @LineTotal numeric(38,6);

-- Declare and init cursor
DECLARE SalesOrderDetailCursor
  CURSOR
    LOCAL           -- LOCAL or GLOBAL makes no difference for performance
    FORWARD_ONLY    -- FORWARD_ONLY or SCROLL
    KEYSET          -- KEYSET or DYNAMIC
                    --    (other options are incompatible with FOR UPDATE)
    SCROLL_LOCKS    -- SCROLL_LOCKS or OPTIMISTIC
                    --    (READ_ONLY is incompatible with FOR UPDATE)
    TYPE_WARNING    -- Inform me of implicit conversions
FOR SELECT   SalesOrderID, SalesOrderDetailID,
             OrderQty, ProductID, LineTotal
    FROM     Sales.SalesOrderDetail
    ORDER BY SalesOrderID, SalesOrderDetailID
FOR UPDATE          -- FOR UPDATE or FOR UPDATE OF OrderQty
    ;

OPEN SalesOrderDetailCursor;

-- Fetch first row to start loop
FETCH NEXT FROM SalesOrderDetailCursor
      INTO @SalesOrderID, @SalesOrderDetailID,
           @OrderQty, @ProductID, @LineTotal;

-- Process all rows
WHILE @@FETCH_STATUS = 0
BEGIN;
  -- Change OrderQty of current order
  UPDATE Sales.SalesOrderDetail
  SET    OrderQty = @OrderQty + 1
  WHERE  CURRENT OF SalesOrderDetailCursor;

  -- Fetch next row
  FETCH NEXT FROM SalesOrderDetailCursor
        INTO @SalesOrderID, @SalesOrderDetailID,
             @OrderQty, @ProductID, @LineTotal;
END;

-- Done processing; close and deallocate to free up resources
CLOSE SalesOrderDetailCursor;
DEALLOCATE SalesOrderDetailCursor;

-- Display duration
SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);
go

-- Rollback changes for the next test
ROLLBACK TRANSACTION;
go

 

Just as with the tests that only read the data, there was no difference between SCROLL and FORWARD_ONLY cursors. And just as with the tests that only read the data, KEYSET cursors were consistently faster than their DYNAMIC counterparts. However, in this case the SCROLL_LOCKS locking option turned out to be consistently faster than OPTIMISTIC, though I expect that this might change if only a fraction of the rows is updated.

 

From a performance point of view, there is absolutely no difference between a generic FOR UPDATE and a completely specified FOR UPDATE OF column, column, … For documentation purposes, I would prefer the latter.

 

And again, just as with the tests that only read the data, the default cursor options chosen when I did not specify any turned out to select the slowest of all available options. Ugh!

 

However, the real kicker came when I left out the FOR UPDATE clause of the DECLARE CURSOR statement and changed the UPDATE statement to use the primary key values instead of the WHERE CURRENT OF clause. One would expect that this clause would be fast – since it is written especially for, and can be used exclusively in, the processing of a FOR UPDATE cursor, every trick in the book can be used to optimize this. However, the reverse turned out to be true. Even the fastest of all WHERE CURRENT OF variations I tested was easily beaten by even the slowest of all WHERE PrimaryKey = @PrimaryKey variations. Here is the code I used, in case you want to test it yourself:

 

-- Enclose in transaction so we can roll back changes for the next test
BEGIN TRANSACTION;
go

-- Keep track of execution time
DECLARE @start datetime;
SET @start = CURRENT_TIMESTAMP;

-- Declare and initialize variables for cursor loop
DECLARE @SalesOrderID int,
        @SalesOrderDetailID int,
        @OrderQty smallint,
        @ProductID int,
        @LineTotal numeric(38,6);

-- Declare and init cursor
DECLARE SalesOrderDetailCursor
  CURSOR
    LOCAL           -- LOCAL or GLOBAL makes no difference for performance
    FORWARD_ONLY    -- FORWARD_ONLY or SCROLL
    STATIC          -- STATIC, KEYSET, DYNAMIC, or FAST_FORWARD
    READ_ONLY       -- READ_ONLY, SCROLL_LOCKS, or OPTIMISTIC
    TYPE_WARNING    -- Inform me of implicit conversions
FOR SELECT   SalesOrderID, SalesOrderDetailID,
             OrderQty, ProductID, LineTotal
    FROM     Sales.SalesOrderDetail
    ORDER BY SalesOrderID, SalesOrderDetailID;

OPEN SalesOrderDetailCursor;

-- Fetch first row to start loop
FETCH NEXT FROM SalesOrderDetailCursor
      INTO @SalesOrderID, @SalesOrderDetailID,
           @OrderQty, @ProductID, @LineTotal;

-- Process all rows
WHILE @@FETCH_STATUS = 0
BEGIN;
  -- Change OrderQty of current order
  UPDATE Sales.SalesOrderDetail
  SET    OrderQty = @OrderQty + 1
  WHERE  SalesOrderID = @SalesOrderID
  AND    SalesOrderDetailID = @SalesOrderDetailID;

  -- Fetch next row
  FETCH NEXT FROM SalesOrderDetailCursor
        INTO @SalesOrderID, @SalesOrderDetailID,
             @OrderQty, @ProductID, @LineTotal;
END;

-- Done processing; close and deallocate to free up resources
CLOSE SalesOrderDetailCursor;
DEALLOCATE SalesOrderDetailCursor;

-- Display duration
SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);
go

-- Rollback changes for the next test
ROLLBACK TRANSACTION;
go

 

So from using WHERE CURRENT OF and default options, at 16.6 seconds, I’ve gotten execution time down to 5.1 seconds by using the primary key for the update and specifying a STATIC cursor (including the LOCAL, FORWARD_ONLY, and READ_ONLY options for documentation). Looks good, as long as I close my eyes to the 0.4 second execution time of the set-based version:

 

-- Enclose in transaction so we can roll back changes for the next test
BEGIN TRANSACTION;
go

-- Keep track of execution time
DECLARE @start datetime;
SET @start = CURRENT_TIMESTAMP;

-- Change OrderQty of all orders
UPDATE Sales.SalesOrderDetail
SET    OrderQty = OrderQty + 1;

-- Display duration
SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);
go

-- Rollback changes for the next test
ROLLBACK TRANSACTION;
go

 

Conclusion

 

If you have to optimize a cursor for performance, keep the following considerations in mind:

 

  1. Always try to replace the cursor by a set-based equivalent first. If you fail to see how, do not hesitate to ask in one of the SQL Server newsgroups.
  2. If you are really stuck with a cursor, then do NOT rely on the default options. They will result in the slowest of all possible option combinations.
  3. If you think that the FAST_FORWARD option results in the fastest possible performance, think again. I have not found one single test case where it was faster than, or even as fast as, a STATIC cursor.
  4. Do NOT use the WHERE CURRENT OF syntax of the UPDATE command. Using a regular WHERE clause with the primary key values will speed up your performance by a factor of two to three.
  5. Do not rely blindly on my performance results. Remember, the one thing that is always true when working with SQL Server is: “it depends”.
Published Wednesday, November 21, 2007 2:15 AM by Hugo Kornelis

Attachment(s): Cursor comparison.xls


Comments

 

Adam Machanic said:

How about the fastest possible cursor: One built using SQLCLR..?  A loop over a SqlDataReader is much faster, in my tests, than any T-SQL cursor.

November 20, 2007 8:01 PM
 

Hugo Kornelis said:

Note to all - I just spotted an annoying typo. At one place, I had written FAST_FORWARD where I actually meant FORWARD_ONLY.

This is now fixed.

November 23, 2007 8:22 AM
 

JJ B said:

Very informative!  I hardly ever use cursors, but if I ever have to in the future, this article would prove very useful.  Thanks for doing the research.  (I did read and understand the part about my mileage varying...)

November 26, 2007 11:46 AM
 

 

 

kzimmerm said:

Thanks for the information.  I was able to come up with a somewhat faster sproc which utilizes a cursor.  I was able to squeeze out a better performing procedure with little effort.

Thanks again.

December 19, 2007 1:47 PM
 

Ben H said:

Excellent article, thank you

July 9, 2008 8:01 PM
 

Anil Desai said:

Hi, I want to use "CURRENT OF" in a cursor which only prints one column row by row.

Here I made one table named "Student" which contains the columns (studentId, studentName).

Then I created T-SQL code like....

---------------------------------------------------------------------
DECLARE mycursor CURSOR FOR
  SELECT studentId, studentName FROM Student
OPEN mycursor
FETCH mycursor
WHILE (@@FETCH_STATUS != -1)
BEGIN
  select studentName from Student
  where CURRENT OF mycursor
  FETCH mycursor
END
deallocate mycursor
------------------------------------------------------------------

Is it possible to use "CURRENT OF" like in the above code?

Thanks..

July 29, 2008 3:28 AM
 

Akki said:

Hi, thanks a lot for this info. This was really useful.

I did get to know where and how to improve performance when using a cursor.

Thanks a lot

Akki

December 14, 2009 5:44 AM
 

Akki said:

Hugo,

I have a question. Doesn't this performance mainly depend on the server space and other server-related things?

Akki

December 14, 2009 5:49 AM
 

Hugo Kornelis said:

Hi Akki,

Performance depends on many factors. Available memory, disk setup, number of processors, and various other server options are all very important.

However, many of the options described in this article change the way SQL Server accesses the data. And different ways produce different speeds.

Bottom line: If you add memory to your server and buy a better disk subsystem, all cursors will speed up, but those with the "better" options will still perform better than those with the "slower" options.

One caveat - after posting this article, I found that there are cases where the FAST_FORWARD option is faster than the STATIC option. In situations where all the data to be processed by the cursor fits into the cache, STATIC always wins. In cases where the amount of data is way too large to fit into cache, FAST_FORWARD has the edge.

Best, Hugo

December 14, 2009 6:13 AM
 

 

Anonymous said:

Thanks, your article helped me and many others no doubt.

July 26, 2013 2:34 PM
 

Bheemsen Singh said:

Very useful....!!

Thanks a lot for this info. This was really useful.

I use cursors, and this article is very useful.

May 14, 2014 2:47 AM


About Hugo Kornelis

Hugo is co-founder and R&D lead of perFact BV, a Dutch company that strives to improve analysis methods and to develop computer-aided tools that will generate completely functional applications from the analysis deliverable. The chosen platform for this development is SQL Server. In his spare time, Hugo likes to visit the SQL Server newsgroups, in order to share and enhance his knowledge of SQL Server.
