THE SQL Server Blog Spot on the Web


Page Free Space


An Interesting MERGE Bug

Summary: Investigating an optimiser transformation that exposes a bug in SQL Server’s MERGE implementation.

I came across a Connect item today (by fellow SQL Server Central member ‘ALZDBA’) describing how using a combination of relatively new features can produce incorrect results or even an access violation inside SQL Server.

Reproducing the bug

We’ll look first at how MERGE can produce incorrect output.  We’ll need two tables: one that contains two rows of existing data, and a second that contains four rows of change information.  The overall process looks like this:



This is a classic MERGE (or “upsert”) requirement.  If the row already exists, the new value is added to it, otherwise a new row is inserted.  Here’s the code to generate the sample tables and data:



DECLARE @Target TABLE
    (row_id INTEGER PRIMARY KEY, value INTEGER NOT NULL);

DECLARE @Delta TABLE
    (row_id INTEGER PRIMARY KEY, delta INTEGER NOT NULL);

-- Existing records (1 & 3)
INSERT  @Target
        (row_id, value)
VALUES  (1, 1),
        (3, 3);

-- Change table:
-- Updates rows 1 & 3
-- Inserts rows 2 & 4
INSERT  @Delta
        (row_id, delta)
VALUES  (1, 10),
        (2, 20),
        (3, 30),
        (4, 40);

We can easily write a MERGE query to perform the actions needed.  Let’s also include an OUTPUT clause to show the effects on each row:

MERGE   @Target T
USING   @Delta D ON D.row_id = T.row_id
WHEN MATCHED THEN
    UPDATE SET T.value += D.delta
WHEN NOT MATCHED THEN
    INSERT (row_id, value)
    VALUES (D.row_id, D.delta)
OUTPUT  $action,
        D.row_id,
        DELETED.value AS old_value,
        INSERTED.value AS new_value;

If we examine the @Target table after executing this query, we get exactly the results shown in the diagram.  All good so far.  The problem comes when we look at the rows produced by the OUTPUT clause:


When the action is ‘INSERT’, the new_value column contains an incorrect value.  Instead of reporting the value actually inserted into the target table, it repeats the value associated with the last UPDATE operation.  If we change the data so that multiple INSERTs are performed after the UPDATE, all the INSERTs get the same new_value associated with the prior UPDATE.  The @Target table always contains the correct data – it’s just that the OUTPUT clause disagrees.
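Based on the behaviour just described, the OUTPUT rows for the sample data would look something like the following.  This is an illustration, not captured output – the comments show the values that should have been returned for the INSERT actions:

```sql
-- $action   row_id   old_value   new_value
-- UPDATE    1        1           11
-- INSERT    2        NULL        11    -- should be 20
-- UPDATE    3        3           33
-- INSERT    4        NULL        33    -- should be 40
```

Each INSERT repeats the new_value from the most recent UPDATE, even though the @Target table itself receives the correct values (20 and 40).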

The Query Plan

The original Connect item states that this issue only affects table variables, but there’s more to see here.  Let’s start by taking a look at the execution plan for the sample MERGE statement above:


I have omitted several Compute Scalar iterators to make it easier to see what’s going on.  There’s a scan of the @Delta table, followed by an Insert to the @Target table, and finally a Merge into the @Target table.  If you are familiar with the sort of plans MERGE normally produces, you might find that arrangement a bit odd.  Even if MERGE is new to you, you might think that an Insert followed by a Merge is surprising.



In a normal MERGE, we would expect to see an outer join between the source and target tables, followed by a Merge:


SQL Server performs an outer join so the Clustered Index Merge knows whether the current row already exists in the target table or not.  If it exists, it is updated, otherwise a new row is inserted.

If you’ve been following my recent series on optimiser internals, you might suspect that the optimiser has applied a transformation to get the join-less plan, and the two plans are somehow logically identical.  You would be right – the rule in question is LOJPrjGetToApply (Left Outer Join, Project Get to Apply).

The Optimisation

The idea behind the optimisation is to read a row from the @Delta table and immediately try to insert it into the @Target table.  If that succeeds, the row didn’t already exist, and no further work is required.

If the row did exist, the PRIMARY KEY on @Target will cause a duplicate key violation to occur.  The relational engine suppresses that error, and execution continues down to the Clustered Index Merge iterator, which performs the necessary UPDATE.
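Conceptually, the transformed plan behaves a little like the following hand-written sketch.  This is a rough illustration of the idea only – the engine suppresses the duplicate key error internally for each row, rather than using TRY/CATCH – with @row_id and @delta standing in for the values of the current @Delta row:

```sql
-- For each row read from @Delta:
BEGIN TRY
    -- Optimistically insert; succeeds if the row is new
    INSERT @Target (row_id, value)
    VALUES (@row_id, @delta);
END TRY
BEGIN CATCH
    -- Duplicate key violation: the row already exists,
    -- so apply the delta as an UPDATE instead
    UPDATE @Target
    SET    value += @delta
    WHERE  row_id = @row_id;
END CATCH;
```

This is why the join disappears from the plan: the insert attempt itself answers the question the outer join would otherwise have to ask.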

This is a very useful (and perfectly safe) query optimisation if the conditions are right.  There are rather a lot of conditions, the most important of which are:

  1. The target of the merge must have a suitable clustered unique or primary key constraint
  2. The changes (delta) table must have some sort of unique constraint (can be non-clustered) on the join column
  3. The plan must already be using a nested loops join
  4. The MERGE must include a NOT MATCHED condition plus at least one MATCHED clause
  5. Equivalent columns in the two tables must have exactly the same type – no implicit conversions allowed
  6. There can be no chance of a constraint violation – including NOT NULL constraints
  7. The query must not update the clustered key

If any of the required conditions do not apply, the optimisation is not performed, and the query plan retains the outer join.

In order for the MERGE to return incorrect results, all of the following must be true:

  1. The MERGE target must be a table variable (the delta table can be anything)
  2. The optimiser must produce a final plan that includes the transformation described above
  3. The MERGE statement must include an OUTPUT clause that references the inserted pseudo table
  4. The query has to perform an INSERT row action after an UPDATE (in clustered index order)

Importantly, it is only rows that result in an INSERT that cause problems.  If only UPDATE or DELETE operations result from the MERGE, everything works perfectly – even with a table variable as the target.

Producing an Access Violation

The example data given for the @Delta table was carefully constructed to ensure that incorrect results were returned.  With different sample data, we can instead get an Access Violation (AV), which terminates the connection and produces a stack dump on the SQL Server.

The simplest way (based on the test rig above) is to change the @Delta table contents to just include a single row that will result in an INSERT:

INSERT  @Delta
(row_id, delta)
VALUES (2, 2);

There is no row in @Target with row_id = 2, so this results in an INSERT.

To expand our test rig to encompass DELETE operations, we need to add an extra condition to the original MERGE statement.  The new code will delete a record if its value is zero after applying the deltas:

MERGE   @Target T
USING   @Delta D ON D.row_id = T.row_id
WHEN MATCHED AND T.value = -D.delta THEN    -- New
    DELETE
WHEN MATCHED THEN
    UPDATE SET T.value += D.delta
WHEN NOT MATCHED THEN
    INSERT (row_id, value)
    VALUES (D.row_id, D.delta)
OUTPUT  $action,
        D.row_id,
        DELETED.value AS old_value,
        INSERTED.value AS new_value;

With that modification in place, the following values in the @Delta table will update the row with row_id = 1, and delete the one with row_id = 3:

INSERT  @Delta
(row_id, delta)
VALUES (1, 10),
(3, -3);

This combination works correctly because there is no INSERT operation.  Any number of UPDATEs and DELETEs can be performed without issue, so long as no INSERTs occur.


Workarounds

The easiest workarounds involve preventing the query optimisation from happening in the first place.  This is quite easy, since there are so many conditions required for it to apply.  One way is to prevent the plan from using a nested loops join with an OPTION (HASH JOIN, MERGE JOIN) hint.  There are many other alternatives in the same vein, of course.
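For example, appending the join hint to the original statement is enough to keep the outer join in the plan (a sketch based on the sample tables above):

```sql
MERGE   @Target T
USING   @Delta D ON D.row_id = T.row_id
WHEN MATCHED THEN
    UPDATE SET T.value += D.delta
WHEN NOT MATCHED THEN
    INSERT (row_id, value)
    VALUES (D.row_id, D.delta)
OUTPUT  $action,
        D.row_id,
        DELETED.value AS old_value,
        INSERTED.value AS new_value
OPTION  (HASH JOIN, MERGE JOIN);    -- rules out nested loops
```

With nested loops unavailable, condition 3 above fails and the LOJPrjGetToApply transformation cannot be applied.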

A second workaround is to apply Trace Flag 8758 – unfortunately this disables a number of optimisations, not just the one above, so it’s not really recommended for long term use.
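If you do want to experiment with the trace flag, it can be enabled and disabled with the standard DBCC syntax (on a test system only, please):

```sql
DBCC TRACEON (8758);    -- disable the affected optimisations
-- ... run the MERGE here ...
DBCC TRACEOFF (8758);
```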

Third, we could turn off the LOJPrjGetToApply rule (again, see my previous optimiser posts) but that could also negatively affect other plans that benefit from the optimisation.
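For completeness, disabling the rule for a single statement looks like this – note that QUERYRULEOFF is an undocumented, unsupported hint covered in those earlier posts, so treat this purely as an experiment:

```sql
-- Undocumented: prevents just the LOJPrjGetToApply transformation
-- for this statement only
OPTION (QUERYRULEOFF LOJPrjGetToApply);
```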

Final Thoughts

This isn’t a bug that will affect everyone, but it does show that your chances of turning up a bug increase as you push the limits of the optimiser and fairly new features like MERGE.

It will be interesting to see how the Connect item goes: will Microsoft fix this, or will they just say that MERGE was never intended for use with table variables?  We’ll have to wait and see.

My thanks to ‘ALZDBA’ for reporting this problem.  Personally, I see the prospect of returning incorrect data from the OUTPUT clause as more serious than the access violation.  The OUTPUT clause is often used to record audit information, so producing incorrect data with no error seems undesirable.  I would encourage you all to form your own view on this bug, and vote on it accordingly.

© Paul White
twitter: @SQL_Kiwi

Published Wednesday, August 04, 2010 7:30 AM by Paul White




ALZDBA said:

Thanks Paul.

Very, very nice plan and bug analysis.

August 4, 2010 5:36 AM

Paul White said:

You're very welcome :)

August 4, 2010 2:16 PM

JOO said:

Thanks ~

April 7, 2014 5:15 AM
