THE SQL Server Blog Spot on the Web


Joe Chang

Join and Query hints – serious DBAs need to learn this

For a long time, I have refrained from endorsing the use of join and query hints. The reason is that there are no such things as query hints in SQL Server, except for a few simple items. The key options listed under Microsoft SQL Server Hints (Transact-SQL) that affect join order, join type, and index usage are not really hints.

They are in fact directives: the specified option is employed if possible. But join order optimization is essentially disabled, and the execution plan joins tables in the order written, the exception being EXISTS and NOT EXISTS clauses. It is as if the query optimizer is hyper-sensitive and its feelings are hurt when someone impudently suggests a hint. So if a hint is specified, good luck to you. You are on your own.

Microsoft documentation is obviously very thorough on this matter with the following:
“Because the SQL Server query optimizer typically selects the best execution plan for a query, we recommend only using hints as a last resort for experienced developers and database administrators.”

But the fact is that there are complicated queries for which it is difficult to estimate the number of rows, especially past the source tables into the intermediate steps. The query optimizer does not produce a good plan, because the row estimate is grossly erroneous. This is aggravated by the fact that the plan costs employed by the query optimizer reflect a fixed IOPS model based around four 1995-era 7.2K hard disks, with no consideration for the actual capability of the storage system or whether the data is entirely in memory.

So there are times when it is necessary to override the query optimizer. I will not go into the details of query optimization here. Back around 2001, I reverse engineered the execution plan cost formulas for many of the fundamental operations without bothering to explain the operations or other aspects of query optimization. Since then, Paul White, Benjamin Nevarez, and others have written very detailed papers on query optimization. Roughly, the strategy is a balance of 1) table access efficiency, 2) join efficiency, and 3) reducing rows as quickly as possible.

Because join order optimization is disabled when a join or query hint is specified, the key is to write the query to produce a specific join order. There is a method of doing so. See Inside Microsoft SQL Server 2008: T-SQL Querying by Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, and Steve Kass on the topic of bushy plans. Once the correct join sequence is found, most of the join types might already be good; perhaps a few join hints might be necessary. Join and query hints do not cause the query optimizer to disable index optimization, so index hints should not be needed as frequently.
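To illustrate the idea (a hedged sketch with hypothetical tables A through D and join columns, not from any specific schema), the bushy style uses parenthesized join groups so that, together with FORCE ORDER, two intermediate results are joined to each other instead of following a strict left-deep chain:

```sql
-- Hypothetical tables A, B, C, D: a bushy plan joins the (A-B)
-- intermediate result to the (C-D) intermediate result, rather
-- than the left-deep sequence A-B, then C, then D.
SELECT *
FROM (TableA A
      INNER JOIN TableB B ON B.Aid = A.Aid)
     INNER JOIN
     (TableC C
      INNER JOIN TableD D ON D.Cid = C.Cid)
     ON C.Bid = B.Bid
OPTION (FORCE ORDER);  -- join in the shape and order as written
```

Without the parentheses, FORCE ORDER would produce the left-deep chain; the parentheses are what make the plan bushy.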

This is a much deeper topic than I can discuss at this point in time. So why am I bringing it up? Today there was a news report on the cause of the AirAsia Flight 8501 crash last December. For details, see the CNN Air Asia crash report or Wikipedia.

In brief, there was an equipment malfunction that put the airplane outside of the normal flight envelope. It should have been possible for the flight crew to recover, but they did not do so.

It further turns out: “because the manual provided by the plane's manufacturer said the aircraft, an Airbus 320, was designed to prevent it from becoming upset and therefore upset recovery training was unnecessary.”

In our world, we are told the query optimizer “typically” is best, hence we should not have to do manual query optimization, hence we do not need to learn this. So when we do need to do manual query tuning, we do not know how because we did not learn this.

This is BULLSHIT! I don’t give a damn how good the query optimizer is “typically.” A serious principal DBA needs to learn how to do query optimization so that he/she can do so when the situation calls for it. The starting point is the bushy join style in the Itzik Ben-Gan book.

Here is the link to my previous post on bushy joins; notice that at the time I put caveats on it: Bushy Joins
I am not saying go for broke on join/query hints, just that we need to learn to shape the join sequence. Rob Farley had a great note in the comments. I had seen that query style before, in TPC benchmarks, without understanding at the time what the writer was smoking.

Per Claus's comment linking to Dynamic Search Conditions in T-SQL, I especially like the strategy below: hard-coded queries for when good parameters are specified, then something else to handle the unusual cases, be it OPTION (RECOMPILE) or dynamically built parameterized SQL.

IF @orderid IS NOT NULL
   ... WHERE O.OrderID = @orderid
         AND -- Conditions on Order Details here.
ELSE
   ... WHERE -- same conditions as before
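A fuller sketch of that hybrid (hypothetical Orders table, columns, and parameters — illustrative only, not from the linked article): a hard-coded fast path for the common, highly selective parameter, and a catch-all with OPTION (RECOMPILE) for everything else.

```sql
-- Hypothetical procedure: stable reusable plan for the common case,
-- per-call recompile only for the unusual parameter combinations.
CREATE PROCEDURE dbo.SearchOrders
    @orderid  int      = NULL,
    @custid   int      = NULL,
    @fromdate datetime = NULL
AS
IF @orderid IS NOT NULL
    -- common, highly selective case: one good plan, reused
    SELECT O.OrderID, O.CustomerID, O.OrderDate
    FROM dbo.Orders O
    WHERE O.OrderID = @orderid;
ELSE
    -- catch-all: recompile per call so the optimizer sees the
    -- actual parameter values and can prune the NULL branches
    SELECT O.OrderID, O.CustomerID, O.OrderDate
    FROM dbo.Orders O
    WHERE (@custid IS NULL OR O.CustomerID = @custid)
      AND (@fromdate IS NULL OR O.OrderDate >= @fromdate)
    OPTION (RECOMPILE);
```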

Published Tuesday, December 1, 2015 8:11 PM by jchang




Dave Wentzel said:

Joe, first, I love your posts.  I have never not learned something.  

You are absolutely right that learning how and when to properly use "advanced" query optimization tools is what separates the "senior" from the "junior" DBA.  I only ever needed to use a join hint once and I'm (mostly) sure I exhausted all other avenues before I resorted to doing it.  My colleagues still gave the usual criticisms and quotes from BOL to not do this.  Eventually I got my way with good results.  

Conversely, when MS released OPTIMIZE FOR UNKNOWN my colleagues decided to use this everywhere where symptoms pointed to a stats and parameterization problem.  Our app, an enterprise finance app, heavily used lots of "optional" parameters.  While OPTIMIZE FOR UNKNOWN marginally (and only occasionally) helped performance for larger customers, we found that smaller customers' performance got worse.  Basically different usage patterns.  All the while I was saying that we should LEARN (read KNOW) what to optimize for, why settle for UNKNOWN?  Soon after bloggers began reporting similar findings leading many to make pithy comments like "OPTIMIZE FOR UNKNOWN" is optimizing for mediocrity.  Point is...this is another case where if these "senior" people would've learned query optimization, or how OPTIMIZE FOR really works, maybe they would've realized that it isn't a panacea and maybe their query optimization journey wasn't yet complete.  There is no substitute for really understanding your database manager.  

December 1, 2015 8:44 PM

mark said:

I wish the join type hints would not force order. This is such an annoying property and destroys a lot of value. Hints should be surgical and allow for a lot of optimization freedom still.

For star schema plans forcing order usually results in the worst possible join order with the fact table becoming the build input to many hash joins. Terrible.
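A sketch of the problem (hypothetical star schema; table and column names are illustrative): because any join hint forces the order as written, writing the fact table first makes it the build input to each hash join, when the small dimension tables should be the build inputs.

```sql
-- Hypothetical star schema: Fact, DimDate, DimProduct.
-- The HASH hint forces the written join order, so the large Fact
-- table becomes the build input to both hash joins -- usually the
-- worst possible choice, as noted above.
SELECT D.CalendarYear, P.Category, SUM(F.Amount) AS Amount
FROM Fact F
INNER HASH JOIN DimDate    D ON D.DateKey    = F.DateKey
INNER HASH JOIN DimProduct P ON P.ProductKey = F.ProductKey
GROUP BY D.CalendarYear, P.Category;
```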

December 2, 2015 5:43 AM

jchang said:


Multiple optional search parameters: any developer who writes something like the following for important search arguments

WHERE ( @P1 IS NULL OR col1 = @P1)

AND (@P2 IS NULL OR col2 = @P2)

should have his/her database license revoked immediately. Transfer said person to customer support or the help desk, better suited to his/her skills.


On stored proc compile parameters and OPTIMIZE for.

Why is it that the SQL Server decision makers are so pig headed? (yes this makes me many friends)

Just how hard is it to get it through your thick skull? or is there rock past the skull?

In the real world, not stupid benchmarks, what is the right strategy on a critical compile parameter?

I suggest allowing the developer to provide 3 parameter values, presumably for likely low, middle and high distributions.

(If the predominant high-distribution value is never called, then the WHERE clause should have AND col1 <> 'high val')

Generate the plan for each, which should be the optimal plan for that distribution. Then evaluate how the plan for dist A would work for dist B and C, and the same for the plans based on dist B and C.

If there is not one plan that is good for all, then this proc needs more than 1 plan.

I made a proposal on a strategy for implementing this.

Some idiot at SQL Server Connect came back with: just use WITH RECOMPILE

so I don't bother talking to those rocks anymore.


We could wish that the query optimizer would accept a "hint" and work with the hint instead of coming back with "screw u"

Fundamentally, the heart of the matter is that most likely there was a very large row estimate error at some point. No matter what hint we provide, it will find a stupid plan because it is working with a seriously bad row estimate.

I once applied a HASH JOIN hint because the estimate was low and I knew it was high. The plan did the hash join, scanning a non clustered index for the join condition, then did a key lookup for the additional columns.
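For reference (a hedged sketch with hypothetical tables and columns), a join hint is applied per join with this syntax; as described in the post, specifying it also forces the written join order for the whole query:

```sql
-- Hypothetical: forcing a hash join when the optimizer's low row
-- estimate would otherwise pick a nested loops join.
-- Note: any join hint implicitly forces the join order as written.
SELECT O.OrderID, L.ProductID, L.Quantity
FROM Orders O
INNER HASH JOIN LineItems L ON L.OrderID = O.OrderID
WHERE O.OrderDate >= '20150101';
```

If the hash side is scanned through a non-covering nonclustered index, the key lookups described above can still dominate the cost.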

So I think the hint capability that we need is on the number of rows, not just at the table source, but also at some intermediate step/operation. I submitted this, and MS came back with OPTIMIZE FOR. No, I do not want to specify the compile parameter value, I want to hint the rows. F&CK!NG rock

Another possibility is that our storage system has very different random-to-sequential IO characteristics than the one the SQL Server query optimizer assumes (320 IOPS vs 10.8MB/s). For example, the FTDW reference system would do 200 IOPS vs 100MB/s per disk, so we should shift to a scan sooner. But if data is in memory, then we should stick with the loop join/key lookup longer.

Finally, the formulas SQL Server uses for parallelism are just plain bad. Why are we paying per-core licensing when SQL Server is too stupid to use the cores correctly?

Perhaps I should have been a diplomat?

I might put together a course on Join and Query Hints, which would have heavy emphasis on writing the SQL to force a specific join sequence, i.e., bushy join.

The problem with expert level classes is that many beginners come, then write bad reviews, so I don't get invited back.

December 2, 2015 9:54 AM

Michael said:

From the post above...

Multiple optional search parameters.

WHERE ( @P1 IS NULL OR col1 = @P1)

AND (@P2 IS NULL OR col2 = @P2)

If this is bad, what is the preferred method of dealing with the problem?

December 2, 2015 10:30 AM

jchang said:

Michael, let me stress that the above is so bad that a person who persists in writing it must be transferred to another job.

One of two options:

An IF / ELSE IF block works best when there are few optional parameters, and even better if only one parameter is NOT NULL



IF @spid IS NOT NULL

SELECT col list FROM SQLLighthouse.DDL_Events

WHERE spid = @spid

ELSE IF @options IS NOT NULL

SELECT col list FROM SQLLighthouse.DDL_Events

WHERE options = @options


When any number of parameters can be specified, then dynamically built parameterized SQL is best:


DECLARE @SQL nvarchar(4000), @Parm nvarchar(4000) = N'@spid int, @options int, @client_net_address varchar(15)'

SELECT @SQL = N'SELECT col list FROM SQLLighthouse.DDL_Events WHERE (1=1) '

IF @spid IS NOT NULL

SELECT @SQL = @SQL + N' AND spid = @spid '

IF @options IS NOT NULL

SELECT @SQL = @SQL + N' AND options = @options '

IF @client_net_address IS NOT NULL

SELECT @SQL = @SQL + N' AND client_net_address = @client_net_address '

--PRINT @SQL -- yes I do debugging

exec sp_executesql @SQL, @Parm, @spid, @options, @client_net_address

Personally, I prefer a hybrid: if only the first parameter is specified,

SELECT col list FROM TableA WHERE col1 = @P1

otherwise, the dynamically built parameterized SQL.

The reason for this is that I want sys.sql_expression_dependencies queries to discover the table references in each proc.

Presumably, an unfiltered search of a large table is not allowed.

December 2, 2015 11:13 AM

Wayne said:


That type of syntax is known as a "catch-all query". Gail Shaw (MCM/MVP) has an excellent blog post dealing with this at

December 2, 2015 12:55 PM

a.m. said:

Sorry, Joe, but your last comment is way off base.  For all but the highest throughput systems, OPTION (RECOMPILE) against a static query is by far the superior option.

The rest of your points are more or less spot on.


December 2, 2015 1:12 PM

jchang said:

Adam, if we have a stored proc that would

1) benefit significantly from a handful of different execution plans

2) for which the execution time is much less than the compile time of each plan,

3) and we have a means of distinguishing by parameter value or something which plan is correct,

then my assertion is correct

If this sounds far-fetched in having too many conditions, I could probably dig up an example.

the use of WITH RECOMPILE applies when the compile time is small compared to the execution time

December 2, 2015 1:59 PM

jchang said:

the example that comes to mind is something like this:

SELECT ...

FROM A

JOIN B ON B.Aid = A.Aid

JOIN C ON C.Cid = B.Cid

WHERE B.<name column> LIKE @P1 -- of form 'Tom%'

AND C.SiteId = @P2
There are indexes to support a query plan in either direction.

Sometimes the search argument on the LIKE is selective;

sometimes the one on C.SiteId is selective.

There was something else in the query that made a recompile excessively expensive but ultimately did not significantly impact the plan.

So which plan do you go for? Either plan is good sometimes and not others, and WITH RECOMPILE always sucks.

I had an IF block, one branch forcing the join order in each direction, with the criteria favoring the more selective argument.

I could have also made 2 sub-procs, each called depending on the arguments, so that the compile parameters of the first favor the first direction and the parameters of the second favor the second direction.

Technically, this problem should have been solved with an indexed view, but we could not track down all the INSERTs to apply the correct SET options.
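For context (a hedged sketch using the hypothetical tables from the query above), an indexed view pre-joins the tables so neither join direction has to be chosen at run time. It requires SCHEMABINDING and specific SET options on every session that modifies the base tables, which is exactly the tracking problem mentioned above.

```sql
-- Hypothetical indexed view over the B/C join. All sessions that
-- INSERT/UPDATE the base tables must have the required SET options
-- (QUOTED_IDENTIFIER, ARITHABORT, etc.) correct, or the DML fails.
CREATE VIEW dbo.BC_Joined
WITH SCHEMABINDING
AS
SELECT B.Aid, B.Cid, B.Name, C.SiteId
FROM dbo.B
JOIN dbo.C ON C.Cid = B.Cid;
GO
-- the unique clustered index is what materializes the view;
-- assumes (Aid, Cid) uniquely identifies a row in the join
CREATE UNIQUE CLUSTERED INDEX cix_BC_Joined
ON dbo.BC_Joined (Aid, Cid);
```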

December 2, 2015 3:09 PM

a.m. said:


WITH RECOMPILE and OPTION (RECOMPILE) are entirely different things with entirely different behaviors. You need to play with the latter before replying to this thread again.


December 2, 2015 3:48 PM

jchang said:

OK, I have a big complex stored proc; WITH RECOMPILE at the proc level will really suck.

Statement-level OPTION(RECOMPILE) recompiles only the statement, which could be much smaller than the whole procedure, depending on the situation.

So the situation still applies:

if the query execution time is expected to be less than the statement recompile time, you would like to reuse the plan(s);

otherwise, you can do the statement recompile.

It is still stupid to think OPTION(RECOMPILE) solves all problems

December 2, 2015 4:08 PM

a.m. said:

You didn't take my advice to try it before typing your reply, I guess. That's a shame.

OPTION(RECOMPILE) does a lot more than WITH RECOMPILE. Does it solve all of the problems? Nope. Does it solve this one? *YES*, and quite well.


use tempdb
go

create table t (i int not null, j int not null)
create index i on t (i)
create index j on t (j)

insert t
select distinct number, number from master..spt_values
go

-- proc-level WITH RECOMPILE: recompiles the whole proc every call
create proc get_t_1
@i int = null,
@j int = null
with recompile
as
select *
from t
where (@i is null or i = @i)
and (@j is null or j = @j)
go

-- statement-level OPTION (RECOMPILE): optimizer sees the actual
-- parameter values and prunes the NULL branches
create proc get_t_2
@i int = null,
@j int = null
as
select *
from t
where (@i is null or i = @i)
and (@j is null or j = @j)
option (recompile)
go

-- compare the actual plans: get_t_1 scans, get_t_2 seeks on index i
exec get_t_1 @i = 10
exec get_t_2 @i = 10
go

drop proc get_t_2
drop proc get_t_1
drop table t
December 2, 2015 4:49 PM

jchang said:

You are only claiming that OPTION(RECOMPILE) generates the correct plan.

My assertion is that with any degree of query complexity, where there are highly selective search arguments,

the recompile will cost more than the execute,

regardless of whether OPTION(RECOMPILE) produces the correct plan

December 2, 2015 7:55 PM

Dave Wentzel said:

"...any developer who writes something like the following...revoke his/her database license immediately. Transfer said person to customer support or..."

Ugh.  That's difficult for me to read considering you just revoked my license, but the fact is, you are right.  In my defense, I only advocate(d) doing that because for some reason giant IF blocks and near-identical copy/paste code never make it past the code review board, which is stacked with Java Jocks that can't write sql (or maybe their sql keeps me employed fixing their garbage?).  Their reply is to use TVFs...ugh.  That's what I'm up against.  So then if I use "hybrid" dynamic sql I have to hear comments about sql injection and "dynamic sql is too hard for us to maintain and understand."  No win situation.

Perhaps you'd consider giving me my license back if I tell you that I also did the "small/medium/large" trick and it WORKED...but that was a grueling code review too.

I'm certainly not an expert in the OPTION(RECOMPILE)/WITH RECOMPILE arena, and I have ZERO metrics/repro scripts to back this up...but anecdotally I can say that I've had *better* success with the former than the latter...but again, our procs were notorious for having 30 "optional parameters" certainly those anti-patterns should've been fixed before resorting to recompile tricks anyway, admittedly.  

My thanks to both of you for your contributions to the community.  

December 2, 2015 8:49 PM

jchang said:

OPTION (RECOMPILE) does work for the multiple optional SARGs, or the catch-all query per Gail. I tested it on 2008R2 SP3. Statement recompile was introduced in 2005, but I am not sure when it started working for this issue. I recall hearing people discuss this, and that there were issues in the beginning.

If you already have code like this, this is the easiest fix, but I am not happy about it.

I was called into a project for the US Army in 2007. It was SQL Server 2000, because 2005 was in late CTP when they started and they had followed the stupid advice of waiting for SP1. They did not consider that 2005 RTM + SP1 would be out by 2007. Perhaps $200M was spent on it, and it was weeks from being cancelled for poor performance (of course the Army had a couple of projects that were supposed to cost $1B that went way late, so a mere $200M project failing would have been no big deal).

There was no time to do a comprehensive review.

I started fixing anything that looked broken. On the fourth day, I came across one that had the multiple optional SARG pattern.

I put my fix in; the developer did not want my fix and said he would put in his own. So I worked on. On the 9th day, CPU was way down, but the critical path did not run any faster.

The project manager remarked that I thought I could just waltz in and wave away problems. This really aggravated me. I took another look; sure enough, that stupid SQL was still there with some other irrelevant change. There was one other piece of bad SQL: 2 functions each referenced twice in the SQL for each row, with sub-functions inside. I reduced this to 2 function calls per row, period. So on the 10th day, it was clear that the core problem was resolved.

The problem should have been solved by the sixth day, and on the 7th I would rest (so I am big on biblical analogies).

Now, no developers are permitted to write such SQL in my presence.

December 2, 2015 10:20 PM

a.m. said:


I think you're looking at things from the wrong angle. The recompile plus the good execute will almost always cost *significantly* less than the *bad execute* would have cost. It's almost never possible to code around every possible set of input permutations. That's why we have a query optimizer.

Recompiles are extremely cheap compared to bad plans, which is why I said that OPTION (RECOMPILE) is the best solution in all but the highest throughput systems. In those 1-2% type scenarios, yeah, the compiles really add up. But for systems pushing hundreds or low thousands of batches/sec, a few extra recompiles usually don't matter.

Generally these kinds of options are only needed on specific queries, not everything. For those cases, better to have a guarantee of a pretty good plan rather than have to fight fires when plans go to crap because you haven't coded around things properly in your quest for ultimate plan reuse.

It's all about balance. And I absolutely do not give you authority to revoke my database license! :-)


December 2, 2015 10:27 PM

jchang said:

If you want to say that when the recompile cost is low, use OPTION(RECOMPILE) because it is simplest, then that's fine. I do not agree that this works best in all but the highest throughput systems, the 1-2%. I have seen compile issues sink what I consider middle-high systems, perhaps in the 10% range. I recall one site where they were at 2000 compiles/recompiles per sec, and yeah, they had mysterious stoppages when the dead plan cleaner kicked in.

Besides, if you let developers follow simple rules without understanding the true reason, what happens when they move up from mom-pop to enterprise? it is hard to unlearn bad habits.

I never said to execute a bad plan. I said to do the dynamically built parameterized SQL, which does have some overhead, perhaps more than a trivial recompile, but definitely less than a complex recompile. Also, no matter how complicated, you should be able to figure out the most common cases, have a preset good plan for each, then let the unhandled cases be done with OPTION(RECOMPILE).

All of this is off point of the post, which is the need to learn how to code to a specific join sequence, that coupled with understanding the formulas used by the query optimizer, will let you use join/query hints effectively. None of this can be done without precise knowledge, certainly not by blindly following simplified rules that obscure the true foundation. In other words, what to do when the query optimizer cannot do the job (for any number of reasons)

December 3, 2015 9:47 AM

Cartman said:

December 3, 2015 1:40 PM

Claus said:

This subject is treated well by Erland Sommarskog here

December 4, 2015 2:59 PM

jchang said:

(Santa?) Claus: Sommarskog is a stud!

4.8 When OPTION (RECOMPILE) Hurts You

and people say I am over-analytical; heck, I'm a hip shooter by comparison.

Cartman: nice,

December 4, 2015 6:20 PM

SnowwalkR said:

I agree with Cartman, and not only because I respect his Authoritah!

Nice thread. :)

December 7, 2015 5:21 AM

Mike Lawell said:

Great blog and Excellent repartee! Too fun!

December 7, 2015 5:37 PM


George Walkey said:

"the recompile will cost more than the execute,"

Is that the case in this workload?

from Plan Cache:

Sort Queries by #Executions, #Compiles, #Recompiles, Duration

I recently went dumpster diving for performance on my top 20 CPU-killing queries and found the machine as a whole wasn't even breaking 1%

Sheesh on me

December 8, 2015 12:04 PM



About jchang

Reverse engineering the SQL Server Cost Based Optimizer (Query Optimizer), NUMA System Architecture, performance tools developer - SQL ExecStats, mucking with the data distribution statistics histogram - decoding STATS_STREAM, Parallel Execution plans, microprocessors, SSD, HDD, SAN, storage performance, performance modeling and prediction, database architecture, SQL Server engine
