
Michael Zilberstein

Query performance troubleshooting in SQL Server 2008: query_hash and query_plan_hash

Recently I noticed 2 new columns added to the sys.dm_exec_query_stats and sys.dm_exec_requests DMVs in SQL Server 2008: query_hash and query_plan_hash. Those columns can greatly enhance the performance monitoring process. In SQL 2005, the main query I use for query performance monitoring is:

SELECT TOP 10
    qs.execution_count,
    (qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes) AS [Total IO],
    (qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count AS [Avg IO],
    SUBSTRING(qt.[text], qs.statement_start_offset/2,
        (CASE
            WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
            ELSE qs.statement_end_offset
         END - qs.statement_start_offset)/2
    ) AS query_text,
    tp.query_plan
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
    OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) AS tp
ORDER BY [Avg IO] DESC

It returns the top 10 heaviest queries by average IO that exist in cache. I won't go deep into the plan cache memory pressure conditions that can force a query plan out of cache, thus preventing its detection. Let's just say that the query works in 99% of cases, which is good enough; memory pressure can be detected by other queries.

Which queries are candidates for tuning? First, those with the highest total IO. Second, queries with the highest average IO that pass a certain minimum-executions threshold (we usually won't tune a query, even the heaviest one, that runs once a month in some offline batch). As a side note - you're probably asking why I ignore CPU counters like total_worker_time. The reason is simple: in sys.dm_exec_query_stats this counter is unreliable - it shows incorrect numbers for parallel executions.
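The second criterion can be sketched as a small variation on the query above; the minimum-executions threshold of 100 below is an arbitrary assumption - pick whatever fits your workload:

```sql
-- Sketch only: the execution-count threshold (100) is an arbitrary assumption.
SELECT TOP 10
    qs.execution_count,
    (qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes) AS [Total IO],
    (qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes)
        / qs.execution_count AS [Avg IO]
FROM sys.dm_exec_query_stats AS qs
WHERE qs.execution_count >= 100      -- skip rarely-run queries
ORDER BY [Avg IO] DESC;
```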

So, Houston, do we have a problem here? Unfortunately, we do - when the application that works with the database doesn't make proper use of parameterization (and we don't want to force parameterization via a database-level setting). In such a case we'll see lots of similar queries with 1 or 2 in execution_count and different values of should-be-parameters in query_text. We can miss such queries, either because no single one is heavy enough to be of interest, or because many of them aren't in cache anymore, pushed out by newer queries - even of the same type. This is an especially realistic scenario on 32-bit systems, where the entire non-data cache is limited to 1GB of space.

What are our options with poorly parameterized queries? We can use the CLR user-defined function provided by Itzik Ben-Gan in his book "Inside SQL Server 2005: T-SQL Querying", which uses regular expressions to parameterize query text (this function is widely used for Profiler trace analysis). Query text can be passed through the function and used as a grouping column. But even if we don't count the performance and CPU price of grouping by a text column, I know several organizations that just won't let you create your own objects in their database.
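For illustration only, the grouping would look roughly like the sketch below. The name and signature dbo.RegexReplace are hypothetical stand-ins for the deployed CLR UDF, and the pattern is a simplified one that masks string and numeric literals:

```sql
-- dbo.RegexReplace(input, pattern, replacement) is a hypothetical CLR UDF;
-- replacing literals with a placeholder lets otherwise-identical queries
-- group together despite different parameter values.
SELECT
    dbo.RegexReplace(qt.[text], N'(''[^'']*'')|(\d+)', N'?') AS normalized_text,
    SUM(qs.execution_count) AS executions,
    SUM(qs.total_logical_reads + qs.total_physical_reads + qs.total_logical_writes) AS [Total IO]
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
GROUP BY dbo.RegexReplace(qt.[text], N'(''[^'']*'')|(\d+)', N'?')
ORDER BY [Total IO] DESC;
```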

Taking all of the above into account, I was delighted to find out that in SQL 2008 Microsoft added the query_hash and query_plan_hash columns to the sys.dm_exec_query_stats DMV. query_hash is the same for queries with similar logic; query_plan_hash is the same for queries with a similar execution plan. And what's the difference? For a column with uneven data distribution, the execution plan can differ depending on the parameter value. If one value appears in 90% of a table's rows, the Optimizer will surely choose a scan. For another value that accounts for 0.1% of the rows, the Optimizer will prefer an index seek with a key or RID lookup, depending on the table's structure. For those two queries we'll see the same query_hash but different query_plan_hash values.
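A minimal way to see this for yourself - assuming a hypothetical table dbo.Orders with a skewed [Status] column and a nonclustered index on it (both the table and the data skew are my assumptions, for illustration):

```sql
-- Hypothetical schema: dbo.Orders([Status]) with a nonclustered index,
-- where 'Closed' covers ~90% of the rows and 'Error' ~0.1%.
SELECT * FROM dbo.Orders WHERE [Status] = 'Closed';  -- likely a scan
SELECT * FROM dbo.Orders WHERE [Status] = 'Error';   -- likely a seek + lookup

-- Both statements share a query_hash; if the Optimizer chose different
-- plans, their query_plan_hash values will differ.
SELECT qs.query_hash, qs.query_plan_hash, qs.execution_count
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
WHERE qt.[text] LIKE '%dbo.Orders%';
```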

So the new SQL 2008 version of the query is:

;WITH CTE (TotalExecutions, [Total IO], [Avg IO], StatementTextForExample, plan_handle, QueryHash, QueryPlanHash)
AS
(
    SELECT TOP 10
        SUM(execution_count) AS TotalExecutions,
        SUM(total_physical_reads + total_logical_reads + total_logical_writes) AS [Total IO],
        SUM(total_physical_reads + total_logical_reads + total_logical_writes) / SUM(execution_count) AS [Avg IO],
        MIN(query_text) AS StatementTextForExample,
        MIN(plan_handle) AS plan_handle,
        query_hash AS QueryHash,
        query_plan_hash AS QueryPlanHash
    FROM
    (
        SELECT
            qs.*,
            SUBSTRING(qt.[text], qs.statement_start_offset/2,
                (CASE
                    WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
                    ELSE qs.statement_end_offset
                 END - qs.statement_start_offset)/2
            ) AS query_text
        FROM sys.dm_exec_query_stats AS qs
            CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
        WHERE qt.[text] NOT LIKE '%sys.dm_exec_query_stats%'
    ) AS query_stats
    GROUP BY query_hash, query_plan_hash
    ORDER BY [Avg IO] DESC
)
SELECT
    TotalExecutions, [Total IO], [Avg IO],
    StatementTextForExample,
    tp.query_plan AS StatementPlan,
    QueryHash, QueryPlanHash
FROM CTE
    OUTER APPLY sys.dm_exec_query_plan(plan_handle) AS tp
ORDER BY [Avg IO] DESC

Another great use of the new columns is to build a repository and monitor execution plan changes over time, i.e. changes in query_plan_hash for queries with the same query_hash value. In SQL 2005 such monitoring was pretty complicated and required on-the-fly parameterization; in SQL 2008 it looks pretty straightforward.
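A minimal repository sketch - the table and column names below are my own assumptions, and the INSERT would run from a periodic job:

```sql
-- Assumed repository table; query_hash/query_plan_hash are binary(8)
-- in sys.dm_exec_query_stats.
CREATE TABLE dbo.QueryStatsHistory
(
    capture_time    DATETIME  NOT NULL DEFAULT (GETDATE()),
    query_hash      BINARY(8) NOT NULL,
    query_plan_hash BINARY(8) NOT NULL,
    execution_count BIGINT    NOT NULL
);

-- Periodic snapshot of what is currently in cache.
INSERT INTO dbo.QueryStatsHistory (query_hash, query_plan_hash, execution_count)
SELECT query_hash, query_plan_hash, SUM(execution_count)
FROM sys.dm_exec_query_stats
GROUP BY query_hash, query_plan_hash;

-- Queries whose plan changed over time: one query_hash, several plan hashes.
SELECT query_hash, COUNT(DISTINCT query_plan_hash) AS distinct_plans
FROM dbo.QueryStatsHistory
GROUP BY query_hash
HAVING COUNT(DISTINCT query_plan_hash) > 1;
```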

Published Tuesday, September 09, 2008 6:52 PM by Michael Zilberstein




Leo Pasta said:


Indeed those two columns are an excellent addition to any DBA's toolset. Nevertheless, I would like to let you know that it currently returns 2 different hashes for the same query if the query uses temporary tables. According to Bart Duncan, this should be fixed in a future Service Pack / Cumulative Update.

September 9, 2008 4:46 PM

Uri Dimant said:

Michael, shalom

>Which queries would be candidates for tuning? First, those with >highest total IO.

I think the execution count matters as well. If you have two queries with high total/avg IO but differing execution counts, you first need to take a look at those queries with the higher IO,

just my two cents

September 10, 2008 12:07 AM

Michael Zilberstein said:

Hi Leo,

Thanks for the link. As a matter of fact, I'd already found Bart Duncan's blog post - after posting mine. As for the temporary table limitation - we should remember that what is a limitation/bug in SQL 2008 is the "by design" situation in SQL 2005 :-)). In SQL 2005 I couldn't entirely automate the process - after noticing some unparameterized query, I had to play with "LIKE" on the query text in order to get its cumulative cost; now that will only be necessary for queries that involve temporary tables. All the rest can be automated, which is great. If Microsoft fixes the temp tables issue in a cumulative update, the system can become 100% automated.

September 10, 2008 4:10 AM

Michael Zilberstein said:

Uri, shalom

If you re-read "First" and "Second" - I think we're talking about the same thing here. Of course execution count matters - that's the reason my query returns it first thing :-)). When ordering by average IO, I usually won't look at queries with a small execution_count. When ordering by total IO, execution_count is usually relatively high - in busy systems you have to be especially imaginative to write a query that in a single run eats more IO than queries that run tens of thousands of times. That's not usually the case. So when ordering by total IO, you can be pretty sure that the top queries belong on the "candidates for tuning" list.

September 10, 2008 4:18 AM

