
Linchi Shea

Checking out SQL Server via empirical data points

Quick, dirty, but effective – finding top CPU consumers

Despite all the DMVs, the sysprocesses table still offers a significant amount of information for identifying top CPU or I/O consumers. A great advantage is that it is SQL Server version neutral (so far anyway) and convenient to get to.
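As a quick reminder of which columns the approach below leans on, the following runs unchanged from SQL Server 2000 onward (the column list is just the subset this post cares about):

select spid, ecid, login_time, last_batch, cpu, physical_io
  from master..sysprocesses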

 

So inevitably, you’ll be called on to find the top CPU consumers on a SQL Server instance. You have a number of ways to accomplish that goal. For instance, you may have one of the third-party GUI tools at your disposal. Some of them are useful, but you may find your satisfaction level varies for one reason or another. A straightforward request of ‘show me the current top CPU consumers’ deserves a straightforward answer, and for some of these GUI tools, that seems to be a bit too much to ask.

 

Alternatively, you can query the DMVs directly and get really fancy. The problem is that these fancy queries can be rather complex, and sometimes too complex to get straight to the point. I don’t know about you, but from time to time I find myself having to check Books Online to verify a column in a DMV or confirm the relationship between two DMVs. You can overcome that problem with Who is Active, an excellent stored procedure created by Adam Machanic. The stored procedure is especially useful because it allows you to see the delta between two snapshots, which is what really matters in most cases. Cumulative CPU values over the long haul are close to useless in troubleshooting a CPU load problem.
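For instance, assuming the procedure is installed under its usual name, sp_WhoIsActive, a single call along these lines reports the deltas over a short window rather than cumulative values:

-- @delta_interval tells Who is Active to snapshot twice, N seconds apart,
-- and report the differences; the install name and location are assumptions,
-- so adjust to wherever the procedure lives on your instance
exec sp_WhoIsActive @delta_interval = 10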

 

But unfortunately, you almost certainly will run into situations where you can’t just create a stored procedure on a production SQL Server instance on the fly. Well, you still have your script collection with you, right? Nobody will stop you from running your favorite DMV scripts for troubleshooting purposes, especially when you are helping the server’s owners bust their top CPU consumers.

 

Even with a collection of fancy DMV scripts, I often find it effective to use the following simple script to quickly locate the top CPU consumers (or top I/O consumers).

 

drop table #tmp
go

select * into #tmp from sysprocesses
go

-- wait for a few seconds before running the next statement

select top 20
       t1.cpu - t2.cpu as cpu_diff,
       t1.physical_io - t2.physical_io as io_diff,
       t1.*
  from sysprocesses t1, #tmp t2
 where t1.spid = t2.spid
   and t1.login_time = t2.login_time
   and t1.ecid = t2.ecid
order by cpu_diff desc
go

 

Substituting order by io_diff desc for order by cpu_diff desc in the script gives you the top I/O consumers.
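And if you’d rather not count out the pause yourself, the snapshot, the wait, and the diff fit in a single batch with WAITFOR DELAY. A sketch, with the conditional drop and the 10-second interval as embellishments of mine, not part of the original script:

if object_id('tempdb..#tmp') is not null
   drop table #tmp
go
select * into #tmp from sysprocesses

-- let the workload accumulate for the snapshot interval
waitfor delay '00:00:10'

select top 20
       t1.cpu - t2.cpu as cpu_diff,
       t1.physical_io - t2.physical_io as io_diff,
       t1.*
  from sysprocesses t1, #tmp t2
 where t1.spid = t2.spid
   and t1.login_time = t2.login_time
   and t1.ecid = t2.ecid
order by cpu_diff desc
go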

 

This script is quick and dirty because you can type it in a few seconds and really don’t need to memorize anything (except that you need to know sysprocesses a bit, which I assume every SQL Server guy worth his salt already does).

 

This script is effective because it has been battle-tested in many places and over many years.

 

Of course, this doesn’t give you all the bells and whistles that come with the fancier scripts or Adam’s stored procedure. I don’t mean to suggest that they only give you bells and whistles beyond what this script gives you, but you get the drift. Yes, you can start to add this and that to this quick and dirty script (such as a CROSS APPLY to sys.dm_exec_sql_text to get the SQL statement). But that kind of defeats the purpose of being quick and dirty.
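For what it’s worth, here is a sketch of that particular addition (mine, not part of the original script; it needs SQL Server 2005 or later, since sys.dm_exec_sql_text does not exist on SQL Server 2000):

select top 20
       t1.cpu - t2.cpu as cpu_diff,
       t1.physical_io - t2.physical_io as io_diff,
       st.text as sql_text,
       t1.*
  from sysprocesses t1
  join #tmp t2
    on t1.spid = t2.spid
   and t1.login_time = t2.login_time
   and t1.ecid = t2.ecid
 cross apply sys.dm_exec_sql_text(t1.sql_handle) st
 where t1.sql_handle <> 0x00   -- skip idle spids, which carry an empty handle
order by cpu_diff desc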

 

Of course, this script would not give you laser precision. It is not intended to.

 

But when you find yourself without a fancy pre-installed stored procedure and without your script collection at hand, this quick and dirty little script can come in real handy.

 

Published Friday, August 21, 2009 7:07 PM by Linchi Shea


Comments

 

AaronBertrand said:

As simple as you try to make the script, it will still be challenging to memorize. You could always save it as a template (or as a snippet, if you use SSMS Tools Pack, Red Gate add-ins, etc.).

August 21, 2009 6:28 PM
 

Adam Machanic said:

Hi Linchi,

I'm not sure whether login_time is a good enough ID for the request; I think last_batch might be a bit safer. Otherwise you can get situations where a user or app is submitting multiple batches. I guess it depends on the granularity you'd like for the results.

Thanks for calling out Who is Active!

August 21, 2009 9:05 PM
 

Linchi Shea said:

Hi Aaron;

Old habits die hard. I'm still using the saved script file method. Thanks for pointing out the templates.

Adam;

Yeah, it's a granularity issue. I found that most of the time, using the last_batch column results in too many misses on the batch boundaries when the primary interest is in finding where the main load comes from. Also, ecid is necessary in case of query parallelism.

August 24, 2009 12:38 PM
 

John Paul Cook said:

I have a minor suggestion - make the DROP TABLE conditional.

IF EXISTS (SELECT * FROM tempdb.sys.objects WHERE object_id = OBJECT_ID(N'tempdb..#tmp') AND type IN (N'U'))
  DROP TABLE #tmp
GO

August 24, 2009 11:08 PM

