THE SQL Server Blog Spot on the Web


Adam Machanic

Adam Machanic, Boston-based SQL Server developer, shares his experiences with programming, monitoring, and performance tuning SQL Server. And the occasional battle with the query optimizer.

Who is Active v11.00 (A Month of Activity Monitoring, Part 27 of 30)

This post is part 27 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.

Thanks to your feedback over the past month, I’ve managed to get a lot of work done on the next version of Who is Active.

So much work, in fact, that I’ve finished a new major release.

Click here to download Who is Active v11.00

There are numerous enhancements in this release. In no particular order:

  • CPU deltas now use thread-based metrics for more accurate data (use both @delta_interval and @get_task_info = 2)
  • command_type information added to [additional_info] column for active requests
  • Modified elapsed time logic to retrieve more accurate timing information for active system SPIDs
  • SQL Agent job info (job name and step name) is now included in the additional_info column (use @get_additional_info = 1)
  • If there is a lock wait, information about the blocked object (name, schema name, and ID) is now included in the additional_info column (use both @get_additional_info = 1 and @get_task_info = 2)
  • Service Broker activated tasks are now shown by default, without using @show_system_spids mode. The program_name column contains the queue_id and database_id associated with the activation procedure
  • Various numeric columns, including reads, writes, cpu, etc., have been made nullable. These will occasionally return NULL on extremely active systems, where the DMVs return data more slowly than queries start and complete
  • Query plans that cannot be rendered due to XML type limitations are now returned in an encapsulated format, with instructions, rather than sending back an error
  • Added wait information for OLEDB/linked server waits
  • Wait collection will now "downgrade" to get_task_info = 1 style data if no other information is available in get_task_info = 2 mode
  • Added header information to online help
  • Added a login_time column to the output
  • The duration for sleeping sessions is now the sleep time, rather than the time since login
  • Fixed various bugs
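Putting a few of these together, a call that exercises the new delta and additional-info features might look like this (a sketch; run against any instance where the proc is installed):

```sql
-- Take two snapshots 5 seconds apart to populate the delta columns, using
-- the new thread-based CPU metrics, and include the additional_info column
-- (SQL Agent job names, blocked-object details, etc.)
EXEC dbo.sp_WhoIsActive
    @delta_interval = 5,
    @get_task_info = 2,
    @get_additional_info = 1;
```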

I’ve written about several of these things over the past month, and there are a couple of posts left for the remainder of the month, so I won't elaborate here.

Thank you, thank you, and thank you again to everyone who has taken the time to give me feedback and/or report bugs! Who is Active would not be what it is without you!

I would especially like to thank Paul White. Paul has acted as my unofficial "lead QA engineer" for the past few versions. He's found some very interesting bugs in my code, in addition to pointing out some quirks in the DMVs that I wasn’t handling properly. Thanks, Paul, for the great job you’ve done!

One other thing I would like to point out is an addition to the header: a donation link. Several people recently have asked me how to donate, and I didn’t have a good answer. Now, I do.

Use this link to support the Who is Active project!

While I feel strange asking for money, the truth is that I’ve invested well over a thousand hours in the stored procedure, documentation, and support (I respond to EVERY e-mail I receive from Who is Active users). If you’ve been able to improve your day-to-day performance thanks to Who is Active, it would be great if you—or even better, your employer—could help support my work on the project. I’ve listed a recommended donation of $40 per installed instance, but please donate however much you feel Who is Active is worth to you and/or your organization.

Thanks again, and enjoy!

Published Wednesday, April 27, 2011 11:00 AM by Adam Machanic




Adam Machanic said:

UPDATE, April 28 2011: Who is Active v9.57 is outdated. Please use v11.00 instead.

April 28, 2011 9:02 PM


Ken said:

Hi Adam,

I've just begun looking at V11 and find one thing puzzling: deltas sometimes return higher values than cumulatives. Is that expected?

I have just tried sp_WhoIsActive V11 on a busy system. Using @delta_interval, most runs return *some* rows with delta values (e.g. CPU_delta) higher than cumulative values (e.g. CPU), which seems wrong. I then checked whether V10 behaves in the same way, and it doesn't. I've made perhaps a dozen runs of both versions, and V10 never returned deltas higher than cumulatives.

I used these options:

V11 @get_outer_command = 1,@find_block_leaders = 1,@get_additional_info = 1,@delta_interval = 1,@get_task_info = 2

V10 @get_outer_command = 1,@find_block_leaders = 1,@get_additional_info = 1,@delta_interval = 1

I couldn't use @get_task_info = 2 on V10 as it fails with: SELECT failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'.

May 9, 2011 10:13 AM

Adam Machanic said:

May 9, 2011 10:54 AM

Andrew Calvett said:

Whilst reviewing the CPU statistics of a system that I knew was CPU bound, I found the numbers were not

May 30, 2011 7:07 AM

Ricardocr said:

Thanks for this nice tool

But if possible could you please add a filter by session status? I do not want to see the 2,000 sessions this is returning right now. I tried modifying it to show only the sessions in a runnable state, but it shows fewer than the number of sessions really running (according to my sp_who3). Also, it would be really nice if we could sort by temp_db_allocations; I tried that column in the sort order but it does not work...

Thanks again

June 15, 2011 5:43 PM

Adam Machanic said:

Ricardocr: I don't really see the point of filtering to see only queries reported as runnable, nor would I even be able to tell you how to properly implement such a filter. First of all, the information in the requests DMV pertains only to the root task. If you have a parallel query you may have a number of tasks, some runnable, some suspended, others running. Second, a query that is "really" running may be in any of three states, so such a filter would drop a number of queries at any given time, and since those statuses change every 4ms it would seem nondeterministic to anyone casually looking at the data. In short, I don't see this as a feasible idea.

Sorting by the [tempdb_allocations] column works fine. Just remove the extraneous underscore you've added.
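That is, the working form of the call is:

```sql
-- Note the column name: tempdb_allocations, with no underscore after "temp"
EXEC dbo.sp_WhoIsActive
    @sort_order = '[tempdb_allocations] DESC';
```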

June 16, 2011 10:08 AM

Richard said:

Hello, can you help me? I am trying your script to show data for delta times, but I always get ZERO values, even if the delta interval is 100 seconds. I am trying it on a heavily used system, so there is a lot of activity at all times...

My command:

exec dbo.sp_WhoIsActive @delta_interval = 100, @sort_order = '[cpu_delta] DESC'

Thank you

September 28, 2011 4:44 AM

Adam Machanic said:

Hi Richard,

A couple of comments:

A) If the activity on the system is "OLTP style" (lots of shorter queries) delta mode isn't applicable -- the same queries need to be running on both collections, and if you're waiting 100 seconds between collections that's certainly not going to be the case. (And in a well-tuned OLTP system, even 1 second is usually far too long.)

B) If the activity is more long-running queries then delta mode should work quite well, but if the queries are parallel there are some additional issues with certain columns, especially CPU time. In that case you might get better results by also enabling @get_task_info = 2.
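In other words, for long-running queries something like this (with the interval sized to the query durations; sort column as in v11) should produce meaningful deltas:

```sql
EXEC dbo.sp_WhoIsActive
    @delta_interval = 10,            -- both snapshots must catch the same queries
    @get_task_info = 2,              -- improves CPU deltas for parallel queries
    @sort_order = '[CPU_delta] DESC';
```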

Please share some more information on the situation. I'm happy to tweak, where possible, to make the proc work better in a wider variety of scenarios.

October 5, 2011 10:18 AM

Richard said:

Yes, the system is OLTP, hopefully well tuned... so that's why delta doesn't work too well.

Is there any possibility to get stats for all users from one certain time ?

October 24, 2011 1:20 AM

Milton said:

Hi, thanks for the code, but I'm getting some errors on these lines:

Msg 102, Level 15, State 1, Procedure sp_WhoIsActive, Line 533

Incorrect syntax near '.'.

Msg 156, Level 15, State 1, Procedure sp_WhoIsActive, Line 3697

Incorrect syntax near the keyword 'CASE'.

Msg 156, Level 15, State 1, Procedure sp_WhoIsActive, Line 3735

Incorrect syntax near the keyword 'CASE'.

The messages are in Spanish; they mean the errors are on lines 533, 3697, and 3735.

November 24, 2011 10:29 AM

Adam Machanic said:

Milton: Your database is set to the wrong compatibility mode (8.0). I covered this error in one of the earlier posts :-)
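For anyone hitting the same errors, the check and (on SQL Server 2008 and later) the fix look roughly like this; the database name is a placeholder:

```sql
-- sp_WhoIsActive needs at least 90 (SQL Server 2005) compatibility
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

-- If it reports 80, raise it, e.g.:
-- ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 90;
```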


November 24, 2011 10:11 PM

Praveen said:

Hi Adam,

Thank you so much for this wonderful script. I had a chance to test it on one of my lab systems where we do most of our load testing. Under peak load, with CPU hitting 90% and above, this stored procedure takes up to 40 minutes to complete. Have you had a chance to test it under peak load?



December 23, 2011 10:19 AM

Adam Machanic said:

Hi Praveen,

Yes, I've run it on a number of systems that were completely maxed out, and while I have seen run times of up to a minute in really serious cases, I've never seen anything close to 40 minutes except when trying to collect lock data. (@get_locks = 1) ... Were you by any chance using that option? The DMV that data comes from is ridiculously slow; nothing I can do about it.

If not, I don't know what to tell you -- except that you can try the @get_task_info = 0 option to put the proc into its lightest-weight mode when you're running on a system where it's not especially responsive. Even on the systems where it's taken tens of seconds, enabling that option usually gets me my results in one or two seconds.
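The lightweight call is simply:

```sql
-- Skips task and wait collection entirely, for minimal overhead
EXEC dbo.sp_WhoIsActive
    @get_task_info = 0;
```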

December 23, 2011 10:13 PM

Luke Campbell said:

Adam, thanks for an awesome script!  Is there an easy way to add the object id of an executing stored procedure to the output from the sys.dm_exec_sql_text DMV?

January 10, 2012 9:55 AM

Adam Machanic said:

Hi Luke

I'm not sure it would be especially simple given how the code is structured. Can you give me some background on what scenario you need it for? I'm happy to consider it but it's not something that anyone else has requested. I'm wondering if there might be a different/better way to solve your problem



January 11, 2012 10:40 AM

Luke Campbell said:

Adam, I'm using sp_WhoIsActive to find root blockers, but I would like to see the object name returned if the executing query is within a stored procedure or function. Currently I'm inserting the results into a destination table, joining to sysprocesses on the session_id, and then cross applying to sys.dm_exec_sql_text using the sql_handle returned from sysprocesses to get the objectid. I want to return only the SQL text that is currently executing, but also to know which object it's contained in. Thanks for your response!

January 11, 2012 11:47 AM

Adam Machanic said:

Hi Luke,

The easiest way to handle this in the short term is to use @get_full_inner_text = 1. Then you'll see the entire stored procedure -- including the name.
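So in this case the call would be something like:

```sql
-- sql_text now shows the full batch or module text,
-- including the procedure/function name
EXEC dbo.sp_WhoIsActive
    @get_full_inner_text = 1;
```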


January 12, 2012 9:58 AM

Praveen said:

Thank you, Adam. I will try with the (@get_locks = 1) option and let you know.

January 23, 2012 3:34 PM

Adam Machanic said:

Praveen: You want to try it WITHOUT that option! @get_locks = 0 -- if you're having performance problems.

January 23, 2012 9:27 PM

Brandon Hedge said:

Thank you Adam!  This script is great and has saved us many times.  Just yesterday someone on our team had set off a session-level trace with Quest's Spotlight tool.  I was able to see the TRACE WRITE blocks stacking up (over 200) and killed the offending SPID.  Most users never noticed the slowdown.  Thanks again!

July 20, 2012 10:35 AM

Suresh said:

Will give this a try tmrw on oursqk 2k8 fast track dw server where users are complaining of perfect issues

October 18, 2012 11:22 PM

Suresh said:

Auto correct..) performance issues on our SQL 2k8 dw server

October 18, 2012 11:24 PM

Suresh said:

I am still working on this..

But I execute this query as well, which gives the query plan:

SELECT mg.granted_memory_kb, mg.session_id, t.text, qp.query_plan
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(mg.plan_handle) AS qp;

And this query:

SELECT s.host_name, s.login_name, mg.*
FROM sys.dm_exec_query_memory_grants AS mg
INNER JOIN sys.dm_exec_sessions AS s
    ON mg.session_id = s.session_id
ORDER BY query_cost DESC;

In my case this is a DW server, and I see more queries waiting for memory grants as the day progresses.

I will be using your script as well to get information such as duration, tempdb_allocations, reads, writes, etc.

Users claim that the server was running fine until a few days back, and now it is taking too long to get the reports.

Trying to see what I can find and how I can help. Thanks for your tool.

October 19, 2012 10:51 AM

Adam Machanic said:

Hi Suresh,

Sounds like someone changed something. (Even if they deny it :-).) Do you have a change control process of some sort?

Either that, or stats have gotten out of date. I might start by simply updating stats on everything that seems impacted.
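A minimal sketch of that stats update (the table name is a placeholder):

```sql
-- One impacted table at a time...
UPDATE STATISTICS dbo.YourFactTable WITH FULLSCAN;

-- ...or sweep the whole database
EXEC sp_updatestats;
```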


October 19, 2012 2:04 PM

Suresh said:

We have kind of gotten it from worst to better now. The server has 8 physical CPUs with a 6x hyper-thread ratio, making 48 logical CPUs, and 256 GB of physical memory, with SQL Server set to about 240 GB. We were playing with the MAXDOP option a while back and set it to 24; when we made that change it seemed OK at first, but I think it got progressively worse. We have now changed it back to 8, as recommended in one of the articles, and this seems to be helping a lot. We will be revisiting this and may experiment more with this option.
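For reference, the server-wide setting described here is changed with sp_configure (the right value depends on the workload; this only shows the mechanics):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```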

Thanks for your tool; I have it in my library and have been using it more often to get immediate information on queries, such as reads/writes, etc.

Thanks again.

October 25, 2012 12:28 PM

Suresh said:

And to your point on updating stats: they are up to date.

And this is a pretty heavy server, with 12 TB of data spread across 48 LUNs, and the major big tables partitioned by date. On all the partitioned tables (and non-partitioned ones as well) I have a weekly job that reindexes the last one or two partitions (partitions are created weekly). I'd say these tables are pretty up to date on statistics, with fragmentation between 0 and 10%; you might find the odd one with bad fragmentation or statistics, but those are usually pretty small tables, well under 1,000 pages. We are making sure this server runs OK this holiday season and are getting rid of data older than 7 years, which should bring the size down drastically and make it manageable.

October 25, 2012 12:39 PM

Brian Jensen said:

Hi, thank you for this very nice procedure :)

I think there may be a slight problem with datetimes (else I misconfigured something).

I was running the procedure as a SQL job, and it started failing with:

"Warning: Null value is eliminated by an aggregate or other SET operation. [SQLSTATE 01003] (Message 8153)  The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart. [SQLSTATE 22003] (Error 535).  The step failed."

during the evening (24-hour clock). As the time passed 0:00 it ran flawlessly again. I am sad to say that I do not have the exact time when it started failing, but I believe the last entry was around 21:30 (before I truncated the logging table; yes, I know... stupid).

Is this something you have seen before? Did I configure something wrong?

February 21, 2013 6:19 PM

Adam Machanic said:

Hi Brian,

Interesting. Do you have any really long-running sessions? I wonder if one of the internal checks was near the threshold or something...
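For context on Error 535 in general (not necessarily the root cause here): DATEDIFF returns a 32-bit integer, so a millisecond-granularity difference overflows once the two times are more than roughly 24.8 days apart:

```sql
-- 2^31 - 1 milliseconds is about 24.8 days; anything longer overflows.
-- 31 days apart, so this raises Msg 535 rather than returning a value:
SELECT DATEDIFF(MILLISECOND, '2011-01-01', '2011-02-01');
```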

February 22, 2013 9:26 AM

Tony Trus said:


Normally this sp runs without incident, but I had it polling to a table every 30 seconds over the course of a day to gather metrics, and it ended up hitting the error below. The version, I believe, is 11.11.


Msg 6841, Level 16, State 1, Procedure sp_WhoIsActive, Line 4194

FOR XML could not serialize the data for node 'Lock/@resource_description' because it contains a character (0x0001) which is not allowed in XML. To retrieve this data using FOR XML, convert it to binary, varbinary or image data type and use the BINARY BASE64 directive.

September 10, 2013 8:12 AM

Ram said:

I too received the below error when I was running this stored procedure as a job. As mentioned by Brian Jensen, there may be a slight problem with datetimes. Nothing is running except the SQL Server Spotlight monitoring. The server is just newly installed. Not sure what could cause this.

"Warning: Null value is eliminated by an aggregate or other SET operation. [SQLSTATE 01003] (Message 8153)  The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart. [SQLSTATE 22003] (Error 535).  The step failed."

December 16, 2014 2:10 AM

Adam Machanic said:


Please share some more information:

- What locale settings did you use?

- What exact version of SQL Server are you running?



December 16, 2014 9:24 AM

Ram said:

Hi Adam,

I'm using SQL Server 2012.

Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)   Oct 19 2012 13:38:57   Copyright (c) Microsoft Corporation  Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

December 17, 2014 12:25 AM

Adam Machanic said:


More important is the locale setting -- i.e. what's the collation on the instance? Does Windows use a language other than US English?

Also: Does this ALWAYS happen?

December 17, 2014 10:05 AM

Ram said:

Hi Adam,

Here are the details.

Collation on the instance: SQL_Latin1_General_CP1_CI_AS

Does windows use a language other than US?

Windows server is configured to use English (United States) language and the format looks as below

Short Date: M/d/YYYY

Long Date: dddd,MMMM dd,yyyy

Short Time: h:mm:tt

Long Time: h:mm:ss tt

Does this always happen?

Not always; it's intermittent. The last time it happened was between 2014-12-16 10:30 PM and 2014-12-16 11:45 PM.



December 17, 2014 10:53 PM

Adam Machanic said:

Thanks again, Ram. I'll investigate. I'm pretty sure I know what the problem is.

December 18, 2014 10:03 AM

Ram said:


Did you find any fix on this. Thanks.

December 22, 2014 1:40 AM



About Adam Machanic

Adam Machanic is a Boston-based SQL Server developer, writer, and speaker. He focuses on large-scale data warehouse performance and development, and is author of the award-winning SQL Server monitoring stored procedure, sp_WhoIsActive. Adam has written for numerous web sites and magazines, including SQLblog, Simple Talk, Search SQL Server, SQL Server Professional, CoDe, and VSJ. He has also contributed to several books on SQL Server, including "SQL Server 2008 Internals" (Microsoft Press, 2009) and "Expert SQL Server 2005 Development" (Apress, 2007). Adam regularly speaks at conferences and training events on a variety of SQL Server topics. He is a Microsoft Most Valuable Professional (MVP) for SQL Server, a Microsoft Certified IT Professional (MCITP), and an alumnus of the INETA North American Speakers Bureau.
