THE SQL Server Blog Spot on the Web


Buck Woody

Carpe Datum!

My Performance Tuning Methodology

I was speaking with some folks yesterday about tuning a system, and they asked me about my methodology. As time goes on, I find myself doing things differently based on the situation, but there is a general approach I use that I'll explain here. This isn't a hard-and-fast rule; it's just a guideline I use to narrow in on the problem.

I start with three general areas:
    1. Component Pressure
    2. Waits and Queues
    3. Top Processes

What I do first is set up Performance Monitor (also known as System Monitor) and evaluate the general counters around CPU, memory, I/O, and network. I watch which ones show the most activity (pressure), and then fine-tune the collection to learn more about which part of that component is showing the issue.
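Many of those same component counters can also be read from inside SQL Server through the sys.dm_os_performance_counters DMV. This is a minimal sketch; the specific counters below are reasonable starting points I've chosen for illustration, not a definitive list from the methodology above.

```sql
-- A sketch of reading a few general component counters from inside SQL Server.
-- The counter names here are illustrative starting points, not a fixed list.
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Batch Requests/sec',
                       N'Buffer cache hit ratio')
ORDER BY counter_name;
```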

Next I examine the wait states and the queues (using various DMVs and so on) that tell me what the system is waiting to do.
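As one example of that kind of query, sys.dm_os_wait_stats shows cumulative waits since the server started. A sketch, assuming you filter out the benign "idle" wait types you don't care about (the exclusion list below is illustrative, not exhaustive):

```sql
-- Top waits by total wait time since server start.
-- The NOT IN list is a small illustrative sample of benign wait types.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```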

Finally, I watch the top five or ten processes or queries that are running on the system, again using Perfmon and DMVs.
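One common way to get a "top queries" list from the DMVs is sys.dm_exec_query_stats joined to the query text. This is a sketch of that pattern, ordered by CPU time; you could just as easily order by reads or elapsed time depending on which component showed pressure:

```sql
-- Top cached queries by total worker (CPU) time, with the statement text.
SELECT TOP (10)
       qs.total_worker_time,
       qs.execution_count,
       qs.total_elapsed_time,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Note that this only covers plans still in cache, so it can miss queries whose plans have been evicted or recompiled.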

When I'm done with the collection, which might take anywhere from a few minutes to a few days, I create a list of each of those items. Some items will appear on more than one list - for example, I/O causing the system to wait, which in turn is caused by a long-running INSERT operation - and I deal with those first. Then I go after the things that appeared on only one list (like a CPU that ran a little high for too long, with no single query waiting on it). At some point you're just re-arranging the deck chairs, so you tune until the system is behaving, and then examine the architecture to see if there are any longer-term changes that need to be made.

Of course, none of this is optimal - it's far better to establish a baseline of information on your system, and then when you experience an issue you can simply compare against that. But most shops I visit don't have a baseline, so this is the next best thing. Comments welcome.

Published Thursday, July 01, 2010 8:50 AM by BuckWoody


Comments

 

Duane Haas said:

Thanks for the great information. Would you mind sharing the DMVs you use, along with the thresholds you watch for the counters to exceed when determining them to be a concern?

July 2, 2010 9:24 AM
 

Roger Reid said:

Especially on systems running older client apps, I am finding this is too "micro" a level.  Hardware, OS, and engine improvements create a situation where app-side code that broke complex queries down into chunks suitable for single-CPU systems with slow I/O and LANs can strangle a server.

This is hard to convince apps people and management of.  But these formerly necessary breakups of 7-way joins can pretty well lock up a system with 8 cores and multipath I/O.  The formerly "slow" stuff happens so fast that the engine spends all its time moving far too many threads on and off the run and wait queues.

Each query is ultra efficient.  But if you are looping on results, and at various conditions, going back for details, you can kill the server.

It was absolutely necessary once.  It's deadly now; it's hard to convince app developers that a faster machine makes their code slower - until you demonstrate the difference in "unlooping" it and making it back into a 7-way join (with subqueries, if you like).

The current engines are stunning on modern hardware, but being a "what, not how" way of programming, bigger faster machines don't automatically make the app faster.
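[Editor's note: the looping pattern described in the comment above can be sketched as follows. The table and column names are hypothetical, chosen only to illustrate the shape of the anti-pattern and its set-based rewrite.]

```sql
-- Hypothetical "chunked" client pattern: loop over parent rows, then fetch
-- details one key at a time - many small, individually fast round trips.
DECLARE @OrderID int;
DECLARE order_cursor CURSOR FOR SELECT OrderID FROM dbo.Orders;
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT ProductID, Quantity FROM dbo.OrderDetails WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END;
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- The "unlooped" set-based equivalent: one join, one round trip,
-- and the engine is free to pick a good plan for the whole operation.
SELECT o.OrderID, d.ProductID, d.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID;
```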

July 6, 2010 12:50 PM
