
Elisabeth Redei

  • Lies, Damned Lies and Statistics – Part III (SQL Server 2008)

    In previous posts (http://sqlblog.com/blogs/elisabeth_redei/archive/2009/08/10/lies-damned-lies-and-statistics-part-i.aspx and http://sqlblog.com/blogs/elisabeth_redei/archive/2009/03/01/lies-damned-lies-and-statistics-part-ii.aspx) I have talked about the performance problems that may surface because of a low statistics sample rate on large tables.

    One of the things I wrote about was issues with naturally skewed data in an indexed column. In this context, skewed data means that some values are represented more than others. One example is a "Country" column in an Address table, where an international company is likely to have more customers or employees in the U.K. than in, let's say, Finland. A "Lastname" column in a Customers table would typically have skewed data as well – there are a lot more Smiths around than Rédeis, for example. Your SalesOrder table is bound to have more entries for some customers than for others.

    The solutions I listed – solutions that have been listed by many before me – were never really appealing to me; consequently, I was never good at persuading anyone to use them, and that has always annoyed me.

    SQL Server 2008 offers a great solution to this problem - filtered statistics. As you probably know, SQL Server 2008 offers you the ability to create filtered indexes, i.e. an index created on a horizontal partition (a subset of the rows) of the table. But did you know you can create filtered statistics on an unfiltered index?

    This solution is much more appealing simply because you implement it at the index level rather than at the individual query level. In addition, your data will change, and this solution can easily adapt to those changes. Another attractive perk is that it is quite difficult to mess things up with this approach – unless you make a serious effort to!

    Jason Massie has touched on the subject on SQLServerpedia, http://sqlserverpedia.com/blog/sql-server-bloggers/filtered-stats-to-counter-data-skew-issues/, but I wanted to see how it worked if you filtered on ranges of data covering all values in the index.

    Again, I have a table, test1, with values in the c2 column that range from 1 to 500. I have created a non-clustered index, idx1, on column c2.

    In each bucket, I have about 2 400 rows, but c2 = 45 returns 0 rows:

    SELECT c2, COUNT(*) AS NumRows
    FROM test1
    GROUP BY c2
    ORDER BY c2

    [Figure: NumRows per c2 value – about 2 400 rows in each bucket]

    DBCC SHOW_STATISTICS ('test1','idx1') WITH HISTOGRAM reveals that the value 45 has fallen through the cracks, even after an UPDATE STATISTICS test1 WITH FULLSCAN:

    DBCC SHOW_STATISTICS ('test1','idx1') WITH HISTOGRAM:

    [Figure: histogram output – no step accounts for the value 45]

    DBCC SHOW_STATISTICS ('test1','idx1') WITH STAT_HEADER shows that all 1 190 402 rows are represented by 200 “buckets” or ranges (the Steps column):

    [Figure: STAT_HEADER output – 200 steps representing 1 190 402 rows]

    Consequently, the following query gives me a table scan, because the statistics the optimizer has at hand imply that 2 400 rows will be returned:

    SELECT c1, c2 FROM test1 WHERE c2 = 45

    [Figure: execution plan showing a table scan]


    So the problem is not the sample rate; SQL Server looked at all values in the index when it was building the statistics – it’s just that the histogram in this case is too coarse for the data it represents.

    How can I solve this problem?

    The trick is to increase the number of ranges that represent the data, so rather than having 200 ranges represent 1 200 000 rows, you can have 600 or 1 000.

    To start with, I need to know my domain of values:

    SELECT MIN(c2) AS minVal, AVG(c2) AS avgVal, MAX(c2) AS maxVal FROM test1

    [Figure: minVal, avgVal and maxVal for c2]

    I have only some 1 200 000 rows in this table, so I am going to go ahead and try with 3 ranges (ideally you should have an idea of how many rows are in each range):

    CREATE STATISTICS testStatLow
        ON test1 (c2)
    WHERE c2 < 200
    GO

    CREATE STATISTICS testStatMed
        ON test1 (c2)
    WHERE c2 >= 200 AND c2 < 400 -- >= so the boundary value 200 is not left out
    GO

    CREATE STATISTICS testStatHi
        ON test1 (c2)
    WHERE c2 >= 400              -- >= so the boundary value 400 is not left out
    GO
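
    A quick way to double-check that the filter predicates cover the whole domain of values (this query is my addition, not part of the original post; sys.stats exposes has_filter and filter_definition in SQL Server 2008):

    -- List the statistics on test1 together with their filter predicates (SQL Server 2008)
    SELECT s.name, s.has_filter, s.filter_definition
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID('test1')
    ORDER BY s.name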

    I am then going to update the statistics with some sample rate to see if – in addition to getting better query plans – I can save time on my maintenance job that updates the statistics for all tables:

    UPDATE STATISTICS test1 WITH SAMPLE 20 PERCENT

    And then look at the statistics:


    DBCC SHOW_STATISTICS ('test1','testStatLow') WITH STAT_HEADER

    [Figure: STAT_HEADER output for testStatLow]

    Rows Sampled vs. Rows shows that a little more than half of the rows were scanned (remember that SQL Server can increase the sample rate if it decides that is necessary to get a good representation of the data). The data in the column has been divided into another 194 buckets/ranges, drastically reducing the risk of problems with skewed data.

    Let’s run the query again to see if my new filtered statistics make a difference:

    SELECT c1, c2 FROM test1 WHERE c2 = 45

    [Figure: execution plan now showing an index seek]

    There we go – the optimizer chose to do an index seek to find my non-existent row. Much better!

    What about maintenance for the statistics?

    My idea was that perhaps I can get away with a lower sample rate now that the index has been logically divided into several histograms. And as you just saw, I can for this particular scenario. However, after an update, I ran:

    UPDATE STATISTICS test1 WITH SAMPLE 20 PERCENT

    … which took about 6 seconds whereas

    UPDATE STATISTICS test1 WITH FULLSCAN

    … took about 1 second.

    So in my somewhat contrived scenario, the gain is in query performance, but I am not saving any of the precious time I have available for index maintenance.

    Hm… I can sense an upcoming blog on the subject of the Long and Winding Shortcut!

  • Lies, Damned Lies and Statistics – Part II


    In a previous post I listed situations where your statistics might need some special attention (http://sqlblog.com/blogs/elisabeth_redei/archive/2009/03/01/lies-damned-lies-and-statistics-part-i.aspx).

    As I mentioned in that post, the symptoms are typically related to more or less random variations in duration that cannot be explained by resource contention or blocking.

    A quick recap of the various problems you can encounter:

    1. Statistics are out-of-date

    2. The default sample rate (used by the Autoupdate Statistics feature) may not be enough because it gets relatively lower as the tables grow larger (to avoid issues with Autoupdate statistics taking too long).

    3. Data in the column is naturally "skewed" (e.g. LastName; Smith vs. Redei for instance)
    4. The Autoupdate Statistics process interrupts your user activity

    In the last post I described how you can diagnose and alleviate the first two problems; in today's post I will talk about the third – skewed data and/or skewed statistics.

    Skewed Data

    First I will show you a repro that demonstrates the problem.

    I created and filled a table with 600 000 rows, in buckets of 1 000 rows each, and with a non-clustered index on the second column:

     

    CREATE TABLE test1
    (
          c1 TINYINT
        , c2 SMALLINT
    )
    GO

    CREATE INDEX idx1 ON test1 (c2)
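
    The script that fills the table is not included in the post; as a rough sketch (the exact distribution is my assumption, based on the row counts described above and the missing value noted below), something like this produces a comparable data set:

    -- Sketch (not from the original post): insert 1 000 rows for every c2 value
    -- from 1 to 600, deliberately skipping the value 500.
    INSERT INTO test1 (c1, c2)
    SELECT v2.number % 256, v1.number
    FROM master..spt_values AS v1
    CROSS JOIN master..spt_values AS v2
    WHERE v1.type = 'P' AND v1.number BETWEEN 1 AND 600 AND v1.number <> 500
      AND v2.type = 'P' AND v2.number BETWEEN 1 AND 1000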

    However, I made sure that the value 500 does not exist at all:


    Fig 1. SELECT c2, COUNT(*) AS cnt FROM test1 GROUP BY c2 – note the missing 500.

    If we look at the histogram of the statistics, because of how 500 falls between the steps, it looks like there are 1000 rows with the value 500 in the c2 column:

    DBCC SHOW_STATISTICS ('test1', 'idx1') WITH HISTOGRAM


    Fig 2. 500 falls in the range where 501 is the highest key value (RANGE_HI_KEY). AVG_RANGE_ROWS is, for this and all other ranges, 1000 rows.

    This is the information that the Optimizer will use when estimating how many rows will be returned by any operator in the query plan. Let’s look at the problem in action.

    -- Slow
    DECLARE @mytime DATETIME
    SET @mytime = GETDATE()

    SELECT c1, c2 FROM test1 WHERE c2 = 500

    SET @mytime = GETDATE() - @mytime
    SELECT @mytime


    Fig 3: The query returns 0 rows and executes in 253 ms.


    Fig 4. Showplan output, there is a big discrepancy between the Estimated and Actual number of rows.

    This query returns 0 rows, but SQL Server chooses a table scan because the statistics imply that 1000 rows will be returned, and if it didn’t do a table scan it would have to do a lookup for an estimated 1000 rows to get c1 from the table data (the heap). If c1 was not included in the SELECT list, it could just scan the leaf level of the non-clustered index to satisfy the query:

    SELECT c2 FROM test1 WHERE c2 = 500


    Fig 5. Because all data can be found at the leaf level of the non-clustered index, the query results in an index seek.

    If we force an index seek, the query executes much faster as well:

    DECLARE @mytime DATETIME
    SET @mytime = GETDATE()

    SELECT c1, c2 FROM test1 WITH (INDEX = idx1) WHERE c2 = 500

    SET @mytime = GETDATE() - @mytime
    SELECT @mytime

    The index hint is honored and the query runs a lot faster:

    Fig 6. (execution plan with the index hint – an index seek)

    Fig 7. (query duration with the index hint)

    Let’s try another query:

    DECLARE @mytime DATETIME
    SET @mytime = GETDATE()

    SELECT c1, c2 FROM test1 WHERE c2 = 502

    SET @mytime = GETDATE() - @mytime
    SELECT @mytime


    Fig 8: A table scan is chosen, and in this case it is the best option, since the 1000 estimated rows equal the 1000 actual rows returned.


    Fig 9. The table scan takes 287 ms to execute.

    Let’s force an index hint for this last query, just to prove the point:

    DECLARE @mytime DATETIME
    SET @mytime = GETDATE()

    SELECT c1, c2 FROM test1 WITH (INDEX = idx1) WHERE c2 = 502

    SET @mytime = GETDATE() - @mytime
    SELECT @mytime


    Fig 10. When forcing an index seek, the query takes much longer, proving that a table scan is the right choice if 1000 – or more – rows are expected to be returned.

    How do you diagnose problems related to statistics granularity and/or skewed data?

    This is a little bit tricky. You need to find queries that contain operators with a big discrepancy between EstimatedRows (how many rows SQL Server expects the operator to return) and ActualRows (number of rows that were actually returned) as reported in the Actual execution plan.

    If you have a single query, and the query is simple enough, it is relatively easy. In SQL Server Management Studio, choose “Display Actual Execution Plan” from the Query menu or simply press CTRL-M, and then run your query. If you hover over the icons (representing operators), the pop-up will present both EstimatedRows and ActualRows, as in Fig 4, Fig 5 and Fig 8 above.

    But if you have some workload you need to investigate, this method will be somewhat cumbersome. You already know that the plan is accessible via DMVs but even if none of the issues with sys.dm_exec_* DMVs that I described in "How to get High Quality Data about Query Performance" (http://sqlblog.com/blogs/elisabeth_redei/archive/2009/03/01/how-to-get-high-quality-information-about-query-performance.aspx) apply, you still cannot use it because sys.dm_exec_query_plan contains the compile time representation of the query plan which means it will only contain estimates and not actual values for rows returned.

    Again, Profiler (or the corresponding sp_trace* procedures) comes to the rescue (Gail Shaw has written an excellent post on how to use the sp_trace procedures on simple-talk.com, http://www.simple-talk.com/sql/performance/finding-the-causes-of-poor-performance-in-sql-server,-part-1/). I will do it using the graphical interface.

    Setting up The Profiler Trace to Capture Execution Plans

    Start Profiler with a new trace and choose a blank template. On the Event Selection tab, click "Show all columns" and "Show All Events" and choose only the Performance: Showplan XML Statistics Profile event. Click "Column Filters" and add any relevant filters, such as ApplicationName, or, if you are only interested in traffic from a specific login, LoginName.

    Then move on to the Events Extraction Settings and click "Save XML Showplan events separately". Once you have specified the path, you can choose "All XML Showplan batches in a single file".

    The events will end up in a file with the .sqlplan extension. You can double-click this file and it should open up in SQL Server Management Studio. At this point you can hover over the operators just as you would over a graphical execution plan generated in SQL Server Management Studio.

    But let's pretend we have a little bit more than the 4 queries I issued above. I want to find out which indexes have statistics that lead to large discrepancies between row estimates and actual rows returned.

    The attached script, GetEstimateVsActualRows.sql*, gives the following output for the 4 queries I have issued in the article:


    Fig 11. Estimates vs. Actual

    The output reveals that the same index (or rather statistics) sometimes gives correct estimates and sometimes very misleading estimates for Idx1 (the Diff column).
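
    The attached script itself is not reproduced in the post, but the core idea is to shred the captured Showplan XML. As a rough illustration (my sketch, not the author's script), something like this pulls estimated vs. actual rows per operator out of a single "Showplan XML Statistics Profile" event:

    -- Sketch: compare estimated and actual rows per operator in one actual execution plan.
    DECLARE @plan XML
    -- SET @plan = ... (load a Showplan XML Statistics Profile event here)

    ;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
          op.value('@PhysicalOp', 'nvarchar(60)') AS PhysicalOp
        , op.value('@EstimateRows', 'float') AS EstimatedRows
        -- sum the per-thread counters so parallel plans are handled as well
        , op.value('sum(RunTimeInformation/RunTimeCountersPerThread/@ActualRows)', 'float') AS ActualRows
    FROM @plan.nodes('//RelOp') AS n(op)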

    Now there are two reasons why this could happen.

    1. The problem I am describing in this post – skewed data or problems with statistics granularity
    2. The plan for a particular stored procedure (or any reusable plan) has been cached with an atypical value – or the opposite, it has been cached with typical values and suddenly an atypical value is passed as a parameter to the procedure

    I am not going to talk about the second scenario in this post (but I might do so in a later post).

    What can you do if you encounter this problem?

    Several things actually:

    1. First of all, check whether increasing the SAMPLE RATE (if not already at FULLSCAN) alleviates the problem (although in my case it doesn’t – simply because I made sure it wouldn’t):

    You can follow the steps I provided in http://sqlblog.com/blogs/elisabeth_redei/archive/2009/03/01/lies-damned-lies-and-statistics-part-i.aspx.

    2. Provide an index hint (as I did in Fig 6 and Fig 7 above)
    3. Create a covering non-clustered index on (c2, c1) to avoid scanning the entire table (all data will be present in the leaf level of the non-clustered index). However, because the test1 table consists of only those two columns, it would have no effect here.
    4. "Hide" the value you are looking for from the SQL Server optimizer so it will not use the histogram to make the estimate.

    This will only work if the overall selectivity as presented by DBCC SHOW_STATISTICS ... WITH STAT_HEADER gives an estimate that works well for most instances of the query (i.e. no matter what value you pass in, you get satisfactory performance).

    You can achieve this by wrapping your query in a stored procedure and using a local variable:

    CREATE PROCEDURE usp_HideParam (@col1 INT)
    AS
    DECLARE @newcol1value INT
    SET @newcol1value = @col1

    SELECT c1, c2 FROM test1 WHERE c2 = @newcol1value

    (In my – extreme - case it doesn’t work very well at all because the Estimate will still be 1000)

    Obviously the reverse should be considered as well – if you want the optimizer to use the statistics in the histogram (which by the way is only created for the first column in the index), you need to make sure that the value you pass into the stored procedure is the value used in the statement.

    For instance, if I execute the following query, I will get an index seek because EstimateRows = 1 for 601, which is outside the range of values (Fig 1):

    SELECT c1, c2 FROM test1 WHERE c2 = 601

    Whereas

    usp_HideParam 601

    … results in EstimateRows = 1000 and, consequently, a table scan.

    5. Wrap the query in a stored procedure and use the OPTIMIZE FOR hint (see the sketch after this list).

    Again, the goal is to create a plan that performs satisfactorily for most instances of the query.

    6. Use the USE PLAN hint
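
    As a rough sketch of option 5 (my example, not from the original post; the value 250 is just a made-up "typical" value for this table):

    -- Sketch: compile the plan for a typical value rather than for whatever value
    -- happens to be passed in on the first execution.
    CREATE PROCEDURE usp_OptimizeForTypical (@col1 INT)
    AS
    SELECT c1, c2
    FROM test1
    WHERE c2 = @col1
    OPTION (OPTIMIZE FOR (@col1 = 250))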

    Needless to say, none of the above suggestions except the second (providing an index hint) would work very well in the above reproduction scenario, and they wouldn't be very meaningful either. If the problem manifests itself only for one or a very few values in the column, it is hardly worth the effort.

    Nevertheless, for a more natural, but still skewed, data distribution where increasing the sample rate of the statistics does not alleviate the problem, it might be better to have predictable – although not optimal – performance than a completely random performance pattern.

    * This script was put together with bits and pieces from http://sqlblog.com/blogs/michael_coles/archive/2009/03/03/practical-sql-server-xml-part-2.aspx and http://sqlblog.com/blogs/jonathan_kehayias/archive/2009/07/27/digging-into-the-sql-plan-cache-finding-missing-indexes.aspx and a whole lot of sweat! It works on 2005 and 2008 showplan output, on parallel as well as non-parallel execution plans.

  • Lies, Damned Lies And Statistics – Part I

    One of the biggest call generators at Microsoft SQL Server Support when SQL Server 7.0 was released was issues with statistics. In the beginning, they tended to be about the Autoupdate Statistics feature interfering with user activity, but when SQL Server 2000 came and systems running SQL Server became larger, issues with skewed statistics/data and problems related to the sample rate used when gathering statistics emerged – and in 2005, the challenges are the same. There are some new statistics features in 2008 but overall, this post applies to 2008 as well.

    First a few words on the symptoms that are typical for issues related to statistics. What they all have in common is seemingly strange variations in duration when the server has no resource problems and you have confirmed that there is no blocking:

    - The same query (with the same values in the search argument or WHERE clause) varies in duration without any related blocking or resource problem

    - Some queries degrade in performance as time passes since the last Index rebuild

    - The same procedure will perform differently depending on what values are passed into it

    - Intermittent performance problem on some tables without any related blocking or resource problem

    Although all of these symptoms may be related to other issues, I always look into the quality of the statistics before I spend time on more time-consuming troubleshooting.

    Background - Statistics Default Settings

    Even though the default settings (which are Autoupdate and Autocreate Statistics ON for all tables and indexes) will take you a long way, it is absolutely necessary to have a more granular approach as your databases and your workload grow. There is no tuning available that can work around a lack of accurate statistics. The problems related to statistics (or lack thereof) are either about:

    - How often the statistics are updated

    - The sample rate used when the statistics are built

    One thing that you might not have thought about is when Autoupdate statistics will be triggered. A lot of people will say it is when a certain threshold for changes is reached (more details below) but that’s not the whole truth – it does not automatically start when the threshold is reached. It is part of the compilation or optimization process and will only be triggered when/if the threshold has been reached AND the index/column is touched by a query. The thresholds are really Recompilation thresholds and not Statistics thresholds.

    Simplified it works like this:

    [Diagram: how the recompilation threshold triggers an Autoupdate of statistics during compilation/optimization]

    Picture based on diagram in http://technet.microsoft.com/en-us/library/cc966425.aspx

    There are some other operations that will trigger an update of the statistics. And not only that – they will trigger an update of the statistics WITH FULLSCAN (i.e. using all rows in the index to build the statistics, as opposed to Autoupdate Statistics, which will use some sample rate):

    - DBCC DBREINDEX (but not DBCC INDEXDEFRAG)

    - CREATE INDEX …. WITH DROP_EXISTING

    If you do any of the above on an index, you will get the best statistics possible, and it is not too uncommon to see cases where switching from an index rebuild to UPDATE STATISTICS … WITH FULLSCAN gives the same performance boost. It is also important to know that once the Autoupdate Statistics process runs, it will reduce the sample rate of the statistics back down to the default.

    If your maintenance window is getting crowded, and index rebuild is part of your maintenance plans, you might want to experiment with a combination of index rebuilds and statistics updates (needless to say updating statistics is not going to do anything for fragmentation should that be an issue).

    Now to the reasons why Autoupdate statistics might not work well in all situations:

    1. Statistics get out-of-date or stale

    This is a problem related to large tables. What happens is that it takes more and more modifications for the Recompilation Threshold to be met. The formula for tables with more than 500 rows is:

    rowmodctr >= 500 + 20% of total rows (rowcnt)

    For a table with 1 000 000 rows, that means 500 + 200 000 = 200 500 modifications before the threshold is met.

    (the two columns, rowmodctr and rowcnt, can be found in the sys.sysindexes compatibility view).

    The problem that follows is that the Optimizer will have somewhat misleading information about the cardinality of tables and indexes (i.e. a WHERE or JOIN clause returns more or fewer rows than expected).

    2. Again for large tables, the default sample rate (used by the Autoupdate Statistics feature) may not be enough because it gets relatively lower as the tables grow larger (to avoid issues with Autoupdate Statistics process taking too long).

    3. Data in the column is naturally "skewed" (e.g. LastName; Smith vs. Redei for instance)

    4. The Autoupdate Statistics process interrupts your user activity

    How do you diagnose and address these issues? In this post I will explain how to do that for the first 2 items in the list above; stale statistics and problems with default sample rate.

    Stale statistics
    ===========
    To diagnose, you would typically look at two things:

    - When is the last time the statistics were updated?

    - What is the ratio rowcnt/rowmodctr (for tables/indexes with rowcnt > somenumber)?

    The solution to this is quite simple; you just need to schedule manual UPDATE STATISTICS jobs on the tables when a threshold that you deem reasonable is met:

    SELECT
          ssi.id
        , OBJECT_NAME(ssi.id) AS tblName
        , STATS_DATE(ssi.id, ssi.indid) AS StatsDate
        , ssi.indid
        , ssi.rowcnt
        , ssi.rowmodctr
        , CAST(ssi.rowmodctr AS DECIMAL) / CAST(ssi.rowcnt AS DECIMAL) AS ChangedRowsRatio
        , ss.no_recompute AS IsAutoUpdateOff
    FROM sys.sysindexes ssi
    LEFT JOIN sys.stats ss
        ON ssi.id = ss.object_id -- match on object as well as name so that statistics
        AND ssi.name = ss.name   -- with the same name on different tables do not cross-match
    WHERE ssi.id > 100
    AND ssi.indid > 0
    AND ssi.rowcnt > 500
    -- cast before dividing; rowmodctr/rowcnt is integer division and would almost always return 0
    AND (CAST(ssi.rowmodctr AS DECIMAL) / ssi.rowcnt) > 0.15 -- enter a relevant number
    ORDER BY 3

    You would probably want to change the ssi.rowcnt and the ratio to something that is relevant for your database.

    The output will look something like this:

    [Figure: example output of the query above]

    In this case, I have some statistics on a non-clustered index (indid > 1) which has not been updated for 3 years. The IsAutoUpdateOff = 0 tells me that Autoupdate statistics feature is ON for this particular index (this can also be viewed, or changed, with the sp_autostats procedure).
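
    For reference (my addition, not part of the original post), sp_autostats with just a table name lists the current setting per index/statistics, and a second argument changes it; 'tablename' is a placeholder:

    -- View the Autoupdate Statistics setting for all statistics on a table
    EXEC sp_autostats 'tablename'

    -- Turn Autoupdate Statistics off (or back on with 'ON') for the whole table
    EXEC sp_autostats 'tablename', 'OFF'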

    Once you have confirmed that stale statistics is an issue, you have two options:

    - Update the statistics for all indexes in the database using sp_updatestats (if Autoupdate Statistics is turned on, there is no point in using the ‘resample’ option of sp_updatestats)

    - Update statistics only for the tables you have identified with UPDATE STATISTICS (use UPDATE STATISTICS <tablename> WITH ALL if you want to update all indexes of a table)

    Low Sample Rate
    =============

    This is slightly trickier than the problem with stale statistics, both to diagnose and to solve. The way you would know that the sample rate is an issue is if a query that you think has a suspiciously high duration runs faster after you have run UPDATE STATISTICS ... WITH FULLSCAN on it (or after an index rebuild that gave you statistics with a full scan).

    It involves finding the point when the sample rate is high enough to create useful statistics but low enough for the process of updating statistics to fit into your maintenance window. You also need to decide how often the job should run.

    Your starting point should be the current sample rate which you will find with DBCC SHOW_STATISTICS:

    DBCC SHOW_STATISTICS ([schemaname.tablename], indexname) WITH STAT_HEADER

    Since this output is not available as a regular result set that you can query directly, I have attached a script that presents the output from DBCC SHOW_STATISTICS for all indexes of all tables and schemas in the current database. The output will look something like this:

    [Figure: STAT_HEADER output for all indexes in the database]

    What you want to look for is large discrepancies between Rows and RowsSampled (in the above output a full scan has been used to create the statistics so the numbers are equal).
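
    The attached script is not reproduced here, but the core trick is simply INSERT ... EXEC over DBCC SHOW_STATISTICS. A minimal sketch for a single index (table and index names are placeholders; the column list matches the SQL Server 2008 STAT_HEADER output – on SQL Server 2005, drop the last two columns):

    CREATE TABLE #StatHeader
    (
          [Name] sysname
        , Updated DATETIME
        , [Rows] BIGINT
        , RowsSampled BIGINT
        , Steps INT
        , Density FLOAT
        , AverageKeyLength FLOAT
        , StringIndex VARCHAR(10)
        , FilterExpression NVARCHAR(MAX) -- SQL Server 2008 only
        , UnfilteredRows BIGINT          -- SQL Server 2008 only
    )

    INSERT INTO #StatHeader
    EXEC ('DBCC SHOW_STATISTICS (''dbo.tablename'', ''indexname'') WITH STAT_HEADER')

    SELECT [Name], [Rows], RowsSampled,
           CAST(RowsSampled AS FLOAT) / [Rows] AS SampleRatio
    FROM #StatHeader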

    As mentioned, for the large tables where you believe that this might be an issue, you should start with some number above the default sample rate (RowsSampled/Rows) to get an idea of how long it will take to update the statistics with a certain sample rate. Then move the sample rate up until the execution time of UPDATE STATISTICS hits the threshold given by your maintenance window.

    Once you have found your ideal sample rate, you then need to set up jobs to run UPDATE STATISTICS ... WITH SAMPLE XX PERCENT, NORECOMPUTE and turn off the Autoupdate feature with:

    EXEC sp_autostats 'tablename', 'OFF'

    If the NORECOMPUTE clause is omitted, Autoupdate Statistics will be switched back on, and the next time that particular index passes the Recompilation Threshold, Autoupdate Statistics will run with the default sample rate (which you have just concluded is not enough to provide the optimizer with accurate statistics).

    In the next post I will show you how you can diagnose issues with “skewed” data/statistics.

  • Bufferpool Performance Counters

    Memory configuration and investigating memory conditions is still something that seems to cause a lot of confusion. There are numerous articles, blog posts etc. that talk about SQL Server memory, virtual memory, AWE and other factors, but not much about the one thing that I personally think gives a great understanding of the different counters we have learned to look at when investigating SQL Server memory.

    In this post I will talk a little bit about some mechanisms in the Buffer pool – i.e. the part of SQL Server memory where, above all, your data and index pages go – and also look at the performance monitor counters you can use when assessing the state of the Buffer pool.

    (When I refer to pages, I am exclusively talking about data and index pages. For log records, there is a special Log Manager and I will talk about how log records are written in another post.)

    First, let’s look at some Buffer pool concepts. A simplistic view of the Buffer pool looks something like this:

    [Diagram: simplified view of the Buffer pool]

    POOLS AND CACHE STORES

    Pools are used to cache homogenous, stateless data. All entries in the pool are considered equal - for example a pool of connections or a pool of network buffers.

    Cache stores are used to store stateful data and provide a set of memory allocation interfaces that can be reused by different consumers. One example is the Procedure Cache, which is divided into several different cache stores – e.g. one for ad-hoc SQL plans, another for plans for stored procedures/functions/triggers and a separate one for extended stored procedures.

    The Free List

    SQL Server keeps a minimum number of free pages on the Free List to be able to serve incoming requests without any delay. The amount of free pages that SQL Server tries to keep in the Free list (“Min Free“ in DBCC MEMORYSTATUS output) is calculated based on the size of the Buffer pool and the amount of incoming requests (Page Life Expectancy, the expected lifetime of a page in cache, being one indicator).

    Writing and Freeing buffer pages

    SQL Server uses a Least Recently Used (LRU) algorithm for aging pages in the Buffer pool. Basically a counter is incremented every time a page is referenced and decremented every time the lazy writer process sweeps the page.

    Any worker thread will at certain points (for example when an asynchronous IO is issued) check the memory status of the Buffer pool to make sure that a healthy number of free pages are available for new incoming requests. If the number of free buffers is not satisfactory, two things can happen:

    If the upper limit of the Buffer pool has been reached (the limit being either “max server memory” or available memory in the operating system; both reflected in the SQL Server Memory Manager:Target Server Memory counter):

    - the lazywriter process (although run by the worker thread) will sweep parts of the Buffer pool (it keeps track of where it left off the last sweep) and based on when the page was last used either leave it or – in case the page is dirty – flush it to disk and then put the page on the free list. A dirty page will not be flushed until the corresponding log records have been written to disk (for more information, see Write-ahead Logging in Books Online).

    If the upper limit of the Buffer pool has not yet been reached:

    - it will commit more reserved pages into the Buffer pool rather than move a previously used page to the Free List.

    This is how the Page Life Expectancy can stay very high on a server with little workload (or plenty of memory available) and also why Process:Private Bytes (sqlservr.exe) and SQL Server Memory Manager: Total Server Memory (more on these counters later) will keep growing even on a server with little activity.

    The lion’s share of writing and freeing buffer pages is actually done by “normal” worker threads.

    Lazywriter Thread

    Is a system thread that flushes out batches of dirty aged buffers and puts the corresponding buffers on the Free List in the same manner as described above. The main purpose of the Lazywriter is to maintain the Free List.

    Checkpoint process

    Is another system process that wakes up every minute and checks each database to see whether it has exceeded the Recovery Interval. Its purpose is to flush dirty pages in order to keep the number of transactions that have to be rolled forward during a potential recovery to a minimum (or rather to ensure that the sum of the recovery cost value for all transactions does not exceed the number of minutes specified in Recovery Interval). However, the Checkpoint process does NOT put buffers back on the Free List.

    Eager Write

    Is a special write mechanism used for non-logged IO operations such as BULK INSERT and SELECT INTO. Its purpose is to avoid so-called Buffer pool thrashing (which is what happens when large amounts of data that are unlikely to be re-used get read into cache).

    Now that we have some background – let’s look at the performance counters related to the buffer pool. You need to look at these counters altogether to get the complete picture. For instance, Page Life Expectancy is quite often all over the place - even dips to zero occasionally - even if there is no memory issue.

    SQL Server Buffer Manager:
    Lazy writes/sec
    Checkpoint pages/sec
    Free list stalls/sec
    Free pages

    Page life expectancy
    (Page writes/sec
    Page reads/sec)
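
    If you prefer to read these counters from inside SQL Server rather than from Performance Monitor, they are also exposed in sys.dm_os_performance_counters (a sketch of mine, not from the post; note that the "/sec" counters in the DMV are cumulative totals since startup, so you have to sample them twice and diff to get a rate):

    -- Buffer Manager counters as exposed in the DMV; the object_name prefix varies
    -- with the instance name, hence the LIKE.
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Buffer Manager%'
      AND counter_name IN
          ('Lazy writes/sec', 'Checkpoint pages/sec', 'Free list stalls/sec',
           'Free pages', 'Page life expectancy', 'Page writes/sec', 'Page reads/sec')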

    We already know that most writing and freeing of buffers is done by individual worker threads. This means that on a very healthy system, Lazy writes/sec and Checkpoint pages/sec will be close to zero. However, if the worker threads cannot quite keep up with the workload, you will see Lazy writes/sec climb.

    You might also see Free pages jumping up and down occasionally, and that in itself is not a problem. SQL Server can usually recover quickly from a zero Free pages situation. However, if you see Free list stalls, you can be pretty sure that your workload would benefit from more memory or from tuning to reduce the memory requirements (whether that is index tuning, query tuning or maybe even just an update of statistics).

    To summarize, if Free pages and Page life expectancy are intermittently dipping but you do not see any increase in Lazy writes/sec, the amount of memory available in the Buffer pool is probably enough for the workload (as measured by SQL Statistics: Batch Requests/sec).

    Page writes/sec and Page reads/sec give you information about how much SQL Server is contributing to hard page faults on the system. Because I typically look at workloads that are unknown to me, I find it hard to say anything about the absolute numbers. I usually look at them in comparison to some benchmark numbers (or in comparison to hard page faults overall on the server).

    You might wonder why I have not included Buffer Manager: Buffer cache hit ratio. The reason is simple: not all memory consumption is reflected in Buffer cache hit ratio, because not all memory – although most – is about data and index pages; you have things such as workspace memory or memory consumed by the optimizer. A high Buffer cache hit ratio doesn’t necessarily mean that all is well, although a low cache hit ratio does indicate issues with the Buffer pool (or rather with the utilization of the memory available in the Buffer pool). Having said this, on a finely tuned OLTP system it works quite well as an indicator, because you are typically not processing large amounts of data (which would require large workspace memory).

    If you look in Performance Monitor, you will notice you have a Buffer Partition and Buffer Node. This is because some structures are per processor and per NUMA node respectively. (Aside; on NUMA you will have one Lazywriter per NUMA node).

    There is another group of counters that concerns memory that requires special attention. On a finely tuned OLTP system, you normally don’t have issues with this but you should keep an eye on these counters as well when you are investigating the Buffer pool.

    SQL Server Memory Manager:
    Memory Grants Outstanding
    Memory Grants Pending

    Certain memory intensive queries require Memory Grants before executing and in order to balance the need for these queries with other user activity, SQL Server throttles the number of concurrent queries of this kind. Queries that require SORT or HASH operators will need a memory grant, or memory reservation, from the Memory Broker. The number of pending grants should be zero. If it’s not, it is a sign that some tuning or optimization should take place.
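
    The same information is available per query in sys.dm_exec_query_memory_grants (my addition, not from the post):

    -- Queries currently waiting for (or holding) a memory grant.
    -- grant_time IS NULL means the query is still waiting, which corresponds
    -- to Memory Grants Pending being non-zero.
    SELECT session_id, requested_memory_kb, granted_memory_kb, grant_time, wait_time_ms
    FROM sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC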

    Lastly, a recap on the memory counters for SQL Server overall memory:

    Process:Private Bytes:sqlservr.exe – all committed memory including the Buffer pool

    Process:Working Set: sqlservr.exe - the part of committed memory (Private Bytes) which is mapped to physical RAM

    SQL Server Memory Manager : Total Server Memory - the amount of committed memory used by the Buffer pool

    SQL Server Memory Manager : Target Server Memory – the amount of memory reserved for the Buffer pool. This number is set at SQL Server startup but will be adjusted if the server gets into a low-memory situation.

  • How To Get High Quality Information About Query Performance

      There are a zillion reasons why Profiler and its friends by far are the most potent tuning tools available for SQL Server DBAs. This post focuses on one of them but I will mention a few others at the end as well (download details for the tools mentioned are also listed at the end).

      With SQL Server 2005 a new world opened to SQL Server DBAs with the introduction of DMVs, and there are even more available in SQL Server 2008. There are two groups of DMVs that have, rightfully, gained enormous popularity - the sys.dm_db_index_* DMVs and the sys.dm_exec_* DMVs.

      With the introduction of the DMVs, I have noticed that a lot of people are not inviting Profiler to their tuning parties anymore – I will show you why he (she?) will always be on my guest list.

      In this post, I am going to talk about the limitations of sys.dm_exec_query_stats and sys.dm_exec_cached_plans. I will then show you how you can get aggregated performance data with much higher quality using Profiler trace data and some of the easy-to-use lightweight tools that Microsoft and the community provide for free.

      Needless to say, there are situations when running a trace is not feasible and the sys.dm_exec_* DMVs are a very welcome option. It’s never going to hurt anyone to tune whatever TOP queries you identify with these DMVs. For real-time troubleshooting, they are probably also your only option, but then you need to be aware that sys.dm_exec_query_stats only contains aggregated data about query executions that have actually finished executing. sys.dm_exec_requests – or some combination – is a better option for real-time troubleshooting.

      However and whenever you use sys.dm_exec_query_stats, you need to be aware of its limitations.

      There are numerous scripts out there that query sys.dm_exec_query_stats from different angles and they will typically look something like this:

      SELECT
            (total_elapsed_time/execution_count)/1000 AS [Avg Exec Time in ms]
          , max_elapsed_time/1000 AS [MaxExecTime in ms]
          , min_elapsed_time/1000 AS [MinExecTime in ms]
          , (total_worker_time/execution_count)/1000 AS [Avg CPU Time in ms]
          , qs.execution_count AS NumberOfExecs
          , (total_logical_writes + total_logical_reads)/execution_count AS [Avg Logical IOs]
          , max_logical_reads AS MaxLogicalReads
          , min_logical_reads AS MinLogicalReads
          , max_logical_writes AS MaxLogicalWrites
          , min_logical_writes AS MinLogicalWrites
          , (
              SELECT SUBSTRING(text, statement_start_offset/2 + 1, -- offsets are byte-based, hence /2; + 1 to land on the right character
                  (
                      CASE WHEN statement_end_offset = -1
                           THEN LEN(CONVERT(NVARCHAR(MAX), text)) * 2
                           ELSE statement_end_offset
                      END - statement_start_offset
                  )/2 + 1
              )
              FROM sys.dm_exec_sql_text(sql_handle)
            ) AS query_text
      FROM sys.dm_exec_query_stats qs
      ORDER BY [Avg Exec Time in ms] DESC

      The first time you ever used this DMV you probably thought that the gates of heaven had just opened up in front of you. No more getting approval for running traces, no more copying and wading around in MBs or GBs of trace data. For a chosen few, this might have been true, but for most of us it was not – especially if you, like me, tend to look at many different systems and have a limited amount of time to assess the workload.

      Depending on the access methods used on the particular server you are looking at, you might only see half of the truth; if you are lucky, the output shows you something very close to the whole truth. But it will never show you the whole truth.

      Why Not The Whole Truth?

      The inherent problem with sys.dm_exec_query_stats is that it shows you aggregations for the plans in sys.dm_exec_cached_plans. And contrary to what you might believe, not all query plans go into the procedure cache. Even if you did know this, you may not realize the full implication.

      Some of the things that will NEVER be cached are:

      1. DBCC and BACKUP statements

      2. Any query that was issued with the RECOMPILE option

      3. So called "zero-cost" plan queries - i.e. queries that are very cheap to compile but NOT necessarily cheap to execute.

      In addition to this,

      You cannot see the accumulated numbers for a query whose plan is never re-used (one that uses inline literals and is not parameterized implicitly or explicitly).

      Execution time for things like EXEC extended_stored_procedure or CREATE INDEX will not be included in sys.dm_exec_query_stats.total_worker_time.

      On top of this, a plan can get thrown out of cache at any point in time and once it is out, it will not show up in sys.dm_exec_query_stats.

      Unless you are certain that your ENTIRE WORKLOAD consists of queries that are parameterized and that none of your procedures contain “funky stuff” such as EXEC ext_stored_procedure, you absolutely have to use Profiler (or the corresponding sp_trace_* procedures) to get an accurate view of your workload.

      An example of an Ad-hoc query that would “escape” sys.dm_exec_query_stats is the following:

      USE AdventureWorks
      GO

      SELECT
            sp.salespersonid
          , so.salesorderid
      FROM sales.salesorderheader so
      JOIN sales.salesperson sp
          ON sp.salespersonid = so.salespersonid
      WHERE so.salesorderid = 43860

      The above query will generate a new plan for each execution, provided that the salesorderid is different, and you will find a different entry in sys.dm_exec_query_stats and sys.dm_exec_cached_plans for each execution:

      SELECT
            total_worker_time AS CPU
          , total_elapsed_time AS Duration
          , total_logical_writes AS TotWrites
          , total_logical_reads AS TotReads
          , qs.execution_count AS NoOfExecs
          , text
      FROM sys.dm_exec_query_stats qs
      CROSS APPLY sys.dm_exec_sql_text(sql_handle)


      Fig 1. The two executions show up as separate queries in sys.dm_exec_query_stats.

      For my second scenario, I created two stored procedures with the only difference that I added a WAITFOR (to simulate a call to e.g. an extended stored procedure) in one of them.

      CREATE PROCEDURE uspGetSalesPerson2 (@p1 INT)
      AS
      BEGIN
          WAITFOR DELAY '00:00:10' -- uspGetSalesPerson is identical but WITHOUT the WAITFOR

          SELECT
                sp.salespersonid
              , so.salesorderid
          FROM sales.salesorderheader so
          JOIN sales.salesperson sp
              ON sp.salespersonid = so.salespersonid
          WHERE so.salesorderid = @p1
      END

      And then ran them:

      uspGetSalesperson2 44501
      GO
      uspGetSalesperson 44501


      Fig 2. As you can see, the 10 WAITFOR seconds in uspGetSalesperson2 are not accounted for in the total_elapsed_time (Duration) at all.

      What About SQL Server 2008?

      In SQL Server 2008, there is something called query fingerprints/query hash which will let you see aggregates for non-parameterized queries such as the above. It can also be used to identify which queries in your workload would benefit from parameterization.

      However, it still requires that the plan was put in cache in the first place. The Activity Monitor uses this new feature but the Management Datawarehouse does not in the RTM version (you can read more about this new feature in Bart Duncan’s blog, “The Best New SQL 2008 Feature You Never Heard Of”, http://blogs.msdn.com/bartd/archive/2008/09/03/Query-Fingerprints-and-Plan-Fingerprints_3A00_-The-Best-New-SQL-2008-Feature-You_2700_ve-Never-Heard-Of.aspx).
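
      For example (my sketch; the query_hash column exists in sys.dm_exec_query_stats from SQL Server 2008 onwards), you can aggregate on the hash so that the literal-only variants of the same statement roll up into one row:

      -- Aggregate cached-plan statistics by query fingerprint (SQL Server 2008)
      SELECT query_hash
           , SUM(execution_count)    AS Execs
           , SUM(total_worker_time)  AS TotalCPU
           , SUM(total_elapsed_time) AS TotalDuration
      FROM sys.dm_exec_query_stats
      GROUP BY query_hash
      ORDER BY SUM(total_worker_time) DESC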

      To address the problem with duration for statements other than SELECT/INSERT/UPDATE/DELETE, there is a new DMV in SQL Server 2008 called sys.dm_exec_procedure_stats which aggregates consumption for ALL statements executed within a stored procedure.
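
      A minimal sketch of what that gives you (my example, not from the post):

      -- Per-procedure totals, including time spent in statements that
      -- sys.dm_exec_query_stats does not cover (SQL Server 2008)
      SELECT OBJECT_NAME(object_id, database_id) AS ProcName
           , execution_count
           , total_elapsed_time/1000 AS TotalElapsedMs
           , total_worker_time/1000  AS TotalCpuMs
      FROM sys.dm_exec_procedure_stats
      ORDER BY total_elapsed_time DESC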

      Enter Profiler And Friends

      Anyone who has used Profiler knows that it in itself can do nothing about the above – in fact it does anything BUT aggregate data. This is when the friends should join the party. For the above query (first executed with so.salesorderid = 44501 and then with so.salesorderid = 43860), the equivalent output from my personal favorite friends, SQLNexus and Reporter, look like this:


      Fig 3. Notice the number of executes and how ReadTrace has parsed the inline literal in the Query Template in order to be able to aggregate the performance data.

      And for my second scenario with the missing WAITFOR duration:


      Fig 4. Notice the difference in Duration for uspGetSalesperson2 compared to Fig 2.

      So what did I do? I first ran a Profiler trace with the default template adding just the StmtStarting and StmtCompleted events (although in a real world scenario you would want to add some more events and you might also save the trace definition as a script and use the sp_trace_* procedures to avoid the overhead of the Profiler GUI).

      I then ran SQLNexus, connected to one of the SQL Server instances on my machine and imported the trace file I had just created. SQLNexus will at this point create a database called SQLNexus (the default name, which can be changed) where the crunched trace data goes, and then generate a number of reports. The number of reports available depends on what events you included in the trace.


      Fig 5

      If you switch to “ReadTrace_Main” you will find the output shown in Fig 3 and Fig 4 at the bottom of the screen. As you can see in Fig 3 and 4, the information is somewhat limited and there is certainly some vital information missing. For instance, I like to know whether there are variations in execution times, to pinpoint intermittent bad execution plans, resource issues or blocking.


      Fig 6.

      By clicking the statement in the grid, you will get to the “Unique Batch Details” report which at the end has the following data:


      Fig 7.

      In order to get that information, you can query the data in the SQLNexus database as well. In addition to all the data necessary to produce the reports in the SQLNexus GUI, it also contains a number of views you can query (or you can query the base tables directly). You can also make your own custom reports.

      To get the data in one single report, I ran the following SELECT statement:

      USE sqlnexus
      GO

      SELECT
            vwU.NormText
          , vwB.CompletedEvents
          , vwB.Duration
          , vwB.AvgDuration
          , vwB.Reads
          , vwB.AvgReads
          , vwB.Writes
          , vwB.AvgWrites
          , vwB.CPU
          , vwB.AvgCPU
          , vwB.HashID
      FROM readtrace.vwBatchUtilization vwB
      INNER JOIN readtrace.vwUniqueBatches vwU
          ON vwB.hashid = vwU.hashid


      Fig 8. Notice the HashID column to the far right – that is basically a “query fingerprint” for this particular query.

      Using Reporter, Baseline and Comparison Data

      One really cool feature of Reporter is that you can load two different trace data databases and look at comparative data. So you could have one baseline database and then, at any point in time, capture a new trace and compare the results in Reporter:


      Fig 9. Comparative data from Reporter.

      It will also identify which queries are found in which trace data set (i.e. if there are new TOP n queries on the scene, or if any of your old TOP n queries are gone). Be aware that you can change the TOP parameter to show anything from TOP 3 to TOP 100.


      Fig 10. Identifying new queries with Reporter.

      More Information About the Tools

      As mentioned, SQLNexus has a number of drill-downable reports available and it can also import Performance Monitor output and WAITSTATS (sys.dm_os_wait_stats output) data. Reporter, on the other hand, has the same drill-down reports but works only with trace data (and some extra features, such as the comparison feature mentioned above, that you will not find in SQLNexus).

      Both utilities require ReadTrace, one of the RML Utilities released by Microsoft, and the data format for both Reporter and SQLNexus is the same. By default, ReadTrace creates a database named PerfAnalysis, but if you run ReadTrace via SQLNexus, the name will, as mentioned earlier, be SQLNexus by default.

      In SQLNexus, you can point to the trace files and it will import them for you whereas Reporter requires you to first run ReadTrace to create the database. If you are a little bit lazy like me (or “productivity focused” as I like to call it) you can use the GUI in SQLNexus to create the trace data database and then point Reporter to that database.

      Another perk is that once you have the trace data, you can easily feed it to the Database Tuning Advisor (which is a good idea considering the limitations of the sys.dm_db_index_* DMVs… but that’s another story… and an upcoming blog post).

      My strong recommendation is that you at least every now and then try to find a window where you can capture trace data to make a full assessment of the workload of your SQL Server(s).

      If the above is not enough to convince you to bring along Profiler and his cool friends, I will mention a few other reasons:

      1. The ability to replay trace data. For instance, before applying a service pack, capture a trace of your critical workload and replay it against an upgraded test server. (With OStress, another RML Utility, or the built-in “Replay events using multiple threads” feature of Profiler, you can simulate multiple users.)

      2. Profiler will also catch the number of rows returned to the client, so you can identify queries that return over-the-top amounts of rows. These queries are not necessarily expensive to execute.

      3. You can easily detect if cursors are used, and in particular cursors with a low row fetch size.

      4. It will give you information about runtime errors and performance-related warnings (such as HASH and SORT warnings).

      5. If you have a Performance Monitor log from the same time window, you can correlate the data from within the Profiler GUI.

      In essence, there is nothing else available that will give you a complete picture of the workload on the server. The best method to get to know a system is to gather and analyze trace data (with whatever tool you are comfortable with) and maybe even spend a bit of time just browsing through a trace of some of your more critical workload(s).

            More Information and Download Areas

            You can find (a lot) more information about the tools in the following article. At the end of the article you will find links to download areas:

            “Precision Performance for Microsoft SQL Server using RML Utilities 9.0”

            http://sqlcat.com/technicalnotes/archive/2008/02/01/precision-performance-for-microsoft-sql-server-using-rml-utilities-9-0.aspx
