THE SQL Server Blog Spot on the Web


Lara Rubbelke

Interesting Things in the World of SQL Server

Memory Pressure on 64 Bit SQL Server 2005

I have just learned that my old blog will be disabled, so I will be recycling some of the more meaningful posts to SQLBlog. This one was written last year during an engagement where I was troubleshooting an issue with procedure cache growth. The good news appears to be that some relief is on the way (SQL Server Procedure Cache: More Relief on the Way).

Do you remember the days when you first started learning about 64 bit capabilities and dreaming of the day when you were no longer confined by the barriers of the 32 bit virtual address space? Those ideas produced dreams of a truly capable scale-up solution. Yes, 64 bit has been a blessing for many of our SQL Server databases. But as is often the case, there are some drawbacks. Such is the case for some 64 bit SQL Server 2005 database applications.

In the 32 bit OS, you have a total of 4 GB of virtual address space, with 2-3 GB of virtual address space available to applications (depending on how you configured your boot.ini file). The 32 bit version of SQL Server is very busy managing the procedure cache, user cache, log cache, extended stored procedures, binaries and buffer (data) cache within the limited confines of that 2-3 GB virtual address space. When you enable AWE on 32 bit SQL Server, the additional memory extends only the buffer (data) cache; none of the other items, including the procedure cache, can use it. SQL Server must still manage the procedure cache within the constraints of the 2-3 GB of available virtual address space.

Now let’s consider 64 bit.  These confines are history – and we can now enjoy the capabilities of a very large user cache, very large log cache, and as many are discovering a very large procedure cache.  As was the case with one customer this week, the procedure cache was using over half the available RAM for the instance.  Note:  This happens in specific scenarios.  I am not saying that 64 bit is bad, only that in some scenarios it is important to understand where you need to adjust your management of a database.

Consider an application which executes a large number of non-parameterized ad hoc statements and dynamic SQL. This is particularly an issue when statements are executed with EXECUTE rather than sp_executesql. See chapter 7 in "Expert SQL Server 2005 Development" for an excellent discussion of the difference (shameless plug!). When a statement is executed, SQL Server searches the procedure cache to see if a plan exists for the statement. If one does not exist, SQL Server saves a new execution plan in the procedure cache. This can quickly lead to a large procedure cache. These statements may never be executed again, but SQL Server cannot differentiate between a recently executed ad hoc statement that may be reused and one that never will be. If the plan is considered expensive - one which requires a lot of work to generate - SQL Server will not age it out very quickly. Since the 64 bit world has the luxury of larger volumes of RAM in which to manage the procedure cache, SQL Server may prefer to keep these large, expensive plans in cache.
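To make the EXECUTE vs. sp_executesql difference concrete, here is a small sketch (assuming the AdventureWorks sample database). The ad hoc form caches a separate plan for every distinct literal; the parameterized form caches one plan that is reused for any value:

```sql
-- Ad hoc dynamic SQL: each distinct literal compiles and caches a new plan
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT * FROM Production.Product WHERE ListPrice > 100';
EXECUTE (@sql);

-- Parameterized dynamic SQL: a single plan is cached and reused
-- no matter what value is passed for @price
EXEC sp_executesql
    N'SELECT * FROM Production.Product WHERE ListPrice > @price',
    N'@price money',
    @price = 100;
```

If the application runs the first form with hundreds of different price values, the procedure cache accumulates hundreds of single-use plans; the second form accumulates exactly one.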

Why is this a problem?  For starters, when SQL Server is under memory pressure it may instead flush pages from our valuable buffer (data) cache instead of flushing what it determines to be expensive plans in the procedure cache.  A larger procedure cache may also impact the time for SQL Server to search the procedure cache to determine if a plan exists.  When the procedure cache is full of meaningful and reusable plans, perhaps this is time well spent.   When your procedure cache is full of plans which will never be reused, it is a waste of resources and a waste of time.

Currently, there are a few options to manage the problem if you are not able to fix the code. Of course, fixing the code is the best option: reduce or eliminate dynamically built statements in application code, and reduce or eliminate ad hoc statements, or at least parameterize them. When that is not an option:

Execute DBCC FREEPROCCACHE at regular intervals.

This will flush the procedure cache.  This is the best option when you have exhausted all other options.  Flushing the procedure cache may cause your database application to slow down as SQL Server compiles and stores execution plans for procedures and statements.  In one of my past engagements, this was the only option given their environment.  They had set the database to 8.0 compatibility mode, which did not give them the luxury of the next solutions.  In the end, they fixed their code to parameterize the dynamic SQL and they no longer had problems.  (An interesting note is that this issue was caused by one small batch process that ran nightly - sometimes the little things can hurt a lot!)
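Before reaching for this option, it helps to confirm that single-use plans really are the problem. A quick sketch using the SQL Server 2005 plan cache DMVs (the usecounts = 1 filter isolates plans that have never been reused):

```sql
-- How much cache memory is held by plans that were used exactly once?
SELECT objtype,
       COUNT(*)                                     AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS total_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype
ORDER BY total_mb DESC;

-- If ad hoc single-use plans dominate, flush the procedure cache.
-- Expect a burst of compilations immediately afterward.
DBCC FREEPROCCACHE;
```

If the Adhoc row dominates the output, the flush (or, better, one of the parameterization options below) is likely to help.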

Use Forced Parameterization on the databases which are causing this issue. 

Forced parameterization will replace any constants in an eligible statement with a variable when the query is compiled.  This is particularly suited when your application builds the same statement with differing constants.  Consider the following statements:

SELECT * FROM Production.Product WHERE MakeFlag = 1 AND ListPrice > 0

SELECT * FROM Production.Product WHERE MakeFlag = 0 AND ListPrice > 1

A single parameterized statement is stored in the procedure cache:

(@0 int,@1 int)select * from Production . Product where MakeFlag = @0 and ListPrice > @1

In one customer situation, the procedure cache contained tens of thousands of large duplicate statements with differing constants in the where clause.  Forced parameterization significantly reduced the number of statements stored in the procedure cache.

This option is best suited when there are a large number of statements that are regularly executed with varying constants in the WHERE clause.  There are tradeoffs with this method, and I would strongly recommend reviewing the requirements and risks in BOL.  Note: This is not available if your database is in backward compatibility (8.0 or lower).
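Enabling forced parameterization is a single database-level setting (shown here against AdventureWorks as an illustrative database name). Remember that the database must be at compatibility level 90 or higher:

```sql
-- Check the compatibility level first; 80 or lower will not support this
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'AdventureWorks';

-- Turn forced parameterization on for the database
ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED;

-- To revert to the default behavior
ALTER DATABASE AdventureWorks SET PARAMETERIZATION SIMPLE;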

Use a Plan Guide to Force Parameterization at the Statement Level

This is the ideal option when you know that there are only a few statements which are causing the procedure cache to grow.  This will be the least intrusive way to force parameterization.  This option will allow you to still maintain a procedure cache and enjoy execution plan reuse.  This may not be a reasonable solution when you have hundreds of statements, as you will need to build a plan guide for each statement.  Note: This is not available if your database is in backward compatibility (8.0 or lower).
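As a sketch of the plan guide approach: sp_get_query_template produces the parameterized form of a statement, and sp_create_plan_guide of type TEMPLATE attaches the PARAMETERIZATION FORCED hint to just that one statement (the guide name below is illustrative):

```sql
DECLARE @stmt nvarchar(max), @params nvarchar(max);

-- Derive the parameterized template for the problem statement
EXEC sp_get_query_template
    N'SELECT * FROM Production.Product WHERE MakeFlag = 1 AND ListPrice > 0',
    @stmt OUTPUT,
    @params OUTPUT;

-- Force parameterization for this statement only,
-- leaving the rest of the database at simple parameterization
EXEC sp_create_plan_guide
    @name            = N'ForceParam_ProductFilter',
    @stmt            = @stmt,
    @type            = N'TEMPLATE',
    @module_or_batch = NULL,
    @params          = @params,
    @hints           = N'OPTION (PARAMETERIZATION FORCED)';
```

Every variation of that statement with different constants will now map to the single parameterized plan, while all other statements are unaffected.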


Published Friday, April 18, 2008 8:00 AM by Lara Rubbelke



Uri Dimant said:

Hi Lara

Tony has a great article about the subject

I think that TokenCache size plays its role as well. I'm not sure that it directly relates to the procedure cache, but it certainly slows down the server.

April 21, 2008 12:53 AM

Maciej Pilecki - SQL Server lessons from the field said:

The problem of the procedure cache being inflated with ad-hoc SQL plans that are almost never reused

September 23, 2008 10:56 PM

Nichole said:

This is a good description, but I have another issue with the proc cache hit ratio. We're not running out of space, not even close (15MB Max Memory and only 179MB being used by proc cache). The problem we're faced with is that the hit ratio goes down (it has fluctuated between 82-92%) while we're load testing with only one query. What would you suggest investigating to resolve such poor performance? Any help you can provide would be great -- we've been struggling with an intermittent performance issue for months and would like to just be done with it!

December 22, 2010 6:56 PM