THE SQL Server Blog Spot on the Web


The Rambling DBA: Jonathan Kehayias

The random ramblings and rantings of a frazzled SQL Server DBA

Troubleshooting the SQL Server Memory Leak (or Understanding SQL Server Memory Usage)

I guess this could be considered a continuation of my last post about SQL Server Memory, Understanding the VAS Reservation (aka MemToLeave) in SQL Server, but for a long time I have noticed that when people don’t understand how SQL Server uses memory on their server, they immediately begin to think that SQL Server has a memory leak.  So in this post I’ll dive into how SQL Server allocates and uses the physical memory available on a server.

Proper management of server resources is crucial to maintaining good performance, and as we all know a server only has a finite amount of resources.  Often when performance problems occur, the first counters system administrators look at are CPU utilization and memory utilization, most often, I would suspect, through the Windows Task Manager and then PerfMon.  When a Windows administrator or user who doesn't understand SQL Server sees that SQL Server is consuming 2.7GB of memory on a 4GB server, the first conclusion they come to is that SQL Server has a memory leak.  Nothing could be further from the truth.  This misconception has become even more common lately as 64 bit servers proliferate and the expanded VAS (I covered this in my last post) allows SQL Server to commit all of the available memory on the server without advanced configuration of options like AWE.

So what exactly is going on here?  Well, first recall that SQL Server has two types of memory usage: VAS (aka MemToLeave on 32bit servers) and the Buffer Pool, or BPool.  When SQL Server starts up, the maximum size of the BPool is calculated based on the server's configuration.  For 32bit servers without AWE enabled, the VAS reservation is calculated and reserved first, then the remaining user mode VAS is compared to the amount of physical memory available on the server.  The smaller of the two values is then compared to the max server memory configuration option, and the smallest number becomes the maximum size for the BPool.  For example, on a 32bit server with 4GB RAM, the /3GB boot.ini switch, and a max server memory configuration of 2048MB, the maximum size of the BPool would be 2048MB:

3072MB (3GB) user VAS - 384MB = 2688MB available VAS
4096MB (4GB) Physical Memory > 2688MB available VAS
2688MB available VAS > 2048MB max server memory configuration
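The min-of-three arithmetic above can be sketched in a few lines of Python (a hedged illustration of the rule as described, not SQL Server's actual code; the 384MB figure is the VAS reservation from the example):

```python
def bpool_max_no_awe(user_vas_mb, vas_reservation_mb, physical_mb, max_server_memory_mb):
    """Sketch of 32-bit, non-AWE BPool sizing.

    The VAS reservation (MemToLeave) is carved out of user-mode VAS first;
    the BPool maximum is then the smallest of the remaining VAS, the
    physical memory, and the max server memory setting.
    """
    available_vas_mb = user_vas_mb - vas_reservation_mb
    return min(available_vas_mb, physical_mb, max_server_memory_mb)

# The example from the text: /3GB switch (3072MB user VAS), 384MB reservation,
# 4GB RAM, max server memory = 2048MB
print(bpool_max_no_awe(3072, 384, 4096, 2048))  # -> 2048
```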

However, when AWE is enabled, the BPool maximum size isn't restricted by the available user mode VAS.  Instead the maximum size of the BPool is based on the physical memory on the server or the max server memory configuration setting, whichever is smaller.  For example, on a 32bit server with 8GB RAM, the /PAE boot.ini switch, AWE enabled, and a max server memory configuration of 6144MB, the maximum size of the BPool would be 6144MB:

8192MB (8GB) Physical Memory > 6144MB max server memory configuration

For 64 bit servers, there is no need to use AWE to allocate memory above 3GB for SQL Server; since the user mode VAS is 8TB, there is always ample VAS to utilize all of the physical memory available on the server.  However, the AWE mechanism for allocating memory is still useful and available on 64 bit servers.  It is used whenever the Lock Pages in Memory security right has been granted to the SQL Server service account, and it can improve the stability and performance of a 64 bit system (see Slava Ok's blog post Be Aware: Using AWE, locked pages in memory, on 64 bit).  For 64 bit servers, the maximum size of the BPool is the size of physical memory or the max server memory configuration, whichever is smaller.  For example, on a 64 bit server with 16GB RAM and a default max server memory configuration, the maximum size of the BPool would be 16GB.
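For both the AWE case and 64 bit servers, the calculation reduces to taking the smaller of physical memory and the max server memory setting.  A sketch (again illustrative only; 2147483647MB is the default max server memory value, effectively unlimited):

```python
def bpool_max_awe_or_64bit(physical_mb, max_server_memory_mb):
    """When AWE is enabled (32-bit) or on 64-bit servers, user-mode VAS no
    longer constrains the BPool; only physical memory and the max server
    memory setting matter."""
    return min(physical_mb, max_server_memory_mb)

# AWE example from the text: 8GB RAM, max server memory = 6144MB
print(bpool_max_awe_or_64bit(8192, 6144))         # -> 6144
# 64-bit example: 16GB RAM, default max server memory
print(bpool_max_awe_or_64bit(16384, 2147483647))  # -> 16384
```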

Once the calculations have been made, the address space for the BPool is reserved, but not committed.  SQL Server will only commit the physical memory as it needs it.  However, and this is what most people don’t seem to understand, once committed, the memory will not be released back to the operating system unless the server comes under memory pressure.  (see Slava Ok’s blog post Q & A: Does SQL Server always respond to memory pressure?)  When data is needed for a query, SQL Server first checks the BPool to determine if the data is already available.  If it is, the data in the BPool is used to run the query, which is why performance improves after the first time a query executes.  If the data isn’t in the BPool, it is read from disk into the BPool, where it is then used to run the query.  SQL Server attempts to keep data loaded in the BPool for as long as possible once it has been physically read from disk, and will commit additional memory up to the maximum calculated limit to accomplish this.  Once the maximum calculated limit has been reached, stale data is flushed from the cache by the memory clerks based on the counter ticks maintained internally in the caches (see Slava Ok’s blog post SQLOS Caching).
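The read-through behavior just described, check the BPool first, fall back to a physical read from disk, and keep pages resident until the limit forces eviction, can be sketched with a toy cache.  This is purely illustrative: SQL Server's memory clerks and clock-based aging are far more sophisticated than the oldest-page eviction used here.

```python
class ToyBufferPool:
    """Toy read-through cache illustrating the BPool behavior above.

    Pages are committed lazily on first read and kept until the pool is
    full; then the oldest page is evicted (a crude stand-in for the
    tick-based aging SQL Server's memory clerks actually perform).
    """
    def __init__(self, max_pages, disk):
        self.max_pages = max_pages
        self.disk = disk    # simulated storage: page_id -> data
        self.pages = {}     # committed pages, in insertion order

    def read(self, page_id):
        if page_id in self.pages:       # logical read: data already cached
            return self.pages[page_id]
        if len(self.pages) >= self.max_pages:
            # At the calculated limit: flush the stalest (oldest) page.
            self.pages.pop(next(iter(self.pages)))
        data = self.disk[page_id]       # physical read from "disk"
        self.pages[page_id] = data      # commit the page to the pool
        return data
```

For example, with a two-page pool, reading pages a, b, then c evicts a; re-reading a then requires another physical read.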

Since the operating system still requires memory to operate, you should set the max server memory configuration option to ensure that the operating system has sufficient memory once SQL Server has committed its memory for use.  This is especially important on 64 bit servers, where the larger VAS allows the SQL Server process to take all of the physical memory, which can result in the process being paged out and performance degrading.  For this reason, 64 bit servers should use the Lock Pages in Memory right so that the AWE mechanism is used to allocate memory, preventing paging of the BPool.  In the RTM releases of SQL Server 2005 and SQL Server 2008, this was an Enterprise Edition only feature.  This was later changed based on community feedback and is available in Standard Edition through the use of trace flag 845 when the appropriate updates have been applied to SQL Server (see An update for Standard SKU Support for Locked Pages…)

For further information, see the Memory Management Architecture topic in Books Online.

Some example posts on this topic from the MSDN Forums are:

SQL server memory leak
how to release memory from sql
Max Workspace Memory (KB) is high
sql takes too much ram
Memory consumption for SQL Server
Memory Consumption?
sqlservr.exe is continually eating more memory
Memory getting full

Hopefully this clears up the misunderstanding regarding how SQL Server allocates and consumes memory.


Based on Aaron’s comment (thank you, Aaron, for pointing it out), I should clarify that Task Manager doesn’t show the correct numbers for memory allocated to SQL Server on 64 bit servers and on servers using AWE.  To get the amount of memory used, you should read the following memory counters:

Performance object: Process
Counter: Private Bytes
Instance: sqlservr

Performance object: Process
Counter: Working Set
Instance: sqlservr

The Private Bytes counter measures the amount of memory that is currently committed. The Working Set counter measures the amount of physical memory that the process currently occupies. SQL Server also uses the following performance counter to expose the amount of memory that the buffer pool allocates:

Performance object: SQL Server:Memory Manager
Counter: Total Server Memory (KB)

Note: If the instance of SQL Server 2005 64-bit is a named instance, the name of the performance object is MSSQL$InstanceName: Memory Manager.
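If you want to capture these counters from the command line rather than the PerfMon GUI, the built-in typeperf utility accepts counter paths like the following (shown for a default instance; adjust the object name for a named instance as noted above, and verify the exact counter names on your system):

```
\Process(sqlservr)\Private Bytes
\Process(sqlservr)\Working Set
\SQLServer:Memory Manager\Total Server Memory (KB)
```

Saved to a file, this list can be passed to typeperf with the -cf option and a sample interval, e.g. typeperf -cf counters.txt -si 15.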

Published Monday, August 24, 2009 10:32 PM by Jonathan Kehayias



AaronBertrand said:

Even before all of this conversation, there is a more prevalent misunderstanding that a lot of users have: they check memory usage for sqlservr.exe in Task Manager, which we know very rarely tells the truth in 32-bit-land.

August 24, 2009 9:45 PM

Jonathan Kehayias said:

It gets even more confusing on Windows Server 2008 and Windows 7 if you look at Task Manager.

August 24, 2009 9:55 PM

AaronBertrand said:

Also see VMMap 2.2 (a very useful Mark Russinovich / Bryce Cogswell / sysinternals tool):

August 25, 2009 12:47 PM

Grumpy Old DBA said:

One point which is often missed is that when DEP is enabled (most PCs and servers now) you don't need the /PAE switch to make memory available (x32) to SQL Server.

August 26, 2009 4:09 AM

colinLR said:

DEP-enabled servers/PCs make all memory available for AWE - you don't need the /PAE switch - this sometimes catches people out when allocating memory.

August 27, 2009 3:15 AM

Marshall Sutherland said:

Have you ever seen any ideas for how to handle memory allocation for multiple instances on a single server? I have X instances set to more than 1/X of the server's memory in the hopes that the instances which need more will grow at the expense of idle ones, but I always worry that I'm starving the OS when I do that.

August 28, 2009 12:12 PM

delcons said:

Thanks Jonathan - this cleared up some confusion for me.

February 11, 2010 2:09 PM

Raghvendra Rai said:

This is a very good explanation of memory use.

February 15, 2010 7:34 AM

The Rambling DBA: Jonathan Kehayias said:

Last Thursday, March 25, 2010, the topic of Virtualization of SQL Server came up in the SSWUG Newsletter

April 2, 2010 1:08 PM

Jonathan Kehayias said:


That is correct.  For more information on this see my blog post:

It covers that topic specifically.

October 20, 2010 10:12 PM
