
Joe Chang

Lock Pages in Memory and Large Page Extensions

The most important setting for SQL Server today is probably the Lock Pages in Memory option, set in Local Security Policy under User Rights Assignment. On recent versions of Enterprise Edition, granting this right to the SQL Server service account on systems with 8GB or more memory also activates Large Pages. It used to be that Large Page support was activated by trace flag 834, which required the Lock Pages in Memory right. As is typical, Microsoft officially provides no guidance or assistance in trying to quantify the benefit of this setting.
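A quick way to confirm which memory model an instance actually started with is to search the current error log for the startup messages (the exact entries are quoted in the comments below). A minimal sketch using the undocumented but widely used sp_readerrorlog procedure (first parameter is the log number, second selects the SQL Server log rather than the Agent log, third is a search string):

EXEC sys.sp_readerrorlog 0, 1, N'Large Page';          -- Large Page Extensions / Granularity / Allocated
EXEC sys.sp_readerrorlog 0, 1, N'locked pages';        -- 'Using locked pages for buffer pool'
EXEC sys.sp_readerrorlog 0, 1, N'conventional memory'; -- 'Using conventional memory in the memory manager'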

I recall a presentation by Thomas Kejser at SQL Bits that mentioned this setting doubled performance in one particular situation. Based on the description of Lock Pages in Memory and large page allocation, it seems perfectly reasonable that these two settings should have a meaningful performance impact. So naturally I attempted to investigate this in my test environment, and found zero measurable difference. I was certain that the principles behind this were correct, but had to accept that I was missing something, and left the matter alone.

I am currently doing an assessment of storage performance for a system already in production. This system has several instances of SQL Server, and hundreds of other processes starting and stopping. In the sequential IO portion of the storage performance tests, I wanted to look at a table scan reading from disk, both loading into an empty buffer cache and loading into a full buffer cache, forcing the eviction of other data. During the testing, I changed the SQL Server max server memory setting several times between 10GB and 70GB, with the test query being a 60GB table scan.
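For reference, the general shape of the test was along these lines. This is a sketch only: dbo.LargeTable is a placeholder name for a roughly 60GB table, and the COUNT_BIG(*) is used here just to drive the scan.

-- raise max server memory so the scan forces SQL Server to allocate from the OS
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 71680;  -- approximately 70GB
RECONFIGURE;

-- empty the buffer cache so the scan has to read from disk
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

-- the 60GB table scan (placeholder table name)
SELECT COUNT_BIG(*) FROM dbo.LargeTable;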

The most pronounced behavior was observed when the SQL Server instance maximum server memory was raised from 10 to 70GB (with 10GB of memory already allocated from the OS before the query started). The table scan query would cause SQL Server to start allocating additional memory from the OS, up to the 70GB limit. The query executed at a horrifically slow rate, averaging 16MB/s. When SQL Server was not allocating memory from the OS, the table scan generated disk IO at 2.0 to 2.3GB/sec. This was true for all combinations of loading into an empty or full buffer cache, with and without eviction. The consumption rate with data in memory was just over 1GB/s per core, showing excellent scaling from degree of parallelism 1 to 10.

None of the SQL Server instances had the Lock Pages in Memory right assigned, and hence all SQL Server processes were using the conventional memory manager. The operating system had been running for over three weeks with several SQL Server instances and hundreds of processes that started and stopped for various reasons. Overall system physical memory in use probably never exceeded 85% (of 1TB).

My interpretation of what happened is that the physical memory range had become heavily fragmented over this period, with so many processes allocating and de-allocating memory in 4KB pages. I would presume that the operating system can allocate 4KB pages from the free list much faster than 16MB/s. When processes de-allocate memory pages, the OS must zero out each page before handing it to another process (visible in the Demand Zero Faults/sec counter). There must be something that makes memory allocation very expensive, considering that the OS must track 256M pages of 4KB each to cover 1TB.
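One rough way to put a number on the allocation rate is to sample sys.dm_os_process_memory from another session while the scan is allocating; a sketch (the 10-second interval is arbitrary):

DECLARE @before_kb bigint, @after_kb bigint;
SELECT @before_kb = physical_memory_in_use_kb FROM sys.dm_os_process_memory;
WAITFOR DELAY '00:00:10';
SELECT @after_kb = physical_memory_in_use_kb FROM sys.dm_os_process_memory;
-- approximate allocation rate in MB/s over the sample interval
SELECT (@after_kb - @before_kb) / 1024.0 / 10 AS allocation_rate_mb_per_sec;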

This could explain why my test system showed no difference between large page allocation and the conventional memory manager. The test system had only a single instance of SQL Server, few other running processes, and tests were usually conducted from a clean boot. Even though I had changed SQL Server memory up and down several times, nothing would have seriously fragmented the physical memory. On top of that, the nature of the test queries may have consumed far more cycles than the underlying memory address translation, for either 4K or large pages.

There are some implications to using Lock Pages in Memory, which automatically enables large page allocations if the conditions are met. First, memory is allocated at SQL Server startup. Second, the operating system cannot page out locked memory, so the assumption is that the remaining memory is sufficient for the OS and other user processes. A side effect is that Task Manager reports only the working set, and not the private memory of the locked pages.

I am speculating that if a SQL Server instance with locked pages were stopped, an attempt to restart it later could fail if the OS cannot allocate sufficient contiguous 2MB blocks, which would then require rebooting the system. For this reason, the locked pages option should only be used on servers dedicated to SQL Server, and careful planning of the memory settings is needed for multi-instance environments.
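For multi-instance sizing, one simple sanity check is what the operating system itself reports as still available after each instance's max server memory has been set; for example:

SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;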

Below is a good starting point on this topic:

http://blogs.msdn.com/b/psssql/archive/2009/06/05/sql-server-and-large-pages-explained.aspx

 

Published Tuesday, September 17, 2013 10:46 AM by jchang


Comments

 

tobi said:

Interesting case. Consider profiling the system with ETW (xperf or perfview). I think ETW can also profile the kernel. You should see a clear hotspot somewhere. Symbols should tell which component of SQL Server or Windows is the culprit. Maybe this can give rise to a workaround, or a deeper understanding.

September 17, 2013 3:32 PM
 

jchang said:

The hot spot must be in Windows, as part of memory allocation. SQL Server query performance is in line with what I have measured on other systems, so long as SQL Server was not allocating memory from the OS.

The query performance showed correct scaling with parallelism up to the single socket's 10 cores; there was a drop-off in efficiency only when parallelism was spread across 2 sockets. Other activity on the system prevented meaningful full 4-socket scaling tests.

September 17, 2013 10:59 PM
 

jchang said:

Perhaps this might help:

With SQL Server at its max memory allocation from the OS, the table scan disk IO is 256K at DOP 1, and 512K at higher DOP.

While SQL Server is allocating memory from the OS, the IO size is 64K.
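One way to check the average read size without a Perfmon trace is from the file stats DMV; the counters are cumulative, so take two samples during the scan and difference them (a sketch, scoped to the current database):

SELECT DB_NAME(database_id) AS db, file_id,
       num_of_reads, num_of_bytes_read,
       num_of_bytes_read / NULLIF(num_of_reads, 0) AS avg_bytes_per_read
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);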

September 18, 2013 6:20 PM
 

Ewan said:

Hi Joe

Timely article for me. I've started looking into this as well and am unsure of the cost/benefit trade-off. I've been challenged by our VM admins to enable Large Pages as part of our standard build, but can find no convincing evidence that I should do this unless it is tested for specific applications (and therefore it is not standard build material).

Are you sure that Large Pages are enabled in EE for 8GB+, or is Large Pages Support enabled (as per Bob's article linked)?

I don't have an EE to test at the moment, but I have a recent 2012 Developer build (11.0.3368, LPIM, 32GB) and it doesn't enable Large Pages without 834. (It does enable support.) Usually I expect the same behaviour from Developer and EE.

Ewan

September 19, 2013 8:14 AM
 

jchang said:

Ewan: Developer edition has the complete set of EE features - so developers can develop for EE. (Of course, they don't want us to develop for SE.)

T834 was once necessary, perhaps in 2005; by 2008 R2 it's automatic with LPIM.

In the SQL error log you will see:

8/28/2013 8:50:22 PM Server Large Page Extensions enabled.

8/28/2013 8:50:22 PM Server Large Page Granularity: 2097152

8/28/2013 8:50:22 PM Server Large Page Allocated: 32MB

8/28/2013 8:50:22 PM Server Using locked pages for buffer pool.

if not, then

8/17/2013 2:17:27 AM Server Using conventional memory in the memory manager.

Or query sys.dm_os_process_memory for the columns large_page_allocations_kb and locked_page_allocations_kb
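For example, a minimal form of that check (non-zero values indicate locked and/or large page allocations are in effect):

SELECT locked_page_allocations_kb, large_page_allocations_kb
FROM sys.dm_os_process_memory;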

I would be very leery of putting a critical DB on VM, certainly not a DW system.

September 19, 2013 9:12 AM
 

Ewan said:

Hi Joe

Thanks for the response.

We don't have any DW servers on VM, but we do use VMs extensively. So far so good, although I too am leery of going VM on high-resource consumers.

On 2012 Developer, and EE up to 2008, I see everything up to 'Using locked pages for Buffer Pool', which is the critical line.

My understanding from Bob's note on Large Pages was that this last line would only appear (to show LP used for the buffer pool) if you used 834. Since my 2012 Dev edition doesn't use LP without 834, I expect EE to behave the same way, hence the question.

Ewan

September 19, 2013 9:25 AM
 

jchang said:

I have several SQL Server 2008 R2 and 2012 systems. None set the 834 flag, but they show the full set of Large Page and locked pages entries. The instances without LPIM say conventional.

Large Pages always required LPIM to activate, but it should be possible in the previous versions to use locked pages without Large Pages.

Now that I think about it, I do not have Dev edition installed anywhere; I wonder if there is a difference in Dev Ed?

September 19, 2013 3:04 PM
 

Ewan said:

Hi Joe

I still think there's something missing here, but I've only just started looking into this.

IIRC, LPIM is always required for Large Pages.

From Bob Ward's article:

So in summary:

•Large page support is enabled on Enterprise Edition systems when physical RAM is >= 8Gb (and lock pages in memory privilege set)

•SQL Server will allocate buffer pool memory using Large Pages on 64bit systems if Large Page Support is enabled and trace flag 834 is enabled

Where I have enabled 834, I get the following message:

2013-09-18 22:21:39.190 Server Large Page Extensions enabled.

2013-09-18 22:21:39.190 Server Large Page Granularity: 2097152

2013-09-18 22:21:39.270 Server Large Page Allocated: 32MB

2013-09-18 22:21:39.560 Server Using large pages for buffer pool.

2013-09-18 22:21:41.160 Server 24000 MB of large page memory allocated.

With 834, I get LARGE PAGES for buffer pool, not LOCKED pages for buffer pool. I should have double-checked this before my last post - sorry

24000 is the max memory setting and it all gets reserved at startup. If 834 is not enabled, normal memory management takes place and memory allocation grows with usage.
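As an aside, 834 only takes effect when supplied as a startup parameter (-T834); whether it is active globally can be confirmed with:

DBCC TRACESTATUS(834, -1);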

Again, tested only with Dev Ed so far.

Ewan

September 20, 2013 3:44 AM
 

Ewan said:

correction:

The entries from the log above are from a SQL 2008 server.

On my 2012 server, regardless of whether 834 is applied, I do not get the

'large pages for buffer pool', or

'x MB of large pages allocated' messages at startup.

But the entire max memory allocation is not reserved at startup unless 834 is used (only 230MB for this idle instance).

Bob's article says that the reason all of the memory is preallocated at startup is so that queries are not impacted while the server requests more memory for the Buffer Pool from VirtualAlloc().

So, either VirtualAlloc() is much quicker now and Large Pages can be assigned dynamically, or Large Pages are not used in the Buffer Pool unless 834 is used.

I haven't yet checked how Large Page usage changes as buffer pool usage grows. But if the behaviour is the same as 2008, I think it will not use large pages for the buffer pool at all without 834.

Ewan

September 20, 2013 7:04 AM
 

