Merrill Aldrich

Why Hekaton In-Memory OLTP Truly is Revolutionary

I just returned from the PASS Summit in Charlotte, NC – which was excellent, among the best I have attended – and I have had Dr. David DeWitt’s talk rolling around in my head since he gave it on Thursday. (Dr. DeWitt starts at 27:00 at that link.) I probably cannot do it justice, but I wanted to recap why Hekaton really is revolutionary, and not just a marketing buzzword.

I am normally skeptical of product announcements, and I find too often that real technical innovation can be overwhelmed by the desire to sell product, and then buried under so many layers of hype that it can be hard to decipher the real value. Ask me about *cough cough* Exadata sometime, or Big Data, if you want proof. In-memory databases can also tend in this direction.

This is not that. Dr. DeWitt was genuinely excited by the technology itself, and so am I. So what is it, and why is it important? First let me talk about what it isn't. I follow the line of Dr. DeWitt's argument closely here, so I hope he will forgive me for borrowing it.

Hekaton is not Just Your Tables in RAM

Simply having enough memory to hold the active portions of a database in memory is not enough to make real progress on the performance of database systems on today's hardware. If you have only worked on systems that are disk-bound, it's easy to imagine that solving the disk problem would immediately resolve everything. I have worked with a few people who argued along that line: "If only we could fit the database in RAM, queries would be instantaneous." However, as soon as you work on a system with a large memory resource and see how long it still takes to process data, just between the RAM and CPU, you quickly realize that huge RAM is not a silver bullet. Even with all the data in a conventional buffer cache, with little disk IO or very fast disk IO, there's certainly a ceiling.

What makes Hekaton significant is that the engineers realized that to make orders of magnitude performance increases in in-memory databases, some deep architectural changes would be required. I think, over and above Columnstore, that this is the most significant change in database tech that I have seen. This is a change that keeps the basic principles of the relational model intact, while totally revising the engine inside the server. It’s a bit like making a car, which all of us can still drive, and which feels familiar, with a brand new kind of engine under the hood. Not just a faster engine -- a different kind of engine.

This is still ACID, still driven by transactions and SQL, and we still interact with it using the same principles, but because there's a new technique for working on rows under the hood, it's dramatically faster than just tables in memory behind the existing query processor.

It’s not Just Removing “SQL-type” Relational Locks

One of the sound-bite friendly bits of this release is that it’s “lockless” or “latch-less,” and that’s a concept I can imagine being hyped and creating a lot of misconceptions. This is not just SQL Server running without locks. If that were the case, it would not work.

First, some background. There are two kinds of locks: locks down at the programming level of the server software itself, used for code synchronization, and locks that we use at the relational level to enforce transactional consistency. I find it helps to think of it this way: the developers who write SQL Server have to create parallel processing code that works without crashing or memory corruption. It has to be stable, while allowing multiple threads to run at the same time, accessing shared items in memory but not corrupting them. In Comp. Sci. or non-SQL circles this synchronization is an important ingredient that makes parallel processing possible, and it's conventionally implemented with "locks." Note this, though: synchronization is mandatory. Locks are just the developer's traditional method to deliver that synchronization. But this is not related to database transactions. It's not the locks we talk about in SQL Server. It's about whether the server code will actually run. I'll return to this in a moment.

To prevent confusion, when we DBAs have to fiddle with the server at this level to troubleshoot issues, the naming convention in SQL Server land for this code-sync lock is a latch. So, when we mean source-code-level thread safety, we call the thing that implements it a latch, while developers working on programs other than SQL Server might call it a lock.
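To make the distinction concrete, here is a minimal C++ sketch of what code-level synchronization looks like in any multithreaded program. This is purely illustrative, not SQL Server's actual code; the SharedPage and insert_row names are invented for the example.

// Minimal sketch (not SQL Server's actual code): a "latch"-style lock that
// protects a shared in-memory structure from concurrent corruption.
#include <mutex>
#include <vector>

struct SharedPage {
    std::vector<int> rows;   // shared data that several threads may touch
    std::mutex latch;        // code-level synchronization, not a database lock
};

// Any thread inserting a row must hold the latch; otherwise two threads
// resizing the vector at the same time could corrupt memory.
void insert_row(SharedPage& page, int value) {
    std::lock_guard<std::mutex> guard(page.latch); // acquired here, released on return
    page.rows.push_back(value);
}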

Atop this, the database system itself should deliver to its users an extra layer of transactional consistency that makes the data in the database logically correct. This is also mandatory. Database locks are the conventional method of implementing transactional consistency at the database level for users.

So:

Code-level synchronization is required so the server doesn’t crash or behave erratically.

Transactional consistency is required so that we get the results we expect from the database.

Locks and latches are a conventional and proven way to achieve the dual requirements of synchronization and consistency. You can’t just take them away without providing a substitute that also works. This is where the genius starts to show in Hekaton.

To remove the delays created by database-level transactional locking, they didn't just rip the locks out. They engineered an optimistic, row-versioning-driven way to process changes that is still transactionally consistent. Without that innovation, not having locks would be a problem, not an optimization. Hekaton is remarkable for providing an alternative that still delivers consistency without using locks to implement it. At a high level it looks a bit like what snapshot isolation does for reads, except that it works for writes, and it's much better suited to in-memory data structures.
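As a very rough illustration of the idea, and not of Hekaton's actual design, here is a sketch of versioned rows carrying begin/end timestamps, which is how this style of optimistic concurrency lets a reader see a consistent version of a row without taking any lock. All names here are invented for the example.

// Minimal sketch, not Hekaton's real design: optimistic, versioned rows.
// Each version carries begin/end timestamps. Readers never take a lock, and a
// writer installs a new version and validates at commit instead of blocking.
#include <cstdint>
#include <limits>
#include <memory>

constexpr uint64_t TS_INFINITY = std::numeric_limits<uint64_t>::max();

struct RowVersion {
    uint64_t begin_ts;                    // when this version became visible
    uint64_t end_ts = TS_INFINITY;        // when it was superseded (infinity = still current)
    int payload;                          // the row data, reduced to one int
    std::shared_ptr<RowVersion> next;     // the older version in the chain
};

// A reader at timestamp t walks the chain for the version visible at t, that
// is, begin_ts <= t < end_ts. No lock is taken, so readers never block writers.
int read_row(const std::shared_ptr<RowVersion>& head, uint64_t t) {
    for (auto v = head; v; v = v->next)
        if (v->begin_ts <= t && t < v->end_ts) return v->payload;
    return -1;                            // no version visible at this timestamp
}

In this scheme a write appends a new version, and only at commit does the transaction check whether some other transaction superseded the same row in the meantime; if so, it aborts and retries rather than ever waiting on a lock.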

It’s not Just Removing Latches

The new query processing method, using optimistic, row-versioned concurrency even for writes inside the engine, would, on its own, be a huge accomplishment. But wait! There's more. To return to the question of code synchronization, we still need that down at the binary level to make the server run. That requirement doesn't go away.

So what happened here? Well, as Dr. DeWitt describes, a gentleman called Maurice Herlihy did work in the area of providing code synchronization with a different, far faster method than latches. This is a problem similar to, but not exactly the same as, the locking problem. They share the symptom of blocking threads and creating idle CPU cycles by keeping a sync lock on a shared resource that multiple threads want to work on. This issue, though, moves beyond user transactions and has to work at the source-code level to guarantee synchronization. Without this, the program itself would tip over, values would be corrupted in memory, bad things would happen.

Dr. DeWitt could not explain the 'how' here in depth; it's probably too complex for a keynote. The important idea that I took away is that there now exist techniques that still guarantee code synchronization for parallel work but do not use latches as the implementation of that synchronization. Whatever the alternate implementation is, it's dramatically more efficient and solves this problem in a new way. The proof of that did show in the performance numbers in the presentation. I would love to know more, but I take him at his word that this works.
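For what it's worth, a standard building block in the lock-free literature that Herlihy helped establish is the atomic compare-and-swap (CAS) instruction. Purely as an illustration of the technique, and not as a claim about Hekaton's internals, here is a sketch of a latch-free stack push built on CAS:

// Sketch of latch-free synchronization using compare-and-swap (CAS), the
// classic primitive in the lock-free literature. Illustrative only.
#include <atomic>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

// Push without taking a lock: point the new node at the current head, then
// atomically swap it in. If another thread changed head first, the CAS fails,
// n->next is refreshed with the new head value, and we simply retry.
void push(int value) {
    Node* n = new Node{value, head.load(std::memory_order_relaxed)};
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
        // the failed CAS already reloaded n->next for us; loop and try again
    }
}

No thread ever waits on a lock here; the worst case under contention is a retry, which keeps the CPU doing useful work instead of idling.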

It’s not Just Faster Machinery

The third thing I really appreciated in the talk is the constant mention of CPU instructions per user transaction, instead of time per user transaction. This orientation to saving instructions per transaction is the key to faster systems in the future, as the number of cores in processors increases but their speed doesn’t. I think the emphasis on making the processing of in-memory data more CPU efficient is fantastic.

Someone finally had the insight to see that burning CPU cycles was no big deal on existing machines, when we were typically disk bound, but now, when there are no cycles to spare because we are not waiting for the disk, it has become strategically important to make fewer cycles process more data. And Hekaton does that. With the combination of the two locking and latching architecture changes above, plus native machine-code compilation to eliminate the overhead of interpreted code, the same CPUs can do more work in fewer instructions.
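To illustrate the "fewer instructions" point, and not as a description of Hekaton's actual generated code, here is a sketch contrasting an interpreted expression tree, which pays for virtual dispatch and pointer chasing on every row, with the same predicate baked into straight-line code ahead of time. The Expr, GreaterThan, and compiled_predicate names are invented for the example.

// Illustration of interpreted vs. compiled predicate evaluation. This is not
// Hekaton's generated code, just the shape of the saving.
#include <vector>

// Interpreted form: a tree of polymorphic nodes walked for every single row.
struct Expr {
    virtual ~Expr() = default;
    virtual bool eval(const std::vector<int>& row) const = 0;
};

struct GreaterThan : Expr {
    int col, constant;
    GreaterThan(int c, int k) : col(c), constant(k) {}
    bool eval(const std::vector<int>& row) const override {
        return row[col] > constant;      // virtual call plus indirection per row
    }
};

// "Compiled" form: the same predicate emitted as straight-line code up front,
// so each row costs roughly a load and a compare. Fewer instructions per row
// is exactly what matters once the data is already sitting in memory.
inline bool compiled_predicate(const std::vector<int>& row) {
    return row[1] > 100;                 // hypothetical predicate: column 1 > 100
}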

The reason I am excited about this is that it will bring big performance gains to little guys like me running a 2-socket machine. This is not something that just pulls together a lot of fancy, expensive machinery (InfiniBand, PCI flash, scale-out, offloading, etc.) – this is a new way of implementing the query processor that will give huge performance advantages, perhaps even running on the exact same hardware.

Published Monday, October 21, 2013 3:47 PM by merrillaldrich


Comments

 

Greg Linwood said:

I agree with you that this is a revolutionary change & it is fantastic to see such a huge innovation coming from Microsoft after so many years with the core SQL engine basically working on the same design.

Interop with the existing query processor is also a wonderful idea, making it easier to transition to the technology in a piecemeal approach.

The language limitations will be an obstacle for many, particularly with native compilation, but hopefully we'll see some of these language limitations disappear asap with future releases.

David delivered the presentation very effectively, as you've pointed out with technical depth at exactly the right level for a PASS keynote. I watched it from my home in Melbourne, Australia around midnight on PASStv which was very nicely done by PASS.

October 22, 2013 4:27 AM
 

Meher said:

Very nicely summed up Merrill. Thanks for posting this.

October 22, 2013 8:43 AM
 

dd said:

Without the how, this is meaningless marketing hype.

October 26, 2013 7:29 PM
