Over the last few months I’ve been playing with Hekaton, since a customer of ours is evaluating it for one of their flagship products.
Here I’d like to share what I found while doing these tests. I think the results are really interesting for two main reasons:
- they show that Hekaton is not magic: good practices and good hardware still have to be in place
- Hekaton can be really fast, competing, performance-wise, with some NoSQL databases (but I’ll discuss this in another post)
For the tests I used a VM running on VMware Workstation, with the guest having 2 vCPUs and 4 GB of RAM. The disks are pre-allocated vDisks stored on a Samsung 840 Pro SSD.
Surely it’s not a server, but we wanted to try Hekaton on a (somewhat) cheap machine in order to understand what kind of performance we can get on (somewhat) cheap hardware.
The test was a simple insert of 10,000 entities with the following structure (a sketch of a matching table definition follows the list):
customerId: int
attribute1: varchar(100)
attribute2: varchar(100)
attribute3: varchar(100)
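For reference, here’s a minimal sketch of how such a table can be declared as memory-optimized; the column definitions match the structure above, while the database name, filegroup setup, and bucket count are assumptions of mine:

```sql
-- a memory-optimized filegroup is needed first (names and paths are hypothetical)
ALTER DATABASE HekatonTest ADD FILEGROUP HekatonFG CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE HekatonTest ADD FILE
    (NAME = 'HekatonContainer', FILENAME = 'D:\Data\HekatonContainer')
    TO FILEGROUP HekatonFG;
GO

CREATE TABLE dbo.Customer
(
    -- hash indexes need a bucket count sized to the expected number of rows
    customerId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 16384),
    attribute1 varchar(100) NULL,
    attribute2 varchar(100) NULL,
    attribute3 varchar(100) NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY); -- or SCHEMA_AND_DATA for durable data
```

The DURABILITY option is what switches between the two configurations (SO and SAD) compared below.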
Here are the results:

SO = Schema_Only
SAD = Schema_And_Data
NC = Native Compilation
We didn’t test Bulk Insert because we were interested in understanding how fast we can be on singleton inserts. The insert was done by calling a stored procedure made of only one INSERT command.
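For reference, here’s a minimal sketch of what that procedure can look like in its natively compiled form (the procedure and parameter names are my assumption):

```sql
CREATE PROCEDURE dbo.InsertCustomer
    @customerId int,
    @attribute1 varchar(100),
    @attribute2 varchar(100),
    @attribute3 varchar(100)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- the whole body is just the single INSERT used in the test
    INSERT INTO dbo.Customer (customerId, attribute1, attribute2, attribute3)
    VALUES (@customerId, @attribute1, @attribute2, @attribute3);
END;
```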
As you can see, using Hekaton at its full potential (SO + NC) improved performance by nearly a factor of three. Not bad at all! (In my tests this kind of performance is as fast as some NoSQL databases!)
Dropping Native Compilation increased the elapsed time a little, but that’s the price you have to pay if you want, for example, to use SEQUENCEs or other features not yet supported by natively compiled procedures (I’m using SQL Server 2014 CTP1 right now).
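For example, a classic interpreted procedure can use a SEQUENCE to generate the key while still accessing the memory-optimized table (the so-called interop mode), something a natively compiled procedure cannot do in CTP1. A minimal sketch, with a hypothetical sequence name:

```sql
CREATE SEQUENCE dbo.CustomerIdSequence AS int START WITH 1 INCREMENT BY 1;
GO

-- interpreted T-SQL can access the memory-optimized table (interop),
-- just without the speed-up of native compilation
CREATE PROCEDURE dbo.InsertCustomerInterop
    @attribute1 varchar(100),
    @attribute2 varchar(100),
    @attribute3 varchar(100)
AS
BEGIN
    INSERT INTO dbo.Customer (customerId, attribute1, attribute2, attribute3)
    VALUES (NEXT VALUE FOR dbo.CustomerIdSequence,
            @attribute1, @attribute2, @attribute3);
END;
```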
The real surprise was that using the Schema_And_Data option for a Hekaton table basically removed all the performance gains, making the difference between Hekaton tables and “classic” tables almost invisible.
I really wanted to understand why, since I was not expecting this at all.
Analyzing the wait stats showed that we were hitting a bottleneck at the transaction log level.
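For reference, checking for this kind of bottleneck can be done with a simple query over the waits DMV; this is a generic sketch, not necessarily the exact query I used:

```sql
-- reset the cumulative counters before each test run
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

-- ...run the insert test, then look at the top waits
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
```

A WRITELOG wait dominating the list is the typical sign that the transaction log is the limiting factor.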
Here are the wait stats using SO:

and here are the SAD (in every sense!) wait stats:

Now everything is much clearer. Putting log and data on the same disk is STILL not a good idea. So what are the things we’ve learned from these tests?
Well, first of all, Hekaton can really change the game, as its performance is really impressive and can compete with some of the fastest NoSQL databases (Couchbase, for example). As said before, I’ll write a post on that in the future, but first I want to do additional tests and wait for Hekaton to reach RTM status.
Second, these tests show that there is no magic going on here: one can’t simply hope to solve one’s performance problems just by enabling Hekaton. A good system architecture is still needed, and a good understanding of the engine is (as always) a must in order to find the bottleneck and fix the problem.
Unfortunately, due to time and hardware restrictions, I wasn’t able to do additional tests to show that the bottleneck can be removed, but I hope to do them in the near future so that I can update this post. In the meantime, if anyone wants to run the same test, just drop me an email and I’ll share the very simple tool I created.
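If you want to try removing the bottleneck yourself, the first thing I’d test is moving the transaction log to a physical disk separate from the data files. A minimal sketch, with hypothetical file names and paths (the database has to be taken offline and the log file physically moved for the change to take effect):

```sql
-- check where the files currently live
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'HekatonTest');

-- point the log file at a different physical disk
ALTER DATABASE HekatonTest
MODIFY FILE (NAME = HekatonTest_log, FILENAME = N'E:\Log\HekatonTest_log.ldf');
```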