Hello folks, welcome to a post focusing on one of the future features of SQL Server. SQL Server 2012 is barely a year old, yet there is already talk of a feature due to ship in the next major release of SQL Server. The current name of this feature is Hekaton, which is Greek for one hundred. I believe the reason for this name is the performance boost that one online gaming company experienced when working with Microsoft on a technology preview of Hekaton.
Hekaton is a new in-memory technology, and the information was first shared at PASS 2012. Hekaton has been developed through a collaborative effort between Microsoft Research and the SQL Server product team. It will be integrated into SQL Server as part of the xVelocity in-memory suite of features. When I was at SQLBits 2012, the ColumnStore Index was known as exactly that, the ColumnStore Index, as opposed to xVelocity, so I think it’s just a name that the marketing chaps at Microsoft have come up with. Hekaton delivers a noticeable performance increase (between 10x and 30x) without code changes or hardware upgrades, and for me, that’s a real big win. There’s nothing worse than reading about some exciting new feature only to realize that it’s going to require some major code re-work, which makes it so much harder to convince the Management that it’s worth the hassle to upgrade. The Hekaton team consists of Justin Levandoski, Sudipta Sengupta, Paul Larson, David Lomet, Mike Zwilling, Adrian Birka, and Cristian Diaconu.
Hekaton accelerates performance through two key improvements: Multi-Version Concurrency Control (MVCC), and a new indexing system called the Bw-tree. In this post I will focus on Multi-Version Concurrency Control (a future post will focus on the second part).
When tables are stored in memory, the data structure is very different from the data structure on disk. The challenge for the team was to come up with a design for an in-memory database engine that delivered performance improvements and scaled on multicore systems. The solution they found was to implement a latch-free technique for concurrency control. Latches are used internally to synchronize access to the in-memory data structure, which prevents data corruption when users try to update the data structure concurrently. Clearly this can cause bottleneck issues for databases that require high performance and run on multicore servers.
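To make the latch-free idea concrete, here is a toy Python sketch of the usual pattern: instead of latching the structure, a writer builds a new copy and installs it with a compare-and-swap, retrying if it loses the race. Everything here (the `AtomicRef` helper included) is invented for illustration; a real engine like Hekaton relies on a hardware compare-and-swap instruction, not a Python lock.

```python
import threading

class AtomicRef:
    """Toy atomic reference simulating a hardware compare-and-swap.
    The internal lock only stands in for the atomicity of the CPU instruction."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Succeeds only if nobody has replaced the value since we read it.
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

# Latch-free update loop: each writer copies the current state, modifies
# the copy, and retries the CAS until it wins. No latch is ever held.
head = AtomicRef({"qty": 10})

def update_qty(delta):
    while True:
        current = head.get()
        new_version = dict(current)   # copy-on-write: never mutate in place
        new_version["qty"] += delta
        if head.compare_and_swap(current, new_version):
            return new_version

update_qty(5)
print(head.get()["qty"])  # 15
```

The point of the retry loop is that a losing writer simply re-reads and tries again, so no thread ever blocks holding a latch, which is what lets the design scale across many cores.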
Although a latch-free technique is harder to develop, once a robust design has been implemented it can provide high performance and still ensure data integrity. Hekaton implements an optimistic Multi-Version Concurrency Control (MVCC). An MVCC differs from a Single-Version Concurrency Control (SVCC) in that an SVCC locks and over-writes the current data instead of versioning it, an approach that does not provide the scalability required for high-performance systems. An MVCC, however, updates the data by marking the older version as obsolete and then adding a newer version. This means that there are multiple versions of the data but only one of them is the latest, thereby isolating read-only transactions from updates.
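The versioning idea can be sketched in a few lines of Python. This is a toy model I have made up for illustration (the class names and the timestamp scheme are mine, not Hekaton's): each update adds a new version stamped with the time it became valid, and a reader walks the chain to find the version that was current at its read time.

```python
from dataclasses import dataclass
from typing import Optional

INF = float("inf")

@dataclass
class Version:
    value: int
    begin_ts: float                    # time this version became valid
    end_ts: float = INF                # INF means "still the latest"
    prev: Optional["Version"] = None   # link to the older version

class VersionedRecord:
    """Toy multi-version record: updates never overwrite, they add a version."""
    def __init__(self, value, ts):
        self.latest = Version(value, ts)

    def update(self, value, ts):
        old = self.latest
        old.end_ts = ts                # mark the old version obsolete
        self.latest = Version(value, ts, prev=old)

    def read(self, ts):
        # Walk back until we find the version valid at time ts.
        v = self.latest
        while v and not (v.begin_ts <= ts < v.end_ts):
            v = v.prev
        return v.value if v else None

rec = VersionedRecord(100, ts=1)
rec.update(200, ts=5)
print(rec.read(3))   # 100 — a reader at ts=3 still sees the old version
print(rec.read(7))   # 200 — a reader at ts=7 sees the new one
```

Notice that the update never blocks the reader at ts=3: the old version is still there for it, which is exactly how multi-versioning isolates read-only transactions from updates.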
There are two types of MVCC: optimistic and pessimistic.
- Optimistic MVCC relies on a two-step validation:
- read validation: we need to be sure that the version read at the beginning of a transaction is still the version visible at the end of the transaction.
- phantom avoidance: a version committed by another transaction, one that was not part of the original scan, may have since become visible to the transaction, so repeating the scan must confirm that it returns no new versions.
- Pessimistic MVCC relies on read locks and wait-for dependencies, though not necessarily a single read lock but rather a “many-readers-single-writer” lock. This allows many transactions to read a version, whilst an update can only occur once all reads of a version have completed. A new version can be created via eager updates or deletes (the new version is inserted but not yet committed, so it is not yet visible). This reduces the wait before a version can be updated: once all read locks and wait-for dependencies have been released, the version can be updated, and read locks can be taken again. Validation is not required, as the locks take care of this; however, there is an overhead with the wait-for dependencies (these occur wherever there are read locks). A wait-for dependency is created when a write lock is requested on a version that still has read locks.
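The two optimistic validation steps above can be sketched in Python. This is a deliberately simplified toy, nothing like Hekaton's real internals: the store is just a dict of name to (value, version counter), and the `OptimisticTxn` class is invented for illustration. The transaction records what it read and which scans it ran, then at commit time rechecks both.

```python
class OptimisticTxn:
    """Toy optimistic validation: record reads and scans, recheck at commit."""
    def __init__(self, store):
        self.store = store       # shared dict: key -> (value, version counter)
        self.read_set = {}       # key -> version counter seen at first read
        self.scan_preds = []     # (predicate, keys seen) for phantom checks

    def read(self, key):
        value, ver = self.store[key]
        self.read_set[key] = ver
        return value

    def scan(self, predicate):
        hits = {k for k, (v, _) in self.store.items() if predicate(v)}
        self.scan_preds.append((predicate, hits))
        for k in hits:
            self.read_set[k] = self.store[k][1]
        return hits

    def validate(self):
        # Step 1 — read validation: every version read must still be current.
        for key, ver in self.read_set.items():
            if key not in self.store or self.store[key][1] != ver:
                return False
        # Step 2 — phantom avoidance: repeating each scan must find no new rows.
        for predicate, seen in self.scan_preds:
            now = {k for k, (v, _) in self.store.items() if predicate(v)}
            if now != seen:
                return False
        return True

store = {"a": (10, 0), "b": (99, 0)}
txn = OptimisticTxn(store)
txn.scan(lambda v: v > 50)   # sees {"b"}
store["c"] = (70, 0)         # another transaction inserts a phantom row
print(txn.validate())        # False — the repeated scan now also finds "c"
```

The failed validation would force the transaction to abort and retry, which is the optimistic trade-off: no locks are held while the transaction runs, but conflicting work is thrown away at commit time.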
MVCCs do have a greater overhead than an SVCC; however, the real big gain of implementing an MVCC over an SVCC comes when running long read transactions on an OLTP database. This scenario is not too uncommon when running reporting queries against production databases. An optimistic MVCC design also performs better when factoring in NUMA and hyper-threading than either a pessimistic MVCC or an SVCC.
More info on how concurrency controls work can be found in this paper: http://vldb.org/pvldb/vol5/p298_per-akelarson_vldb2012.pdf, by Larson; Spyros Blanas and Jignesh M. Patel of the University of Wisconsin-Madison; Diaconu; Freedman; and Zwilling.
Hekaton seems like one of those features that is genuinely exciting and will provide a real boost in the OLTP world of databases. For me the real big win is going to be on the long requests when running reports. I get many complaints from users who have to wait so long for their reports to be created (yes, we create reports during the day on the fly, and no, I don’t like it), so with Hekaton we reduce the waiting times not only on those reports, but also improve the responsiveness of the database as a whole, and without having to change one line of code. Good stuff.