Wow, what a great PASS Summit! Great to see so many old friends and meet so many new ones! At PASS I had the opportunity to talk to a number of people about an exciting new feature we have in SQL Server 2014, In-Memory OLTP (codenamed "Hekaton"). We had some great discussions and got to listen to a fantastic keynote delivered by Dr. DeWitt on this technology: http://www.sqlpass.org/summit/2013/PASStv.aspx?watch=aW3-0G-SEj0
For this blog I just wanted to share some of the most common questions and discussions we had around SQL Server 2014 In-Memory OLTP at the Clinic and throughout the PASS Summit. Maybe it will help answer some questions you may have around In-Memory OLTP as well!
Question 1: Is In-Memory OLTP the solution to all my performance concerns?
In-Memory OLTP is designed to address the common, difficult bottlenecks experienced in high-scale/low-latency RDBMS systems. Problems such as latching and locking do not exist on memory-optimized tables, and moving T-SQL into natively compiled stored procedures can reduce CPU time and improve T-SQL execution time. However, there are a number of performance issues In-Memory OLTP cannot address, for example when the bottleneck isn't in the SQL Server engine, or when the code cannot be modified to take advantage of In-Memory OLTP. To help with this analysis we created the AMR tool: http://blogs.technet.com/b/dataplatforminsider/archive/2013/09/17/new-amr-tool-simplifying-the-migration-to-in-memory-oltp.aspx . This is a great first step in evaluating your solution against In-Memory OLTP. So, to summarize, the answer is unfortunately no; it isn't quite that magical fix-everything button.
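To give a feel for what a natively compiled stored procedure looks like, here is a minimal sketch. The table and column names (dbo.SalesOrders, OrderId, Amount) are purely illustrative and assume a memory-optimized table with that shape already exists:

```sql
-- Sketch only: assumes a memory-optimized table dbo.SalesOrders
-- (OrderId INT, Amount MONEY) already exists in the database.
CREATE PROCEDURE dbo.usp_InsertSalesOrder
    @OrderId INT,
    @Amount  MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to machine code, avoiding per-statement
    -- interpretation overhead at execution time.
    INSERT INTO dbo.SalesOrders (OrderId, Amount)
    VALUES (@OrderId, @Amount);
END;
```

Note the required options: NATIVE_COMPILATION, SCHEMABINDING, and an ATOMIC block with an explicit isolation level and language.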
Question 2: When the server fails, do I lose all my data?
The quick answer is NO, memory-optimized tables can still be fully durable! Tables created with DURABILITY = SCHEMA_AND_DATA will log their inserts/updates/deletes to the SQL Server database transaction log (and then to the data files). SQL Server writes changes to the transaction log and, during recovery, reads the data back into memory from the checkpoint files on disk. Memory-optimized tables are supported with all three recovery models for SQL Server databases (full, bulk-logged, and simple). Also of note, because transactions against In-Memory OLTP objects are logged, they integrate with AlwaysOn Availability Groups, so you can take advantage of HA/DR features with memory-optimized tables as well. For further details on how transactional durability is achieved see: http://blogs.technet.com/b/dataplatforminsider/archive/2013/10/11/in-memory-oltp-how-durability-is-achieved-for-memory-optimized-tables.aspx.
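As a sketch, a fully durable memory-optimized table is declared like this (the table name, columns, and bucket count are illustrative, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- Sketch: a fully durable memory-optimized table.
-- Assumes the database has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.SalesOrders
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Amount  MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The alternative, DURABILITY = SCHEMA_ONLY, persists only the table definition; its data is lost on restart, which is why SCHEMA_AND_DATA is the setting to use when you need full durability.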
Question 3: Do you support compression and/or encryption on memory-optimized tables?
No, these features are not currently supported. If you have specific requirements I would love to know more. Another consideration, particularly around compression, is that in many cases the data set you should keep in your memory-optimized tables is the very 'hot' OLTP data. Typically this subset is quite small compared to the overall data residing in the database. You can still use compression features (e.g. page compression or a clustered columnstore index) on standard disk-based SQL Server tables in the same database and move data from memory-optimized tables into those compressed tables via simple INSERT...SELECT T-SQL calls.
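The move from hot, memory-optimized data to a compressed disk-based archive can be sketched as follows. All object names and the cutoff variable are illustrative assumptions, not part of any specific schema:

```sql
-- Sketch: archive cooled-off rows into a page-compressed,
-- disk-based table (all names and the cutoff are illustrative).
CREATE TABLE dbo.SalesOrdersArchive
(
    OrderId INT NOT NULL PRIMARY KEY,
    Amount  MONEY NOT NULL
)
WITH (DATA_COMPRESSION = PAGE);

DECLARE @MaxArchivedOrderId INT = 1000;  -- illustrative cutoff

-- Copy the cold rows out of the memory-optimized table...
INSERT INTO dbo.SalesOrdersArchive (OrderId, Amount)
SELECT OrderId, Amount
FROM dbo.SalesOrders
WHERE OrderId <= @MaxArchivedOrderId;

-- ...then remove them from the hot set to keep memory usage small.
DELETE FROM dbo.SalesOrders
WHERE OrderId <= @MaxArchivedOrderId;
```

This keeps the memory-optimized table small and hot while still letting the archived data benefit from compression on disk.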
Question 4: What are some good and bad “scenarios” or architectures in which to consider In-Memory OLTP?
This could be a long answer, and unfortunately there is some “it depends,” but let me suggest a few guidelines to follow when looking at your application and In-Memory OLTP:
I hope this was helpful and informative. Here are a few more pointers to deep-dive into the technology:
If you have more questions or feedback on In-Memory OLTP in SQL Server 2014 please let me know via this blog or find me on Twitter: @MikeW_SQLCAT
Any idea when SQL 2014 will RTM? I'm looking forward to upgrading from SQL 2008.