[Updated 7/16/2014: Additional blog post on this feature from the ASP.NET team]

Since the earliest days of “classic” ASP through present-day ASP.NET 4.5 Web Forms, many developers have leveraged a feature called ASP.NET Session State as a means to persist short-lived, per-user data.  This feature allows the developer to store and retrieve values for individual users as they navigate around a web application.  The session data is automatically persisted to and retrieved from a backing store and eventually expires. 

The Problem

There are alternatives to using Session State, which are outside the scope of this blog.  For applications that require Session State there are also pitfalls, the most prevalent being exclusive access to session data on a per-user, per-request basis.  This exclusive access is a means of maintaining the consistency of Session State and was implemented this way by design.  If you are interested in the gory details of this design, they are explained here in the section entitled “Locking Session State Data”.  Session State is most common in ASP.NET Web Forms applications and, to a lesser degree, in ASP.NET MVC via TempData (the POST-redirect-GET pattern, for example). 

Web applications that are primarily server side in nature work well with Session State.  By contrast, client-script-heavy web applications where many concurrent requests are made to resources using Session State will quickly find that the pessimistic locking nature of Session State becomes a bottleneck.  Regardless, with either type of web application the next bottleneck often becomes the durable backing store that holds session data.  There are ways to optimize access to Session State, such as marking some requests as not needing session or as read-only, but ultimately bottlenecks appear once the load on the application grows.

The Situation Today

Despite these considerations, the usage of ASP.NET Session State remains common even today.  Across the field I continue to see many customers with large external-facing web applications that use Session State.  For larger enterprise customers, internally facing ASP.NET Web Forms apps are even more common.  For these customers, selection of the Session State store provider is critical.  The provider essentially marshals the contents of the Session item dictionary to and from a durable backing store via serialization and deserialization (usually to a binary BLOB).  There are many providers available from both Microsoft and third parties.  Today Microsoft provides the following session store providers for on-premises ASP.NET deployments:

| Session Provider | Can be Highly Available? | Can be Geo-Redundant? | Can be used in Web Farms? | Performance |
| --- | --- | --- | --- | --- |
| In-Proc | No | No | No | Excellent |
| State Server | No | No | Yes | Good |
| SQL Server (Traditional) | Yes | Yes | Yes | Fair |
| AppFabric Caching | Yes | No | Yes | Good |
| SQL Server (In-Memory) | Yes* | Yes | Yes | Excellent |

* Requires in-memory tables to be marked as durable for schema and data
 
If your application requires Session State to be highly available and to support deployment across a web farm, your choices for providers from Microsoft are limited to SQL Server or AppFabric Caching.  SQL Server has the added advantage of being able to span datacenters to provide geo-redundancy, while AppFabric is limited to a single datacenter.  In practice both of these solutions work well, but the traditional SQL Server implementation often hits a bottleneck due to contention on a single disk-based table.  The result of this contention is blocking, deadlocks, and other unpleasant behavior that increases the time it takes to store and retrieve session data.  In addition, there can be issues when older session data is purged, due to lock escalation and latch contention from the delete operations.
 

A New Option with SQL Server 2014

 
To address performance issues with the older SQL Server provider, the SQL Server team recently released a new provider, “Microsoft ASP.NET Session State provider for SQL Server In-Memory”, as a NuGet package.  Proof of the magnitude of the performance improvement is available in this case study detailing an ASP.NET application using Session State that handles 250,000 requests per second!  The new implementation makes use of the memory-optimized tables feature of SQL Server 2014, a.k.a. “Hekaton”, and as such requires the 2014 version of the product.  So how does this provider improve upon the older SQL Server session state provider?
 
  1. Session storage is persisted via memory-optimized tables rather than disk-based tables.  Memory-optimized tables are fully transactional, durable, and ideal for heavy access patterns such as storing session state.  These types of tables use latch-free data structures and optimistic, multi-version concurrency control.
  2. To further improve performance, natively compiled stored procedures are used to retrieve and store session data.  Essentially these are a new type of stored procedure that is compiled down to native machine code.
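For a feel for what these two features look like in T-SQL, here is a minimal sketch. The table and procedure names are hypothetical and are not the provider's actual schema:

```sql
-- Illustrative only: a memory-optimized table and a natively compiled
-- procedure in SQL Server 2014. Names are placeholders, not the provider's.
CREATE TABLE dbo.DemoSessions
(
    -- In 2014, index key string columns require a BIN2 collation.
    SessionId NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    SessionItem VARBINARY(7000) NULL,
    Expires DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- A natively compiled stored procedure: T-SQL compiled to machine code.
CREATE PROCEDURE dbo.DemoGetSession @id NVARCHAR(88)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT SessionItem, Expires
    FROM dbo.DemoSessions
    WHERE SessionId = @id;
END;
```

Note the constraints the feature imposes even in this tiny sketch: the hash index must be sized up front via BUCKET_COUNT, and the native procedure requires SCHEMABINDING, an atomic block, and an explicit isolation level.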

Both of these SQL Server 2014 features address the key performance and contention issues that exist with the disk-based implementation of the older, traditional SQL Server provider.  Installation and configuration of this provider is relatively straightforward.  Using the NuGet Package Manager Console, the provider can be installed via:

Install-Package Microsoft.Web.SessionState.SqlInMemory

Within your application, this NuGet package will add an assembly reference to Microsoft.Web.SessionState.SqlInMemory as well as a script file, ASPStateInMemory.sql, that contains the DDL necessary to set up the SQL Server 2014 session state database.  There are several items in the SQL script you will want to review and most likely modify:

  1. The name of the database which is ASPStateInMemory by default.
  2. Path to the database PRIMARY file group.
  3. Path to the MEMORY_OPTIMIZED_DATA file group for the database.
  4. Sizing information for the BUCKET_COUNT based on the expected size of items in session.
  5. Decision to make the session tables durable or non-durable (related to whether you need session to be highly available or not)
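The database-creation portion of such a script follows the general shape below. This is an illustrative sketch, not the generated script itself; the file paths are placeholders, and the generated ASPStateInMemory.sql is authoritative:

```sql
-- Illustrative shape of the database setup (items 1-3 above).
-- The actual DDL ships in ASPStateInMemory.sql; paths here are placeholders.
CREATE DATABASE ASPStateInMemory
ON PRIMARY (
    NAME = ASPStateInMemory_Data,
    FILENAME = 'D:\SQL\Data\ASPStateInMemory_Data.mdf'
),
-- Memory-optimized data lives in a special filegroup whose FILENAME
-- is a directory, not an .mdf file.
FILEGROUP ASPStateInMemory_Xtp CONTAINS MEMORY_OPTIMIZED_DATA (
    NAME = ASPStateInMemory_Xtp,
    FILENAME = 'D:\SQL\Data\ASPStateInMemory_Xtp'
)
LOG ON (
    NAME = ASPStateInMemory_Log,
    FILENAME = 'D:\SQL\Log\ASPStateInMemory_Log.ldf'
);
```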

Item 4 above will require some analysis of your existing SQL Server session database and may be as simple as computing the average DATALENGTH() for the BLOB column in the traditional ASP.NET SQL Server session provider schema.  For InProc or StateServer usage, determining the average size of the session items is more difficult, but it could be accomplished by capturing memory dumps of the w3wp.exe or StateServer process and reviewing the size and number of items in the session dictionary.  There are also perfmon counters for InProc and StateServer that report the number of items in session.  As always, the best advice is to test and tune.
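If you are coming from the traditional SQL Server provider, a query along these lines can estimate serialized session sizes. It assumes the classic ASPState schema with its SessionItemShort/SessionItemLong columns; adjust the names if your schema differs:

```sql
-- Rough sizing query against the traditional ASPState session database.
-- Assumes the classic schema: SessionItemShort (varbinary) holds small
-- sessions, SessionItemLong (image) holds large ones.
SELECT
    COUNT(*) AS SessionCount,
    AVG(COALESCE(DATALENGTH(SessionItemShort), DATALENGTH(SessionItemLong))) AS AvgItemBytes,
    MAX(COALESCE(DATALENGTH(SessionItemShort), DATALENGTH(SessionItemLong))) AS MaxItemBytes
FROM dbo.ASPStateTempSessions;
```

Run this at peak load for the most representative numbers, since session counts and sizes vary through the day.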

Making In-Memory Session Highly Available

By default, the memory-optimized tables created by the SQL Server 2014 In-Memory session provider are marked as non-durable.  This means that while changes to data in these tables are transactionally consistent, the changes are not logged, meaning that if SQL Server is restarted, the server is rebooted, or any form of failover occurs (FCI or AlwaysOn), then all session data is lost.  The reason for this default is performance.  To make these memory-optimized tables durable, three changes are required within the ASPStateInMemory.sql script.  There are comments within the script that explain why these changes need to be made.

  1. Modify the SessionItems table as follows.
    1. Change WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
    2. To WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_AND_DATA)
    3. Uncomment the statement (note the comma at the end): Id bigint IDENTITY,
    4. Uncomment the statement (note the comma at the end, and change the 1000000 * 2 expression to a real scalar value based on your needs; read the T-SQL comments that precede this statement to pick a starting value): CONSTRAINT [PK_SessionItems_Id] PRIMARY KEY NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 2000000),
  2. Modify the Sessions table as follows.
    1. Change WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
    2. To WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_AND_DATA)

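Put together, the durable form of such a table looks roughly like this. The table below is a hypothetical sketch to show the shape of the change; the real DDL lives in ASPStateInMemory.sql:

```sql
-- Non-durable (the default): schema survives a restart, data does not.
--   WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)

-- Durable: schema AND data are logged and survive restart/failover.
-- Illustrative table only; the provider's actual DDL is in ASPStateInMemory.sql.
CREATE TABLE dbo.DemoSessionItems
(
    -- A durable memory-optimized table requires a primary key, which is
    -- why the Id column and PK constraint must be uncommented.
    Id BIGINT IDENTITY NOT NULL,
    SessionId NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL,
    DataChunk VARBINARY(7000) NULL,
    INDEX IX_SessionId NONCLUSTERED HASH (SessionId) WITH (BUCKET_COUNT = 1000000),
    CONSTRAINT PK_DemoSessionItems_Id PRIMARY KEY NONCLUSTERED HASH (Id)
        WITH (BUCKET_COUNT = 2000000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```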
Once these changes are made, we can make this database part of a SQL Server AlwaysOn availability group and the session data will follow us during a failover.  As an added bonus, retry logic exists within the provider so that when either an automatic or manual failover occurs, stale connections in the connection pool won’t result in exceptions being thrown to the end user. 

Please note that even if you leave the tables as non-durable, you can still put your session database into a SQL Server AlwaysOn availability group, but the data in the session tables won’t be available on the replica (only the schema will be).  For some customer workloads this “schema only” replication model is good enough to warrant the performance boost of using non-durable memory-optimized tables.

The simplest highly available topology providing the most bang for the buck for SQL Server In-Memory would resemble the following:

  1. SQL Server 2014 Node 1 on subnet (datacenter) A
  2. SQL Server 2014 Node 2 on subnet (datacenter) B
  3. File share Witness on subnet (datacenter) C

[Diagram: two SQL Server 2014 nodes in datacenters A and B with a file share witness in datacenter C]

This topology would provide geo-redundancy, automatic failover, and sustain the complete loss of connectivity to one of three data centers.  With the dynamic quorum feature of Windows Server 2012 R2 it’s even possible to sustain the loss of connectivity to two data centers automatically (last man standing scenario).
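Setting up such a group is standard AlwaysOn configuration. A heavily abridged sketch follows; the node names, endpoint URLs, and listener IP are placeholders, and prerequisites (WSFC cluster, HADR enablement, database mirroring endpoints, and the initial backup/restore to the secondary) are omitted:

```sql
-- Abridged AlwaysOn sketch; names, URLs, and the listener IP are placeholders.
-- Prerequisites (WSFC setup, HADR enabled, endpoints, initial backup/restore)
-- are omitted for brevity.
CREATE AVAILABILITY GROUP [AGAspNet]
FOR DATABASE [ASPStateInMemory]
REPLICA ON
    N'SQLNODE1-2014' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2-2014' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);
GO

-- The listener name is what the ASP.NET connection string targets.
ALTER AVAILABILITY GROUP [AGAspNet]
ADD LISTENER N'AGAspNet' (WITH IP (('10.0.1.50', '255.255.255.0')), PORT = 1433);
```

Synchronous commit with automatic failover keeps session loss to a minimum, at the cost of commit latency between the two datacenters.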

ASP.NET Configuration

On the ASP.NET web application side, a simple web.config edit is all that is needed to enable the new provider.

<sessionState mode="Custom" customProvider="SqlInMemoryProvider">
  <providers>
    <add name="SqlInMemoryProvider"
         type="Microsoft.Web.SessionState.SqlInMemoryProvider"
         connectionString="Data Source=AGAspNet; Initial Catalog=ASPStateInMemory;Integrated Security=True;" />
  </providers>
</sessionState>

In the snippet above, ‘AGAspNet’ is the SQL Server 2014 AlwaysOn availability group listener name.

A Quick Demonstration

Using an out-of-the-box ASP.NET Web Forms 4.5 application and writing a simple string with a timestamp to Session produces the following data within SQL Server 2014:

[Screenshot: the session row stored in the ASPStateInMemory database]

[Screenshot: the AspStateInMemory database hosted on SQLNode1-2014]

Note the location of our AspStateInMemory database on SQLNode1-2014.  Next we manually fail over the availability group.

[Screenshot: manually failing over the availability group]

Our sessions are now available on SQLNode2-2014 without interruption to our ASP.NET applications.  Simply hitting F5 in the web application retrieves the data from session without any exception being thrown to the client.

[Screenshot: session data served from SQLNode2-2014 after failover]

What About Expired Sessions?

In the older SQL Server session provider, a SQL Agent job was created to delete expired sessions.  With the new provider, a stored procedure, [dbo].[DeleteExpiredSessions], is supplied that must be called by a job you create.  By default the provider assumes a 20-minute session timeout.  Each time a session item is accessed the timeout is reset, keeping the user’s session “alive”.
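One way to wire this up is a SQL Agent job that runs the supplied procedure on a short schedule. The job name and one-minute cadence below are arbitrary choices, not something the NuGet package creates for you:

```sql
-- Illustrative SQL Agent job to purge expired sessions. The job name and
-- one-minute schedule are arbitrary; only dbo.DeleteExpiredSessions is
-- supplied by the provider.
USE msdb;
GO
EXEC sp_add_job      @job_name = N'ASPStateInMemory - Delete Expired Sessions';
EXEC sp_add_jobstep  @job_name = N'ASPStateInMemory - Delete Expired Sessions',
                     @step_name = N'Run DeleteExpiredSessions',
                     @subsystem = N'TSQL',
                     @database_name = N'ASPStateInMemory',
                     @command = N'EXEC dbo.DeleteExpiredSessions;';
EXEC sp_add_schedule @schedule_name = N'Every minute',
                     @freq_type = 4,            -- daily
                     @freq_interval = 1,
                     @freq_subday_type = 4,     -- unit: minutes
                     @freq_subday_interval = 1; -- every 1 minute
EXEC sp_attach_schedule
                     @job_name = N'ASPStateInMemory - Delete Expired Sessions',
                     @schedule_name = N'Every minute';
EXEC sp_add_jobserver
                     @job_name = N'ASPStateInMemory - Delete Expired Sessions';
GO
```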

In Summary

There are many interesting details contained within this new Session State provider, and I encourage you to dig into the code for yourselves; you will find it a wonderful learning experience about the capabilities and restrictions of the In-Memory OLTP (‘Hekaton’) feature of SQL Server 2014.  One particularly clever feature contained in the code revolves around the ability to simulate BLOB storage in memory.  Memory-optimized tables at present do not support BLOB types.  Yet what is a serialized session dictionary other than a potentially large BLOB?  The stored procedures used by the provider split a serialized session into 7000-byte chunks to enable storage of large session items. 
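The chunking idea itself is straightforward. Sketched in plain (non-native) T-SQL against a throwaway table variable, it amounts to slicing the blob with SUBSTRING; the provider does the equivalent inside its natively compiled procedures against its own tables:

```sql
-- Sketch of the 7000-byte chunking idea in plain T-SQL. The provider
-- implements this inside natively compiled procedures.
DECLARE @blob VARBINARY(MAX) =
    CONVERT(VARBINARY(MAX), REPLICATE(CAST('x' AS VARCHAR(MAX)), 20000));
DECLARE @chunkSize INT = 7000,
        @offset    INT = 1,     -- SUBSTRING is 1-based
        @chunkId   INT = 0;

DECLARE @chunks TABLE (ChunkId INT, Chunk VARBINARY(7000));

WHILE @offset <= DATALENGTH(@blob)
BEGIN
    INSERT INTO @chunks (ChunkId, Chunk)
    VALUES (@chunkId, SUBSTRING(@blob, @offset, @chunkSize));
    SET @offset  = @offset + @chunkSize;
    SET @chunkId = @chunkId + 1;
END;

-- A 20,000-byte blob yields chunks of 7000, 7000, and 6000 bytes.
SELECT ChunkId, DATALENGTH(Chunk) AS Bytes FROM @chunks;
```

Reassembly on read is the reverse: fetch the chunks in ChunkId order and concatenate them before deserializing the session dictionary.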

The astute reader may have noticed that in my screenshots there were no rows in the [SessionItems] table, but rather a single row in the [Sessions] table.  If my session contents had exceeded 7000 bytes, you would have seen “spill over” rows in the [SessionItems] table.  I hope to delve into this further in a future post, as this approach has many other potential applications outside of ASP.NET session storage.

The natively compiled stored procedures are also worth a look, as there are clever means of dealing with restrictions such as the lack of CASE statement support within a natively compiled stored procedure.  This restriction is due to the fact that once the procedure is compiled to native code, branching of that form is not allowed!

Key considerations and questions if you are thinking of using this new provider:

  1. Memory-optimized tables are backed by memory!  Do your SQL Servers have enough memory to hold all your session data during peak load?
  2. By default the tables are non-durable.  Consider your high-availability requirements carefully.  Either way, durable or non-durable, the performance will easily exceed what you likely have today with the traditional SQL Server session provider and schema.
  3. Read the comments in the SQL file and tune the BUCKET_COUNT for the nonclustered hash indexes.  A snippet from the DDL statement for the SessionItems table follows.

    [Image: DDL snippet from ASPStateInMemory.sql showing the BUCKET_COUNT for the SessionItems hash index]
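As a worked example of that tuning (all numbers here are hypothetical): suppose you expect 100,000 concurrent sessions averaging 21 KB each. At 7000 bytes per chunk that is roughly 3 SessionItems rows per session, or about 300,000 rows. A BUCKET_COUNT of one to two times the expected unique key count is a common starting point, and SQL Server rounds the value up to the next power of two internally:

```sql
-- Hypothetical sizing: ~100,000 sessions x ~3 chunks each = ~300,000 rows.
-- SQL Server rounds BUCKET_COUNT up to the next power of two (here 524,288).
CONSTRAINT [PK_SessionItems_Id] PRIMARY KEY NONCLUSTERED HASH (Id)
    WITH (BUCKET_COUNT = 300000)
```

Undersized bucket counts cause long hash chains and slow lookups; oversized ones waste memory, so revisit the number as your session volume grows.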

Good luck and please share your experiences using this new provider in the comments!

-Ken