My name is Harsh Deep Singh, and I am a SQL Server Premier Field Engineer with Microsoft. I've been working on SQL Server for quite some time, and the intention behind starting this blog is to try and share my knowledge with the SQL Server community. Any feedback or suggestions on the blog are both welcome and solicited. Please feel free to drop me a line on email@example.com anytime.
Disclaimer: The information in this weblog is provided “AS IS” with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my opinion. Feel free to challenge me, disagree with me, or tell me I’m completely nuts in the comments section of each blog entry, but I reserve the right to delete any comment for any reason whatsoever (abusive, profane, rude, or anonymous comments) – so keep it polite, please.
Before you read this, I would strongly recommend reading Part 1 first.
The memory architecture evolved in a big way from SQL 2000 to 2005. Basically, in 2000, all we had was the procedure cache (used to cache compiled plans, execution contexts, etc.) and the buffer pool. However, in 2005, with the increase in the variety of memory consumers and the addition of new features, the number of caches increased dramatically. In this part of the memory series, I'll try to do a deep dive into the caching mechanism of SQL Server.
Common Caching Framework:
SQL Server implements a common caching framework. The highlight of this framework is its inbuilt ability for both lifetime and visibility management of entries. Both lifetime and visibility are controlled by a "Clock Algorithm".
Under this algorithm, there are "Clock Hands" that sweep the cache at regular intervals. Every time a clock hand steps on a not-in-use entry, it decreases that entry's cost by some amount. If the entry is not in use and its cost has reached zero, the clock hand makes the entry invisible and then attempts to remove it from the cache. The lifetime of an entry is managed by a reference count embedded in the Clock Entry Info class; once this count goes to 0, the entry is destroyed.
There are 2 types of clock hands, internal and external. An external clock hand is moved by the resource monitor (RM) when the whole process gets into memory pressure. The internal clock hand is used to control the size of a cache relative to other caches. You can think of the internal clock hand as a way to put a max cap on a single cache. If this mechanism didn’t exist, it would be possible for a single cache to push the whole process into different types of memory pressure. To avoid this type of situation, the internal clock hand starts moving after the framework predicts that the procedure cache’s max cap is reached.
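You can watch both hands at work through the sys.dm_os_memory_cache_clock_hands DMV. A sketch (exact column sets can vary slightly between builds):

```sql
-- One row per cache per hand (external and internal).
-- rounds_count shows how many full sweeps each hand has completed,
-- and removed_all_rounds_count how many entries it has evicted in total.
SELECT name,
       type,
       clock_hand,                 -- which of the two hands this row describes
       clock_status,               -- whether the hand is currently running
       rounds_count,
       removed_all_rounds_count
FROM sys.dm_os_memory_cache_clock_hands
WHERE rounds_count > 0
ORDER BY removed_all_rounds_count DESC;
```

A steadily climbing removed_all_rounds_count on the external hand is a hint that the whole process has been under memory pressure.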
Visibility of an entry is implemented by a pin count embedded in the Clock Entry Info class. Keep in mind that pin count and reference count are different mechanisms: the reference count manages lifetime, while the pin count manages visibility. For an entry to be visible, its pin count must be larger than 0, and the entry must be non-dirty and not marked for single usage. A pin count of 1 means that the entry is visible and is currently not in use.
The Procedure Cache:
The procedure cache is, in most cases, the biggest consumer of SQL Server memory after the buffer pool. In this section, I'll discuss the architecture and workings of the procedure cache in detail.
The main types of objects that can be stored in the procedure cache are compiled plans, execution contexts, and algebrizer trees. To see where each comes from, consider what happens when you send a query to SQL Server: the batch is parsed and sent to the Algebrizer, which produces an algebrizer tree. The query optimizer takes the algebrizer tree as input and produces a compiled plan as output. Finally, in order to execute the compiled plan, an execution context must be created to track runtime state.
The caching infrastructure (of which Procedure cache is a part) exposes objects called cachestores. A cachestore provides a common set of memory allocation interfaces that are reused for many different memory consumers inside SQL Server. The procedure cache is split into several cachestores:
Stored procedures, functions, and triggers
Ad hoc and prepared queries
View, default, and constraint algebrizer trees
Extended stored procedures
The object cachestore is used to cache compiled plans and related objects for stored procedures, functions, and triggers. The SQL cachestore holds plans for ad hoc and prepared queries. The algebrizer and Xproc cachestores hold algebrizer trees (for views, defaults, and constraints only) and extended stored procedure objects, respectively. The object and SQL cachestores are generally much larger than the other two.
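The relative size of these four cachestores is easy to check with sys.dm_os_memory_cache_counters. A sketch for SQL 2005/2008 (later versions replace the single/multi page columns with a single pages_kb column):

```sql
-- Approximate size and entry count of the four procedure-cache cachestores.
SELECT type,
       name,
       SUM(single_pages_kb + multi_pages_kb) AS total_kb,
       SUM(entries_count)                    AS entries
FROM sys.dm_os_memory_cache_counters
WHERE type IN ('CACHESTORE_OBJCP',   -- procs, functions, triggers
               'CACHESTORE_SQLCP',   -- ad hoc and prepared plans
               'CACHESTORE_PHDR',    -- algebrizer trees
               'CACHESTORE_XPROC')   -- extended stored procedures
GROUP BY type, name
ORDER BY total_kb DESC;
```

On most servers you should see CACHESTORE_SQLCP and CACHESTORE_OBJCP dominating, as described above.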
In each cachestore, lookup is managed using one or more hash tables. For example, in the SQL cachestore, each plan is assigned a hash value, and the plans are divided into hash buckets based on that value. Each bucket holds zero or more cached compiled plans, and each compiled plan may contain cached execution contexts and cached cursors.
Multiple plans may reside in the same hash bucket, but SQL Server limits the number of entries in each cachestore in an attempt to prevent excessive plan lookup times caused by long hash chain lengths. The SQL and object cachestores are each permitted to grow to approximately 160,000 entries on 64-bit servers, and approximately 40,000 entries on most 32-bit servers.
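You can inspect hash chain lengths directly via sys.dm_os_memory_cache_hash_tables. A sketch:

```sql
-- Hash-table health for the two big cachestores: a high average or
-- maximum bucket length means plan lookups are walking long hash chains.
SELECT type,
       buckets_count,
       buckets_in_use_count,
       buckets_avg_length,
       buckets_max_length
FROM sys.dm_os_memory_cache_hash_tables
WHERE type IN ('CACHESTORE_OBJCP', 'CACHESTORE_SQLCP');
```

If buckets_max_length is large while buckets_avg_length stays small, a handful of buckets are overloaded, often by many near-identical ad hoc plans hashing close together.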
As discussed earlier, the "Clock Algorithm" is used to delete old entries from a cachestore based on an LRU algorithm. From SQL 2005 onwards, the execution contexts are treated as part of the compiled plans, rather than being treated as separate cache objects. Every time a clock sweep passes a plan, the plan voluntarily releases half of its cached execution contexts, even if the plan itself is going to remain in the cache.
The background thread that ages plans out of the cache in SQL Server 2005 is called the resource monitor thread.
Cache Lookups and Plan Reuse:
The main purpose of cachestores (as if it wasn't obvious already) is to provide for reuse of the objects that they cache. The lookup method varies according to the object in question. For a stored procedure (CACHESTORE_OBJCP), the Database ID and the Object ID are used to look up the stored procedure plan(s).
For ad hoc and prepared queries, the text of the query is processed through a hash function. The hash function returns an integer value that is then referred to as the "object ID" of the SQL compiled plan object. SQL 2005 hashes the entire query text. SQL 2000 hashed only the first 8 KB, so two long queries that were identical for the first 8 KB but differed later, say in their WHERE clauses, could end up with the same hash.
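You can see which ad hoc and prepared plans are actually being reused by joining the cached plans to their query text. A sketch:

```sql
-- Cached ad hoc and prepared plans, with use counts showing reuse.
SELECT cp.usecounts,          -- how many times this plan has been reused
       cp.objtype,            -- 'Adhoc' or 'Prepared'
       cp.size_in_bytes,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.cacheobjtype = 'Compiled Plan'
  AND cp.objtype IN ('Adhoc', 'Prepared')
ORDER BY cp.usecounts DESC;
```

A cache full of usecounts = 1 ad hoc plans is a classic sign that plans are being compiled but never reused.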
A number of things are taken into account when determining whether a plan will be reused, including the session's SET options, the database context, and the effective user ID.
If you see multiple plans in cache for what appears to be the same query, you can determine the key differences between them by comparing the sys.dm_exec_plan_attributes DMF output for the two plans. The plan attributes that must match in order for reuse to occur will have an is_cache_key column value of 1.
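A sketch of that comparison; the two plan handle variables are placeholders you would populate from sys.dm_exec_cached_plans for the two entries in question:

```sql
DECLARE @plan_handle1 varbinary(64), @plan_handle2 varbinary(64);
-- Assign both handles here, e.g. from:
-- SELECT plan_handle, usecounts FROM sys.dm_exec_cached_plans ...

-- Only the attributes that participate in the cache key matter for reuse.
SELECT attribute, value
FROM sys.dm_exec_plan_attributes(@plan_handle1)
WHERE is_cache_key = 1;

SELECT attribute, value
FROM sys.dm_exec_plan_attributes(@plan_handle2)
WHERE is_cache_key = 1;
```

Diff the two result sets; any attribute whose value differs (set_options is the usual suspect) explains why two separate plans were cached.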
Flushing the Procedure Cache:
The most common method of flushing the contents of the procedure cache is to run DBCC FREEPROCCACHE. In addition, ALTER DATABASE or DROP DATABASE commands, closing a database (due to the AUTO_CLOSE database option), and changes to certain sp_configure options can all implicitly free all or portions of the procedure cache.
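For completeness, a sketch of the explicit options (use with care on a production server, since every subsequent query has to recompile):

```sql
-- Flush the entire procedure cache.
DBCC FREEPROCCACHE;

-- A narrower alternative: clear only the ad hoc/prepared plan cachestore,
-- leaving stored procedure plans in place.
DBCC FREESYSTEMCACHE('SQL Plans');
```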
In the next part, we will focus on the troubleshooting aspect of memory.