About three weeks ago, Kimberly approached me because she was looking for a way to find a plan in the cache that has been invalidated, such that the next execution will trigger a recompile.

One example script to reproduce the conditions would include these steps:


  1. Create a stored procedure
  2. Run it and it doesn’t use an index
  3. Create an index <<<-- this INVALIDATES the proc but the plan is NOT evicted / flushed from the cache
  4. Run it again and it uses the index. The plan (watching with showplan) changes, BUT sys.dm_exec_procedure_stats still shows the same plan_handle, etc., AND it shows 2 executions even though the second one recompiled
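The steps above can be sketched in T-SQL as follows. This is a minimal repro under assumptions of mine: the table, procedure, and index names (dbo.Orders, dbo.GetOrdersByCustomer, IX_Orders_CustomerID) are hypothetical, not from the original exchange.

```sql
-- Step 1: a table and a stored procedure (hypothetical names)
CREATE TABLE dbo.Orders (OrderID int IDENTITY PRIMARY KEY, CustomerID int, OrderDate datetime);
GO
CREATE PROCEDURE dbo.GetOrdersByCustomer @CustomerID int
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO

-- Step 2: first execution; no suitable index exists, so the plan scans
EXEC dbo.GetOrdersByCustomer @CustomerID = 42;

-- Step 3: creating the index invalidates the plan, but does NOT evict it
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- Step 4: this execution recompiles and the new plan uses the index
EXEC dbo.GetOrdersByCustomer @CustomerID = 42;

-- Yet the DMV still shows the same plan_handle and cached_time,
-- with execution_count = 2
SELECT plan_handle, execution_count, cached_time
FROM sys.dm_exec_procedure_stats
WHERE object_id = OBJECT_ID('dbo.GetOrdersByCustomer');
```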

She also added the following information:

When a plan is truly evicted (DBCC FREEPROCCACHE | DBCC FLUSHPROCINDB | sp_recompile | changing the DB compatibility level | changing one of quite a few sp_configure server options [which actually evict ALL plans from the entire cache] | etc.) then the counts all start over again. And that subsequent “recompile” isn’t really one at all – it’s a new compile, because the plan was evicted.
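For contrast, the eviction case can be observed like this (a sketch; dbo.GetOrdersByCustomer is a hypothetical procedure name carried over from my earlier example):

```sql
-- sp_recompile marks the procedure so its cached plan is dropped;
-- the next execution is a brand-new compile, not a recompile
EXEC sp_recompile N'dbo.GetOrdersByCustomer';
EXEC dbo.GetOrdersByCustomer @CustomerID = 42;

-- The counters have started over: a new plan_handle, a new cached_time,
-- and execution_count back at 1
SELECT plan_handle, execution_count, cached_time
FROM sys.dm_exec_procedure_stats
WHERE object_id = OBJECT_ID('dbo.GetOrdersByCustomer');
```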

But, for plans that have been invalidated, whatever caused the plan to stay in cache while becoming invalid will also be the reason reported for the subsequent recompilation (when the next execution occurs). This is what shows up in Extended Events as sqlserver.sql_statement_recompile, AND in SQL Trace.
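One way to watch those recompiles happen, as she describes, is an Extended Events session on that event. A minimal sketch (the session name is mine; the event exposes a recompile_cause field that will report the schema change):

```sql
-- Capture statement-level recompiles and their cause into a ring buffer
CREATE EVENT SESSION [TrackRecompiles] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile (
    ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackRecompiles] ON SERVER STATE = START;
```

Note, though, that this only fires at recompile time – which is exactly her point: it tells you after the fact, not before.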

But – what I want to know – is the list of plans CURRENTLY in the cache that are not really valid and are going to recompile… BEFORE THEY DO.

And she hoped for a simple answer: “There’s got to be a flag somewhere??”

Unfortunately, for years there has been a misconception around how SQL Server treats these cases. It all comes down to understanding that when such an event occurs (i.e., an index is created or dropped, or any other part of the schema is changed), no cached plans are marked for recompilation or marked as invalid.

What actually happens is that the next user who tries to execute the same query after the index was created (or any other schema change occurred), and tries to reuse the cached plan, will be forced to recompile it, because the schema of one of the objects referenced in it has changed. That “has the schema changed?” check is made at execution time, and it compares the version of the object at the time the plan was compiled (information which is stored in the cached plan itself) with the current version of the same object in the database.

Unfortunately, the metadata version of an object stored in a plan is not something we surface via any DMV, built-in function, hidden pseudo-table, or any other T-SQL construct.

I mean, the object version is based on the timestamp stored in the modified column of the sysschobjs system table, but we don’t expose that value as it was persisted in the cached plan.

So, you could see one of the values being compared, but not the other.
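To illustrate which side is visible: the current modification timestamp of an object is surfaced through sys.objects. A sketch (procedure name is my hypothetical example from above):

```sql
-- The database side of the comparison is visible:
-- sys.objects.modify_date reflects the modified timestamp kept in sysschobjs
SELECT name, object_id, modify_date
FROM sys.objects
WHERE object_id = OBJECT_ID('dbo.GetOrdersByCustomer');

-- ...but the version recorded inside the cached plan at compile time is not
-- exposed anywhere, so the comparison itself cannot be reproduced in T-SQL.
```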

Hope this information is useful.