Conor Cunningham’s blog on SQL Server, data-driven applications, and pretty much whatever other random stuff he decides to post.
As Microsoft is about to release a new version of SQL Server, I would like to spend some time on upgrades today. We are constantly looking at our product offering to decide what kinds of features to build in the future, and so your input is always interesting. I can't comment on future versions of the product, of course, but I can post up some interesting best practices or lessons learned.
I am mostly interested in upgrades with respect to plan quality (i.e., plan changes caused by upgrades). The kinds of things I'd like to know are:
* Have you had an upgrade experience where plan changes caused a performance issue in your upgraded application? If so, tell me a bit about your application. (Example: We have a banking application that went from SQL 2000 to 2005; it has lots of small transactions and runs on a <basic hardware spec> machine. Our upgrade went well except for one query where we used a hint to get a special plan in SQL 2000, but SQL 2005 gave us a different plan that wasn't as fast.)
* What kinds of processes do you use before deploying a new version of SQL Server into your application? (Example: We have a test version of our product running on a separate QA machine. For each new release/service pack/patch, we run a few tests on it covering things that have caused us problems in the past.)
* Have you used the USE PLAN feature in SQL 2005 to try to control plan changes when going from one build to another? What were your experiences with this feature?
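For anyone unfamiliar with USE PLAN, the idea is to capture the XML showplan for a query on the build where it performs well and then force that plan via a query hint after the upgrade. A rough sketch follows; the table and column names are made up for illustration, and the plan XML is elided since a real one runs to many lines:

```sql
-- Step 1: capture the XML showplan on the known-good build.
SET SHOWPLAN_XML ON;
GO
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20050101';
GO
SET SHOWPLAN_XML OFF;
GO

-- Step 2: after the upgrade, force the captured plan by pasting
-- the saved XML (elided here) into the USE PLAN hint.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20050101'
OPTION (USE PLAN N'<ShowPlanXML ...>...</ShowPlanXML>');
```

The query text in the hint must match the statement it is attached to, so this is typically maintained per-query rather than applied broadly.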
So, please email me at: conorc (at) microsoft.com
I'll post up anything interesting that I think would be generally useful to you all.
To be clear, I am really only asking about plan quality issues at this point, not about upgrades in general.
Architect, SQL Server Core Engine Team
One of the biggest performance issues when moving from SQL 2000 is that the query optimizer in subsequent versions now evaluates duplicate subqueries multiple times rather than evaluating them once.
This causes performance issues in lots of unexpected places. For example, if you use COALESCE with a subquery, the subquery is actually evaluated twice! This also affects BETWEEN, CASE statements, etc.
See the following for details:
And also on Connect Feedback ID: 336002
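To illustrate the COALESCE case the commenter describes: COALESCE(x, y) is defined in terms of CASE, so a scalar subquery passed as an argument appears twice after the expansion. A hypothetical sketch (the Payments table and columns are invented for the example):

```sql
-- A query written with COALESCE around a scalar subquery:
SELECT COALESCE((SELECT SUM(Amount)
                 FROM dbo.Payments
                 WHERE CustomerID = 42), 0);

-- is expanded roughly as the equivalent CASE expression:
SELECT CASE
         WHEN (SELECT SUM(Amount)
               FROM dbo.Payments
               WHERE CustomerID = 42) IS NOT NULL
         THEN (SELECT SUM(Amount)
               FROM dbo.Payments
               WHERE CustomerID = 42)
         ELSE 0
       END;
-- The subquery now appears twice and may be executed twice.
```

Rewriting such queries with ISNULL, or materializing the subquery result into a variable first, are common workarounds.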
Thanks for your comments - I agree that this is a problem. The only clarifying comment that I'll add is that not all subqueries are evaluated this way. This is specific to the BETWEEN operation. There may be a few other places where scalar subqueries are copied early in the query processing pipeline, although I have not verified that to be true recently.
It is similar to how view expansion works in SQL Server, which is to expand each reference (like a macro in C, for those of you who are programmers). Duplicate subtree detection is not done outside of the indexed view mechanism, so this can cause some duplicate work to happen in some cases. It can also lead to better plans because it can explore more possible plan choices.
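To make the BETWEEN case concrete: `x BETWEEN a AND b` is rewritten early in processing as `x >= a AND x <= b`, so a scalar subquery in the `x` position gets copied into both comparisons. A sketch with invented table names:

```sql
-- A scalar subquery used with BETWEEN:
SELECT OrderID
FROM dbo.Orders
WHERE (SELECT AVG(Amount) FROM dbo.Payments)
      BETWEEN MinAmount AND MaxAmount;

-- is processed roughly as two comparisons, each with its own
-- copy of the subquery subtree:
SELECT OrderID
FROM dbo.Orders
WHERE (SELECT AVG(Amount) FROM dbo.Payments) >= MinAmount
  AND (SELECT AVG(Amount) FROM dbo.Payments) <= MaxAmount;
```

As noted above, the duplicated subtrees are not re-merged outside the indexed view mechanism, which is why the extra work can show up at execution time.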