As of September 1, 2013, we decided to retire the SQLCAT.COM site and use MSDN as the primary vehicle for new SQL Server content. This was done to minimize reader confusion and to streamline content publication. The MSDN SQLCAT blogs already include most SQLCAT.COM content and will continue to be updated with more SQLCAT learnings.
To make the transition to MSDN smoother, we are in the process of reposting a few of SQLCAT.COM's blogs that are still very actively viewed. Following is a repost of one of the SQLCAT.COM blogs that still draws strong interest.
Also, follow us on Twitter; we normally use our handles @SQLCAT and @WinAzureCAT to announce news.
Thank you for your interest.
Writer: Steve Howard
Technical Reviewers: Bob Taylor, Jay Askew, David Levy, James Podgorski, Thomas Kejser, Shaun Tinline-Jones, Stuart Ozer, Lubor Kollar, Campbell Fraser, Kun Cheng, Venkata Raj Pochiraju
Published: October 2011
Applies to: Microsoft SQL Server 2005, Microsoft SQL Server 2008, Microsoft SQL Server 2008 R2, Microsoft SQL Server code-named “Denali,” SQL Azure
· OR logic in the WHERE clause In this pattern, the condition on each side of the OR operator in the WHERE or JOIN clause evaluates different tables. This can be resolved by use of a UNION operator instead of the OR operator in the WHERE or JOIN clause.
· Aggregations in intermediate results sets This pattern has joins on aggregated data sets, which can result in poor performance. This can be resolved by placing the aggregated intermediate result sets in temporary tables.
· A large number of very complex joins This pattern has a large number of joins, especially joins on ranges, which can result in poor performance because of progressively degrading estimates of cardinality. This can be resolved by breaking down the query and using temporary tables.
· A CASE clause in the WHERE or JOIN clause This pattern has CASE operators in the WHERE or JOIN clauses, which cause poor estimates of cardinality. This can be resolved by breaking down the cases into separate queries and using the Transact-SQL IF statement to direct the flow for the conditions.
· WHERE a.col1 = @val1 OR a.col1 = @val2 … This pattern is just another way of writing WHERE col1 IN (@val1, @val2 …). Testing this pattern did not produce poor query plans. However, note that each value in parentheses requires SQL Server to navigate from the top of an index B tree to the bottom, increasing the number of logical reads by the depth of the index multiplied by the number of values in parentheses.
The key to this pattern is that the same column is being evaluated by both sides of the OR operator. In the example, this column is col1. If an index exists with col1 as its first column, a seek operation on this index can satisfy each condition. If the index is small and does not cover the query, you might get a scan. The same guidelines apply to covering indexes with this pattern as apply to other queries.
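A sketch of this same-column pattern follows; the table, column, and index names are hypothetical, not taken from the article's appendixes:

```sql
-- Hypothetical table and index for illustration.
CREATE TABLE dbo.Orders (
    OrderId   INT IDENTITY(1,1) PRIMARY KEY,
    Status    INT NOT NULL,
    OrderDate DATETIME NOT NULL
);
CREATE INDEX IX_Orders_Status ON dbo.Orders (Status);

-- The two forms below are equivalent. Because Status is the leading
-- column of IX_Orders_Status, each value can be satisfied by an index
-- seek, and each seek costs roughly (index depth) logical reads.
SELECT OrderId FROM dbo.Orders WHERE Status = 1 OR Status = 3;
SELECT OrderId FROM dbo.Orders WHERE Status IN (1, 3);
```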
· WHERE a.col1 = @val1 OR a.col2 = @val2 … In this query pattern, two columns in the same table are evaluated by the two sides of the OR operator. If two indexes exist on this table—one with col1 as its leading column and the other with col2 as its leading column—then SQL Server will run a scan if the tables are small. However, when the data set gets large enough, SQL Server will use an index union to retrieve the rows. (See Figure 1.) When SQL Server is able to use this index union, this query pattern is not a problem.
Figure 1: In this query plan, the pattern of WHERE a.col1 = @val1 OR a.col2 = @val2 is used on a large table. The query plan used involves an index union to retrieve the rows meeting either requirement. The Stream Aggregate operator eliminates any duplicates. The Key Lookup operator is required after elimination of duplicates to retrieve columns in rows where ManagerID = 10, but the columns are not included in the forTest index.
· WHERE a.col1 = @val1 OR a.col2 IN (SELECT col2 FROM tab2) By analyzing this pattern, you can see that SQL Server rewrote the IN clause and performed a join to tab2, as expected. Different indexes are used to retrieve the rows that fit either condition, and the results are sorted to eliminate duplicates at the end. (See Figure 2.) This is an efficient plan; applying the techniques discussed in this paper does not result in a better query plan.
Figure 2: This plan uses a variation of the pattern WHERE a.col1 = @val1 OR a.col2 IN (SELECT col2 FROM tab2). The retrieval from the dbo.Employee table (which would be ‘a’ in the pattern) is performed with two seeks. The smallest table is scanned because there is no filter in the IN clause. The plan produced is not problematic.
· WHERE a.col1 = @val1 OR b.col2 = @val2 The pattern represented by this query predicate is problematic. Breaking down the query into two steps produces a significantly “cheaper” plan. Note that in this pattern, there are two tables involved; on each side of the OR operator is a condition that applies to different tables. This problematic condition is detailed, from setup to execution, in Appendix A: Example of a Problematic Use of OR.
Following is the example query described in Appendix A, which produces a problematic query plan. The first script uses local variables to set up the range and uses OR logic in the WHERE clause.
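A sketch of that script follows; the shape matches the Appendix A description, but the join column and email values shown here are illustrative, not the exact appendix script:

```sql
-- Local variables set up the range; the OR in the WHERE clause mixes
-- conditions on two different tables (dbo.Employee and Person.Contact).
DECLARE @Start INT = 100;
DECLARE @End   INT = 200;

SELECT e.EmployeeId, c.EmailAddress
FROM dbo.Employee AS e
INNER JOIN Person.Contact AS c
    ON c.ContactID = e.ContactID          -- join column assumed for illustration
WHERE (e.EmployeeId BETWEEN @Start AND @End)
   OR c.EmailAddress IN (N'a@example.com', N'b@example.com', N'c@example.com');
```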
Table 1. Indexes for Adventureworks.Person.Contact
An index exists on Adventureworks.Person.Contact (EmailAddress) that should be able to retrieve the three email addresses requested in the query. The primary key on dbo.Employee in the example tables is on EmployeeId, and the supporting index is clustered. SQL Server should be able to retrieve the results from the two tables with two inexpensive index seeks. However, executing the query results in an execution plan in which SQL Server scans the clustered index on both tables involved in the join. (See Figure 4.)
Figure 4: In this query pattern, SQL Server scans the clustered index on both tables involved in the join.
When STATISTICS IO and STATISTICS TIME are set to ON, as they are in the example in Appendix A, the query plan results in the following output.
The result is a total of 7,532 page reads and 94 milliseconds of CPU time. This may not be noticeable in a small system in which the plan is not executed frequently. However, as tables grow in size or as a query grows in complexity, this type of query can become a drain on the system because both the number of page reads and the CPU time will increase as the data size increases.
To reduce the cost, you can break down the query into separate queries that SQL Server can interpret more easily; you can rewrite the query, breaking the WHERE clause predicates into conditions that apply to the tables individually.
For a simple query such as that in the example, it is easy to see that the end result set is a union of results that meet the condition of a.EmployeeId between 100 and 200 and the rows that have the three email addresses you are searching for, as shown below.
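A sketch of that rewrite, using the same illustrative table names and values as above rather than the exact appendix script:

```sql
DECLARE @Start INT = 100;
DECLARE @End   INT = 200;

-- Each branch filters on one table only, so each can use its own index
-- seek. UNION (not UNION ALL) removes duplicates, matching the OR logic.
SELECT e.EmployeeId, c.EmailAddress
FROM dbo.Employee AS e
INNER JOIN Person.Contact AS c ON c.ContactID = e.ContactID
WHERE e.EmployeeId BETWEEN @Start AND @End
UNION
SELECT e.EmployeeId, c.EmailAddress
FROM dbo.Employee AS e
INNER JOIN Person.Contact AS c ON c.ContactID = e.ContactID
WHERE c.EmailAddress IN (N'a@example.com', N'b@example.com', N'c@example.com');
```

A row that satisfies both conditions appears in both branches, which is why UNION's duplicate elimination is required for equivalence with the original OR.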
Though functionally equivalent to the original query, SQL Server now handles this query differently. In this query plan, a UNION is used instead of an OR condition. By using the UNION, SQL Server is able to perform seeks on all indexes, reducing the overall query cost. (See Figure 5.)
Figure 5: In this query plan, a UNION is used instead of an OR condition. By using the UNION, SQL Server is able to perform seeks on all indexes, and the overall query cost has been reduced dramatically.
Although it required a little more typing, using the UNION in place of the OR reduced the number of page reads to 26 and the CPU time to less than one millisecond.
1. Copy the setup queries from Appendix C into a query window, and execute them.
2. Copy Query 1 and Query 2 from Appendix D into separate query windows pointing to the database you used when you ran the setup in Appendix C.
3. In each window from step 2, run SET STATISTICS IO ON and SET STATISTICS TIME ON.
4. Click Include Actual Execution Plan at the top of the page.
Figure 6: As joins are made between aggregated intermediate result sets, the "Estimated Number of Rows" diverges farther from the "Actual Number of Rows," as you can see in the tool tip that appears in the execution plan.
Figure 7: SQL Server underestimates the number of rows at this stage in the execution plan, leading to the decision to use a nested loop join strategy for the next left outer Join.
Figure 8: A wider view of the part of the query plan shows that the table scan (at the bottom) will be performed 334 times rather than the one time that the estimated number of rows would indicate. This type of underestimation with a nested loop join can cause the query cost to skyrocket.
Figure 9: Because this plan breaks down the query by using a temporary table, SQL Server is able to compute statistics on the intermediate result set.
Table 2. Improvement on the demo data by breaking the query into two queries and using a temp table
· A query with a large number of very complex joins With each subsequent join, the original data set becomes farther removed from the initial data set, and more factors are introduced into the estimation of cardinality. Data anomalies within a single table can cause a skewed cost estimate that is then multiplied across every subsequent join.
In queries with large numbers of joins, it might be better to break down the query into multiple queries, bringing intermediate result sets into temporary tables. You can then perform subsequent operations or joins on the intermediate sets in temporary tables.
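A minimal sketch of this technique, with illustrative table names:

```sql
-- Materialize the aggregate into a temp table first. SQL Server creates
-- statistics on #SalesByCustomer, so the join that follows gets a
-- realistic cardinality estimate instead of a compounding guess.
SELECT CustomerId, SUM(TotalDue) AS TotalSales
INTO #SalesByCustomer
FROM dbo.SalesOrder              -- illustrative source table
GROUP BY CustomerId;

SELECT c.CustomerName, s.TotalSales
FROM dbo.Customer AS c           -- illustrative
LEFT OUTER JOIN #SalesByCustomer AS s
    ON s.CustomerId = c.CustomerId;

DROP TABLE #SalesByCustomer;
```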
· A CASE clause in the WHERE or JOIN clause This case refers specifically to a clause such as WHERE col1 = CASE @val1 WHEN 1 THEN ‘val1’ WHEN 2 THEN ‘val2’ WHEN 3 THEN col1. Such clauses create situations in which the number of rows is difficult, if not impossible, to estimate. Any time the number of rows can be badly misestimated, a poor query plan can result.
Instead of trying to include such logic in a query, use the Transact-SQL conditional statements IF and ELSE IF to break the conditions into multiple statements that can cover the different possible values for @val1. Breaking down the query in this way leads to a more accurate estimation of results.
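A sketch of the IF / ELSE IF rewrite for the CASE clause quoted above (dbo.Tab1 is an illustrative table name):

```sql
DECLARE @val1 INT = 2;

-- Each branch has a plain, estimable predicate instead of a CASE whose
-- selectivity the optimizer cannot judge.
IF @val1 = 1
    SELECT col1 FROM dbo.Tab1 WHERE col1 = 'val1';
ELSE IF @val1 = 2
    SELECT col1 FROM dbo.Tab1 WHERE col1 = 'val2';
ELSE IF @val1 = 3
    SELECT col1 FROM dbo.Tab1 WHERE col1 = col1;  -- matches every non-NULL col1, like the CASE's third branch
```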
1. Ensure that the Adventureworks demo database is installed on SQL Server.
2. Run the following script to set up the other tables that are needed.
· SET STATISTICS IO (Transact-SQL) http://msdn.microsoft.com/en-us/library/ms184361.aspx?lc=1033
· SET STATISTICS TIME (Transact-SQL) http://msdn.microsoft.com/en-us/library/ms190287.aspx
· Table and Index Organization http://msdn.microsoft.com/en-us/library/ms189051.aspx
· DBCC SHOW_STATISTICS (Transact-SQL) http://msdn.microsoft.com/en-us/library/ms174384.aspx
· Query Tuning Recommendations http://msdn.microsoft.com/en-us/library/ms188722.aspx
· Nested Loop Joins http://msdn.microsoft.com/en-us/library/ms191318(v=SQL.105).aspx
Thanks for this wonderful post Steve !!!!
Do we have any post related to query execution plan basic understanding?
Just want to echo the points made here with one of my own examples, where I had to tune a query that is part of a data loading process and was taking around 7.5 minutes to complete. Though 7.5 minutes doesn't seem very long for a data loading process, that was unacceptable. What it mainly does is pull data from a linked server and join with some local tables. When I looked at the query plan, it used a nested loop join, and the estimated row count from the remote query was 1, which is much like anti-pattern 2 here. So I forced it to use a hash join, and the execution time was down to 2.5 minutes. Then I took it one step further and used a temp table to store the results from the remote query. The execution time was drastically reduced to a few seconds. I also tried a CTE; as mentioned here, it didn't produce any better result.
If we had support for the NOEXPAND hint for CTEs and subqueries, in many cases we could avoid breaking the query into two (or more) parts manually.
There is an ISV application called Relativity that is used in the legal industry for document discovery. For whatever reason, the developers became fully locked in to generating SQL expressions in a manner for which the SQL Server query optimizer cannot produce a good execution plan, even though alternative expressions produce very good plans. So it is a good collection of SQL that can really hurt you.