In this post I’ll look at an actual deadlock, then troubleshoot it using the steps I described in Deadlock Troubleshooting, Part 1 so you can see them in action. This is a simplified version of a deadlock scenario that an internal customer here at Microsoft called us for help with. To set up the scenario, run this:
-- Batch #1
CREATE DATABASE deadlocktest
GO
USE deadlocktest
SET NOCOUNT ON
DBCC TRACEON (1222, -1)
GO
IF OBJECT_ID ('t1') IS NOT NULL DROP TABLE t1
IF OBJECT_ID ('p1') IS NOT NULL DROP PROC p1
IF OBJECT_ID ('p2') IS NOT NULL DROP PROC p2
GO
CREATE TABLE t1 (c1 int, c2 int, c3 int, c4 char(5000))
GO
DECLARE @x int
SET @x = 1
WHILE (@x <= 1000) BEGIN
   INSERT INTO t1 VALUES (@x*2, @x*2, @x*2, @x*2)
   SET @x = @x + 1
END
GO
CREATE CLUSTERED INDEX cidx ON t1 (c1)
CREATE NONCLUSTERED INDEX idx1 ON t1 (c2)
GO
CREATE PROC p1 @p1 int AS
SELECT c2, c3 FROM t1 WHERE c2 BETWEEN @p1 AND @p1+1
GO
CREATE PROC p2 @p1 int AS
UPDATE t1 SET c2 = c2+1 WHERE c1 = @p1
UPDATE t1 SET c2 = c2-1 WHERE c1 = @p1
GO
Now, run this from another connection:
-- Batch #2
EXEC p2 4
Finally, leave that one running while you run this from a third connection:
-- Batch #3
CREATE TABLE #t1 (c2 int, c3 int)
WHILE (1=1) BEGIN
   INSERT INTO #t1 EXEC p1 4
   TRUNCATE TABLE #t1
END
This will cause a deadlock; you should see one of the batches aborted by a 1205 error. Now that we have a reproducible deadlock, I’ll follow the troubleshooting steps that I posted in Deadlock Troubleshooting, Part 1.
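Because one of the two sessions will be chosen as the deadlock victim and aborted with error 1205, the calling code should be prepared to retry. Here is a minimal retry sketch (illustrative only; the retry count and the call to p1 are just examples, and the THROW syntax assumes SQL Server 2012 or later):

```sql
-- Retry the work if this session is chosen as the deadlock victim.
DECLARE @retries int = 0;
WHILE (@retries < 3)
BEGIN
    BEGIN TRY
        EXEC p1 4;       -- the work that may be picked as a deadlock victim
        BREAK;           -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205       -- 1205 = chosen as deadlock victim
            SET @retries = @retries + 1;
        ELSE
            THROW;                     -- any other error: re-raise it
    END CATCH
END
```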
At this point, if you re-run Batch 2 and Batch 3, you’ll find that the deadlock has been solved. You didn’t even have to use steps 5-8 or the list of other deadlock avoidance strategies that I listed in Part 1 of this series of posts.
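For reference, DTA's fix was an index change; given the query shapes above, it would be something like a covering index that lets p1's SELECT satisfy itself entirely from the nonclustered index, so it never has to chase a bookmark lookup into the clustered index while p2 is updating it (index name is hypothetical; the INCLUDE clause requires SQL Server 2005 or later):

```sql
-- Hypothetical shape of the fix: cover the SELECT in p1 so it reads
-- c2 and c3 from a single nonclustered index and touches nothing else.
CREATE NONCLUSTERED INDEX idx2 ON t1 (c2) INCLUDE (c3);
```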
In a subsequent post I'll look at the details of the query plans involved in this particular deadlock to understand what caused the deadlock and why DTA's proposed index fixed it.
(This post series is continued in Deadlock Troubleshooting, Part 3.)
Bart Sir, you are great! I also worked for Microsoft in the old Willows building 10525. This is a great help for every SQL lover.
Thanks, and keep posting!!
(Sorry if I seem to have posted this twice - forgot to log in the first time!)
This is a really useful post. One day the documentation that ships with the SQL Server product will be equally clear...
You mention the option of forcing one of the transactions to block early on the process, by doing something like:
SELECT @x = COUNT(*) FROM t1 WITH (HOLDLOCK, UPDLOCK) WHERE c1 = @p1
Would this in general:
a) lock a single set of rows (i.e. the ones where c1=@p1) in the index cidx?
b) effectively place an index-wide lock?
If it is a), then the issue I have is that the two processes are independent (different users logged into different clients!), so the runtime values of @p1 for the two procedures are more than likely to be completely different - in fact the values apply to different columns in the table, so there is no relationship between the parameters for the procedures at all.
In addition, the COUNT(*) is likely to be expensive in some cases - what if there are millions of rows? If the locking is an index-wide lock, would this lock still be held even if the value for @p1 was chosen such that there were no rows at all where c1=@p1?
These may be silly questions !
This is such black art stuff.
First, apologies for the delayed response. If there is an index on column [c1], then SQL will use a very selective index seek and lock only the rows where [c1]=@p1. But if there is no index that has [c1] as the leading column, SQL will need to scan the table or an index, and it will lock (and hold the lock on) every row that it scans.
DROP TABLE t1;
CREATE TABLE t1 (c1 int PRIMARY KEY, c2 int);
INSERT INTO t1 SELECT [object_id], [object_id] FROM master.sys.objects;
SELECT OBJECT_ID ('t1');  -- note the table's object id so you can identify its locks
-- No index on [c2]: this scans and locks every row it reads
SELECT COUNT(*) FROM t1 WITH (ROWLOCK, UPDLOCK, HOLDLOCK) WHERE c2 = 5;
-- [c1] is the primary key: this seeks and locks only the qualifying row
SELECT COUNT(*) FROM t1 WITH (ROWLOCK, UPDLOCK, HOLDLOCK) WHERE c1 = 5;
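To see the difference for yourself, you can compare the locks each of those two SELECTs leaves behind. One way to do that (assuming SQL Server 2005 or later, where sys.dm_tran_locks is available) is to run the SELECT inside an open transaction and then count this session's locks by type:

```sql
BEGIN TRAN;
-- Run one of the two SELECTs above, e.g. the scan on the unindexed column:
SELECT COUNT(*) FROM t1 WITH (ROWLOCK, UPDLOCK, HOLDLOCK) WHERE c2 = 5;
-- Now count the locks this session is holding, grouped by type and mode:
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;
ROLLBACK;  -- release the locks
```

Repeat with the seek on [c1] and you should see far fewer KEY locks held.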
Thanks, man! This is a great testing tool to test your deadlock detection setup. Thanks for sharing.