SQL Server Deadlocks Due to Character Data Type Conversion When Using SQL Server JDBC drivers


Recently, I worked on a case with a client whose SQL Server was encountering frequent deadlocks while they were running stress tests from their application. They had tried several techniques to resolve the issue, including forcing an index on the query, disabling lock escalation, and enabling snapshot isolation on the database, but nothing made the problem go away!

I decided to blog about this issue because I think it might help members of the SQL community who face hardships while working on deadlock issues.

A deadlock occurs when two or more tasks permanently block each other, each holding a lock on a resource that the other tasks are trying to lock. The typical solution is a tweak in the application code or a change to the table schema, such as adding an index. With very few exceptions, deadlocks are a natural side effect of blocking, not a SQL Server bug. Typical methods you can use to resolve deadlocks include:

· Adding and dropping indexes.

· Adding index or query hints.

· Modifying the application to access resources in the same order.

· Removing activity from the transaction, such as triggers (by default, triggers are transactional).

· Keeping transactions as short as possible.
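Because some deadlocks can never be ruled out entirely, applications typically also retry the victim's transaction. Here is a minimal sketch of such retry logic; the helper class and its names are my own, and 1205 is SQL Server's "chosen as deadlock victim" error number:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

// Hypothetical retry helper: re-runs work that was chosen as a
// deadlock victim (SQL Server error 1205), up to maxAttempts times.
public class DeadlockRetry {
    static final int DEADLOCK_VICTIM = 1205;

    static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (SQLException e) {
                if (e.getErrorCode() != DEADLOCK_VICTIM) {
                    throw e; // not a deadlock victim: do not retry
                }
                last = e; // deadlock victim: try again
            }
        }
        throw last; // all attempts were deadlock victims
    }
}
```

In real code you would also add a short back-off between attempts so that both victims do not immediately collide again.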

Following is the deadlock graph that I got from the error log, of course after enabling trace flag 1222 with “DBCC TRACEON (1222, -1)” or by adding “-T1222” as a SQL Server startup parameter. This trace flag is new in SQL Server 2005 and is a much improved version of the tried-and-true -T1204; if you’re running SQL Server 2005, you should be using 1222 instead of 1204:

<Removing unessential information for the sake of brevity>

Deadlock graph

<deadlock-list>

<deadlock victim="process1e3a8e988">

  <process-list>

   <process id="process1e3a8e988" waitresource="KEY: 11:72057594097565696 (a7025366f09f)" transactionname="implicit_transaction" lockMode="U" status="suspended" spid="66" priority="0" trancount="2" clientapp="Microsoft SQL Server JDBC Driver"...>

    <executionStack>

     <frame>UPDATE TBL_ABC SET COL_A=@P0, COL_B=@P1, COL_C=@P2 WHERE COL_P=@P3 AND COL_Q=@P4 AND COL_R=@P5</frame>

    </executionStack>

    <inputbuf>

       (@P0 nvarchar(4000),@P1 datetime2,@P2 bigint,@P3 nvarchar(4000),@P4 nvarchar(4000),@P5 int) UPDATE TBL_ABC SET COL_A=@P0, COL_B=@P1, COL_C=@P2 WHERE COL_P=@P3 AND COL_Q=@P4 AND COL_R=@P5

   </inputbuf>

   </process>

   <process id="process1e3a8f948" waitresource="KEY: 11:72057594097565696 (ba02ec09c2ae)" transactionname="implicit_transaction" trancount="2" spid="71" clientapp="Microsoft SQL Server JDBC Driver"  >

    <executionStack>

     <frame>UPDATE TBL_ABC SET COL_A=@P0, COL_B=@P1, COL_C=@P2 WHERE COL_P=@P3 AND COL_Q=@P4 AND COL_R=@P5</frame>

    </executionStack>

    <inputbuf>

       (@P0 nvarchar(4000),@P1 datetime2,@P2 bigint,@P3 nvarchar(4000),@P4 nvarchar(4000),@P5 int) UPDATE TBL_ABC SET COL_A=@P0, COL_B=@P1, COL_C=@P2 WHERE COL_P=@P3 AND COL_Q=@P4 AND COL_R=@P5

   </inputbuf>

   </process>

  </process-list>

  <resource-list>

     <keylock hobtid="72057594097565696 (ba02ec09c2ae)"  mode="X" >

      <owner-list>

       <owner id="spid=66" mode="X"/>

      </owner-list>

      <waiter-list>

       <waiter id="spid=71" mode="U" requestType="wait"/>

      </waiter-list>

     </keylock>

    <keylock hobtid="72057594097565696 (a7025366f09f)"  mode="X" >

      <owner-list>

       <owner id="spid=71" mode="X"/>

      </owner-list>

      <waiter-list>

       <waiter id="spid=66" mode="U" requestType="wait"/>

      </waiter-list>

     </keylock>

  </resource-list>

</deadlock>

</deadlock-list>

If you want to know how to decode a deadlock graph, follow this great post by Bart Duncan.

Interestingly, the SPIDs involved in the deadlock are running the same SQL statement:

UPDATE TBL_ABC

SET COL_A=@P0, COL_B=@P1, COL_C=@P2

WHERE

COL_P=@P3 AND COL_Q=@P4 AND COL_R=@P5

Note: I have changed the column names since this deadlock graph contained a client's confidential information.

Based on the initial analysis of the deadlock graph, the most likely causes were the following, some more likely than others:

1. The profiler trace showed that the application was running a number of statements inside an implicit transaction and touching this table multiple times, which increased the chances of a deadlock. I therefore wanted to evaluate whether all of this work really needed to be performed inside one transaction and, if not, to remove it. But for business reasons we couldn't interfere with the currently defined transactions!

2. The update itself was deadlocking with another instance of the same update. If so, an easy fix would be to create an index on the predicate columns COL_P, COL_Q, and COL_R. This would be trial and error, but the idea is to get SQL Server to use that index for the WHERE/search part of the query; it could be tried on all three columns (this order seems the most selective), or just the first two. However, we then found that a primary key nonclustered index was already defined on these three columns, so we could not modify the table schema!

3. I also found that they were using implicit transactions at the connection level, and seeing trancount=2 in the deadlock graph increased our suspicion. But after discussing it with the application team, I concluded that they do an explicit commit for every transaction in the code. So, implicit transactions were not contributing to this deadlock.

As per the schema of table TBL_ABC, the following should be taken into consideration:

PK_TBL_ABC is the primary key nonclustered index on (COL_P, COL_Q, COL_R)

Data types of the columns of interest:

[COL_P] [varchar](50)

[COL_Q] [varchar](150)

[COL_R] [int]

I started looking for ways to speed up the queries so that locks wouldn't be held for too long. That's when I decided to look at the plan of the UPDATE statement to see what route it was taking to update the column values.

Here is part of the plan of the UPDATE statement (with unessential information removed):

|--Index Scan(OBJECT:([DatabaseName].[dbo].[TBL_ABC].[PK_TBL_ABC]),  WHERE:([DatabaseName].[dbo].[TBL_ABC].[COL_R]=[@P5] AND CONVERT_IMPLICIT(nvarchar(50),[DatabaseName].[dbo].[TBL_ABC].[COL_P],0)=[@P3] AND CONVERT_IMPLICIT(nvarchar(150),[DatabaseName].[dbo].[TBL_ABC].[COL_Q],0)=[@P4]) ORDERED FORWARD)

The RPC event of this statement was:

(@P0 nvarchar(4000),@P1 datetime2,@P2 bigint,@P3 nvarchar(4000),@P4 nvarchar(4000),@P5 int)

UPDATE TBL_ABC SET COL_A=@P0, COL_B=@P1, COL_C=@P2 WHERE COL_P=@P3 AND COL_Q=@P4 AND COL_R=@P5

In the plan, an INDEX SCAN is happening because of the "CONVERT_IMPLICIT". Note that, per the table schema, [PK_TBL_ABC] is a primary key nonclustered index on (COL_P, COL_Q, COL_R), and the query's search criteria match the columns of that nonclustered index. The optimizer should have opted for an INDEX SEEK but instead chose an INDEX SCAN. If we could turn this INDEX SCAN into an INDEX SEEK, we would speed up the UPDATE statement and hence avoid the deadlock.

The INDEX SCAN is happening because of CONVERT_IMPLICIT, which is not allowing the optimizer to use the [PK_TBL_ABC] index! This happens because we are comparing the value of a parameter against a table column. For SQL Server to compare two values of different data types, it has to 'upgrade' one type to match the other. For example, to compare a varchar value to an nvarchar value, SQL Server needs to convert the varchar value to nvarchar; converting in the opposite direction could lose data. And that is exactly what is happening in our case.

Breaking the above query plan down for the INDEX SCAN:

WHERE:([DatabaseName].[dbo].[TBL_ABC].[COL_R]=[@P5] ------- No convert!

AND

CONVERT_IMPLICIT(nvarchar(50),[DatabaseName].[dbo].[TBL_ABC].[COL_P],0)=[@P3] ----------------->>>> Implicit conversion

AND

CONVERT_IMPLICIT(nvarchar(150),[DatabaseName].[dbo].[TBL_ABC].[COL_Q],0)=[@P4]) ----------------->>>> Implicit conversion

Looking at the parameters @P3 and @P4 coming from the application:

(@P0 nvarchar(4000),

@P1 datetime2,

@P2 bigint,

@P3 nvarchar(4000),------------------------>>>> nvarchar(4000)

@P4 nvarchar(4000),------------------------>>>> nvarchar(4000)

@P5 int)

By the definition of the table, columns COL_P and COL_Q are of type varchar.

So, since the parameters are nvarchar, SQL Server has to convert the entire column rather than the parameter value. Hence, the query couldn't use the defined index [PK_TBL_ABC] to do a SEEK.

After doing some more research, we found that the application was using the JDBC driver, and I came across documentation from the SQL CAT team stating that the DEFAULT value of the connection property sendStringParametersAsUnicode is TRUE. If sendStringParametersAsUnicode is set to "true", string parameters are sent to the server in Unicode format: http://msdn.microsoft.com/en-us/library/ms378988.aspx. That means that if this property is set to "false", the string parameters passed from the application will be sent as varchar, not nvarchar, and hence there will be no implicit conversion!
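As a sketch, the property can be appended directly to the JDBC connection URL. The server, port, and database names below are placeholders, and the URL-building helper is my own:

```java
// Sketch: building a Microsoft JDBC driver connection URL with
// sendStringParametersAsUnicode disabled, so that setString() parameters
// are sent as varchar instead of nvarchar. Host and database are placeholders.
public class JdbcUrlExample {
    static String buildUrl(String host, int port, String database,
                           boolean sendStringParametersAsUnicode) {
        return "jdbc:sqlserver://" + host + ":" + port
                + ";databaseName=" + database
                + ";sendStringParametersAsUnicode=" + sendStringParametersAsUnicode;
    }

    public static void main(String[] args) {
        // DriverManager.getConnection(buildUrl(...), user, password) would
        // then open a connection with the property applied.
        System.out.println(buildUrl("myserver", 1433, "MyDatabase", false));
    }
}
```

The same property can also be supplied via a java.util.Properties object passed to DriverManager.getConnection instead of being embedded in the URL.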

We set this property to false in the connection string, after which the plan showed an INDEX SEEK instead of an INDEX SCAN, and the deadlocks in SQL Server were successfully resolved.

I hope this post is useful for the many developers and DBAs who work on resolving deadlocks. This is one of the more unusual deadlock issues I have come across, but that doesn't mean deadlocks will never happen once you make these changes. There can be many other reasons behind deadlocks, and I would recommend you start working on resolving them by reading through the blog posts from Bart Duncan.

Thanks for reading this blog!

Sumit Sarabhai

Technical Lead

Microsoft SQL Server Team

  • Thank you for the good information.

    I also wrote about tuning JDBC for SQL Server in the book "SQL Server MVP Deep Dives Vol. 2", which will be published soon.

    You can also see different Query Optimizer behavior for Unicode characters if you use certain other collations, e.g. Korean_Wansung_CI_AS.

    I've been curious about that as well.

    Regards,

    Jungsun Kim

  • Excellent tips. Although this is meant for the JDBC driver, the change in behavior due to the data type difference is very useful information. I have seen SQL developers not pay attention to the type [especially for input parameters] as long as it is some form of string data. This is a good point to keep in mind in such cases.

  • Sumit,

    Excellent post. Last Friday I was working on an issue and found queries doing an implicit convert, and I was wondering why the application sent all character-type parameters as nvarchar. I saw this blog over the weekend and it was spot on. The app was using the open-source jTDS driver instead of Microsoft JDBC, and it has the same flag set to true by default.

    Thanks for this post, it helped a lot!!

    All the best !!

    Regards,

    Ashwin Menon
