NOTE: the code in this blog is T-SQL rather than ODBC calls. Since ODBC can be hard to follow, and other APIs will have the same basic issues, I decided to use the simpler and more concise T-SQL, which should also appeal to a wider audience.
An ISV I work with recently ran into an interesting problem; here is the description and solution.
In short: an unexpectedly added trigger caused the application code to fail, both because of incomplete SQL syntax and because the code did not read through all of the returned results.
The ISV wanted to use the OUTPUT clause of the UPDATE statement in their ODBC (SNAC) based application. The OUTPUT clause is very useful for returning data to the application about the row, or rows, that were updated (or inserted / deleted). In the example below, the application wants to know the date/time of the updated row(s).
This could be accomplished by issuing the following statement:
UPDATE T SET COL2 = @Pcol2, COL3 = getdate() OUTPUT CAST(INSERTED.COL3 AS varchar(30)) WHERE COL1 = @Pcol1
The ISV coded the application to expect a return value for the number of rows affected and, if that value was greater than 0, to also read back the value of the updated date/time.
This worked well, until an external Partner application added a trigger to the table listed in the UPDATE statement.
Example: CREATE TRIGGER [dbo].[TTrigger1] on [dbo].[T] after update as update t2 set col1 = 0
Now the application failed on the UPDATE statement with the following error message:
[Microsoft][SQL Native Client][SQL Server]The target table 'T' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.
The error message is self-explanatory, but was a surprise to the ISV application (and the application developer). The developer did not expect a trigger to ever be created on the table.
There are two different methods of getting OUTPUT data from an UPDATE statement:
· UPDATE with the OUTPUT clause only – this returns the output results directly as part of the statement. This option cannot be used if the table has an enabled trigger defined on it.
· UPDATE with OUTPUT and INTO clauses – this returns the output into a specific table, or table variable. This option must be used if there is any possibility the table will have a trigger on it at any point.
· See the OUTPUT clause documentation for complete details.
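As a quick sketch of the two forms side by side (the table and column names here follow the article's example and are illustrative only):

```sql
-- Form 1: OUTPUT only -- results are returned directly to the caller.
-- This fails with Msg 334 if table T has an enabled trigger.
UPDATE T
SET COL3 = getdate()
OUTPUT INSERTED.COL3
WHERE COL1 = 1;

-- Form 2: OUTPUT ... INTO -- results are captured in a table variable
-- and read back with a SELECT. This works even when a trigger exists.
declare @out table (col3 datetime);
UPDATE T
SET COL3 = getdate()
OUTPUT INSERTED.COL3 INTO @out
WHERE COL1 = 1;
select col3 from @out;
```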
The developer then used the following syntax to send the same statement to SQL Server and get the expected result back:
declare @p165 table (col2 varchar(30));
UPDATE T SET COL2 = ?, COL3 = getdate() OUTPUT CAST(INSERTED.COL3 AS varchar(30)) INTO @p165 WHERE COL1 = 1;
select * from @p165
Now a subtlety appeared; can you guess what it was? If you guessed that additional results are returned, you are correct.
The ODBC code returned data in a loop using the following API calls: SQLFetch, SQLNumResultCols, SQLRowCount, and SQLMoreResults:
· The first result returned was the number of rows affected by the trigger, not the number of rows affected by the UPDATE statement, which was what the application was actually expecting
· The second result was the number of rows affected by the UPDATE statement
· The third result was the number of rows returned by the SELECT statement reading the table variable
· And finally, the actual data from the updated row(s) – which is what we really wanted in the first place!
So, the lessons to be learned here are:
1. Be aware that triggers will affect your UPDATE statements if you use the OUTPUT clause
2. Use the INTO clause to avoid the issue
3. Always use SQLMoreResults to read all of the result sets that could be returned from SELECT, UPDATE, INSERT, or DELETE statements
4. Triggers should include the 'SET NOCOUNT ON' statement so they do not return their own 'rows affected' counts
The application was changed to use the INTO clause, and SQLMoreResults was used to retrieve all the resulting data. Using SET NOCOUNT ON in trigger logic is also a best practice, as it prevents additional 'rows affected' results from being generated.
Here is a script to duplicate the issues I’ve described:
------You may want to run this script in steps from comment – to comment
------so you can follow along, instead of running the entire script at once
CREATE TABLE t(
[col1] [int] NOT NULL,
[col2] [varchar](30) NULL,
[col3] [datetime] NULL
) ON [PRIMARY]
insert into t values (1,'abc', getdate())
select * from t
UPDATE t SET col2 = 'Peter', col3 = getdate()
OUTPUT CAST(INSERTED.col3 AS varchar(30)) WHERE col1 = 1
------So far everything is good, Now let’s add the new table and the trigger
CREATE TABLE t2(
[col1] [int] NULL,
[col2] [datetime] NULL
)
insert into t2 values (2, getdate())
select * from t2
------In this example, the trigger: ttr1 will update the rows
------of a second table: t2
CREATE TRIGGER ttr1 on t after update as update t2 set col1 = 0
------OK, let’s try now with the trigger on
UPDATE t SET col2 = 'Peter', col3 = getdate() OUTPUT CAST(INSERTED.col3 AS varchar(30)) WHERE col1 = 1
------Chances are good you got the following error message
--Msg 334, Level 16, State 1, Line 1
--The target table 't' of the DML statement cannot have any enabled triggers --if the statement contains an OUTPUT clause without INTO clause.
----- let’s fix that now.
declare @p1 varchar(30)
UPDATE t SET col2 = 'Peter', col3 = getdate() OUTPUT CAST(INSERTED.col3 AS varchar(30)) INTO @p1 WHERE col1 = 1
------Notice this failed as well with the following error message
--Msg 1087, Level 16, State 1, Line 2
--Must declare the table variable "@p1".
------For this to work correctly we must use a table or
------a table variable where the 'INTO' data will reside,
------and be retrieved from
declare @p1 table (col2 varchar(30))
UPDATE t SET col2 = 'Peter', col3 = getdate() OUTPUT CAST(INSERTED.col3 AS varchar(30)) INTO @p1 WHERE col1 = 1
select * from @p1
--Now you get what we were originally looking for
-- the date/times of the rows that were updated
--Look at the results under the 'Messages' tab as well...
--you will see three separate 'rows affected' counts:
-- first, the row(s) in t2 updated by the trigger
-- then, the row(s) updated in t
-- and finally, the row(s) we selected from the table variable
--Now you can see why the application must use SQLMoreResults if it
--wants to retrieve all of the valid results.
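Finally, to illustrate lesson 4, here is a sketch of how the trigger could be rewritten with SET NOCOUNT ON (this continues the script above; the @p2 variable name is illustrative):

```sql
------Rewrite the trigger with SET NOCOUNT ON so it no longer
------emits its own 'rows affected' message
ALTER TRIGGER ttr1 on t after update
as
begin
    SET NOCOUNT ON
    update t2 set col1 = 0
end
go
------Re-run the UPDATE with OUTPUT...INTO; the Messages tab now shows
------only the counts for the UPDATE and the SELECT, not the trigger's
declare @p2 table (col2 varchar(30))
UPDATE t SET col2 = 'Peter', col3 = getdate()
OUTPUT CAST(INSERTED.col3 AS varchar(30)) INTO @p2 WHERE col1 = 1
select * from @p2
```

This leaves one less result for the application's SQLMoreResults loop to step through, though the loop is still required for the remaining results.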