Yukon Engine: CLR Integration III


The story so far:

At the end of the last instalment, we'd created a simple user-defined function in C# and used the CREATE ASSEMBLY and CREATE FUNCTION statements to catalogue the assembly and function. But so far we've really only scratched the surface of the capabilities of Yukon managed code. To reach a closer functional parity with Transact-SQL, we need to be able to access existing data in the database from managed code. In this instalment we therefore look at the new in-proc .NET Data Provider for SQL Server.

Since the inception of the .NET Framework, the recommended strategy for accessing relational data has been ADO.NET. To recap quickly, the classes pertaining to ADO.NET reside largely within the System.Data namespace, and provide a mechanism for connecting to databases and working with data either using a disconnected DataSet object or using a fire-hose forward-only cursor through a DataReader instance. To provide greater database neutrality, ADO.NET implements a provider model in which connected objects such as SqlConnection and OracleConnection implement a common interface IDbConnection. Most of the methods and properties you're likely to call are exposed via these common interfaces. (You'll find a far more exhaustive treatment of the subject in this book, which I highly recommend.)

Much of what you probably already know about ADO.NET also applies when writing managed code for Yukon. You'll find yourself working with common classes such as SqlConnection, SqlDataReader, and SqlTransaction, but there are a few differences that reflect the fact that the code now executes inside the database engine. Let's cover some of the main points of significance:

  • Since any data access now occurs in the same process as the database engine itself, there's a separate in-proc data provider. Instead of using classes in the System.Data.SqlClient namespace, managed code for Yukon uses the System.Data.SqlServer namespace.
  • There's no support in the SqlServer namespace for creating disconnected DataSet objects using a data adapter. This makes sense: it would be pointless to create a disconnected object within a permanently connected context.
  • On the other hand, there is support for the new SqlResultSet object that is introduced in the Whidbey release of ADO.NET. This provides a connected object rather like the old ADO Recordset that can be used for scrollable cursor-based navigation. Although set-based data manipulation is still desirable, the cost of a server-side cursor is greatly reduced in this context. One helpful attribute of a SqlResultSet is that it derives from SqlDataReader, allowing you to use it for data-binding even if the container hasn't been updated for Whidbey.
  • Instead of spinning up your own objects, the SqlServer namespace provides a class factory called SqlContext that creates connection, command, result set, error and transaction objects.
  • You can return information to the outside world by means of a SqlPipe object (also created by the SqlContext factory), which allows you to send strings, data readers and rows, as well as errors.
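
To make the SqlResultSet point above concrete, here's a minimal sketch. It's written against the Beta 1 surface as I understand it, so treat the ExecuteResultSet method and the ResultSetOptions flag as assumptions that may well shift before release:

   using System.Data.Sql;
   using System.Data.SqlServer;

   public class ResultSetDemo
   {
      public static void SendAuthors()
      {
         SqlCommand cmd = SqlContext.GetCommand();
         cmd.CommandText = "SELECT au_id, au_lname FROM authors";

         // Ask for a scrollable, server-side result set rather than a
         // forward-only reader (assumed Beta 1 API shape)
         using (SqlResultSet rs =
            cmd.ExecuteResultSet(ResultSetOptions.Scrollable))
         {
            // Because SqlResultSet derives from SqlDataReader, the pipe
            // can stream it back to the caller like any other reader
            SqlContext.GetPipe().Send(rs);
         }
      }
   }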

Let's have a look at a very simple stored procedure written in C#:

   using System.Data.SqlServer;

   public class PubsSprocs
   {
      public static void GetBookInfo()
      {
         using (SqlCommand cmd = SqlContext.GetCommand())
         {
            cmd.CommandText = "SELECT * FROM authors";

            // Stream the results straight back to the caller
            SqlPipe pipe = SqlContext.GetPipe();
            pipe.Send(cmd.ExecuteReader());
         }
      }
   }

To catalogue this against the pubs database, you'd use something like:

   CREATE PROCEDURE dbo.usp_get_author_info
      AS EXTERNAL NAME YukonCLR:[PubsSprocs]::GetBookInfo

Calling EXEC usp_get_author_info would of course give you the same results as executing the SELECT statement directly. This procedure isn't exactly a great candidate for managed code, of course, but it demonstrates some of the key concepts I've mentioned above. Notice the use of the SqlContext class to create the SqlCommand and SqlPipe instances; in fact, SqlPipe has a private constructor to prevent it being instantiated any other way. You can also see how the pipe sends a SqlDataReader straight back to the caller, with no need to walk through the results row by row.
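
For completeness, the full sequence from assembly to execution would look something like the following (the assembly path here is purely illustrative):

   -- Catalogue the assembly first (covered in the last instalment),
   -- then the procedure that lives inside it
   CREATE ASSEMBLY YukonCLR
      FROM 'C:\assemblies\YukonCLR.dll'

   CREATE PROCEDURE dbo.usp_get_author_info
      AS EXTERNAL NAME YukonCLR:[PubsSprocs]::GetBookInfo

   -- Executing the procedure streams the SELECT results to the caller
   EXEC dbo.usp_get_author_info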

Using a Command object in this way isn't the most scalable approach if the same command will be executed many times or by many users. The preferred route is to separate the command definition itself from the run-time parameters supplied to it. To do this, you create a SqlDefinition object containing a parameterised statement, and then instantiate a SqlExecutionContext object from the connection to supply the parameter values and execute the statement. Here's a cut-down code fragment showing this in practice:

   using System.Data;
   using System.Data.Sql;
   using System.Data.SqlServer;
   using System.Data.SqlTypes;

   public class YukonManagedClass
   {
      static readonly SqlDefinition insertStmt;

      static YukonManagedClass()
      {
         // Create the definition once, when the type is first loaded
         SqlCommand cmd = new SqlCommand();
         SqlParameter param;

         // Parameterised statement goes here...
         cmd.CommandText = "INSERT INTO MyTable " +
            " VALUES (@Param1)";
         param = cmd.Parameters.Add("@Param1",
            SqlDbType.NVarChar, 50);
         // Additional parameters here...

         insertStmt = new SqlDefinition(cmd);
      }

      public static void MyStoredProc()
      {
         SqlConnection conn = SqlContext.GetConnection();

         // Obtain an execution context for the definition from the
         // connection (Beta 1 API shape)
         using (SqlExecutionContext ex =
            conn.CreateExecutionContext(insertStmt))
         {
            // Supply run-time parameter values by ordinal, then execute
            ex.SetString(0, "some value");
            ex.ExecuteNonQuery();
         }
      }
   }
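
Cataloguing this procedure follows the same pattern as before. Assuming the assembly was catalogued under the same YukonCLR name, something along these lines would work (the procedure name here is invented):

   CREATE PROCEDURE dbo.usp_insert_dummy_row
      AS EXTERNAL NAME YukonCLR:[YukonManagedClass]::MyStoredProc

   EXEC dbo.usp_insert_dummy_row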

This fragment comes from a stored procedure I wrote to create some dummy data for a sample I put together. You can see the full stored procedure here if you're interested, along with some sample SQL to catalogue the procedure and test it.

I was going to write about table-valued functions and Whidbey / Yukon integration at this point, but I've simply run out of time. I'll have to save those for Part 4, which will probably appear after the Christmas holidays. In the meantime, have a meaningful and relaxing Christmas and a happy and prosperous New Year!

  • Looking at your full procedure, it's not clear how the parameters to the managed code get translated from the SQL - i.e. does the name RowsRequired 'require' a sql parameter of @RowsRequired, or is it done by param order? brian P.S. How can I get a beta1 - the last time I looked the main yukon website indicated it was not generally available to developers.
  • Brian, the parameter names are declared in the CREATE PROCEDURE .. AS EXTERNAL NAME statement - in exactly the same way as if the procedure itself was written in T-SQL. See http://www.sneath.org/tim/dummydata.sql.txt for the example of this. On the SQL side, you can give the parameters any name you like, so long as they match the data types of the managed code. Does that make sense? Perhaps I've not explained it well enough. Unfortunately Yukon Beta 1 is a fairly limited beta; there's still quite a lot in flux and we're not quite ready to take large quantities of beta testers' bug reports. Beta 2 is due out in the next few months and will be more generally available. I figured that if I wrote a couple of articles on the subject I could give people a general sense of what's coming. Hope this helps, Tim