In the first post of this three-part blog series I mentioned some of the improvements that make a SQL Server environment easier to manage: the Configuration Servers feature, the Resource Governor extensibility and the out-of-the-box Auditing solution. If you missed it, you can find it here. This last feature can also be classified as a data tracking technology, which gives me the opportunity to mention another interesting new one: Change Data Capture in SQL Server 2008. It allows us to capture information about changes and operations that occur in our database: DML commands (INSERT, UPDATE and DELETE) and some information related to ALTER TABLE operations.
What does this mean? If you execute, for instance, an UPDATE statement on a table, and you have configured SQL Server to track changes on that specific table (using the out-of-the-box Change Data Capture functionality), you will get an image of the updated record's state before and after the operation. Extra information, such as which columns were updated and which operation occurred (update, insert or delete), is also part of this image.
One important thing to note is that Change Data Capture is an asynchronous technology: changes become available only a couple of seconds after the commit has occurred. Everything is nevertheless transactionally consistent, because all the information needed to create the tracking data is taken from the transaction log, and, as you know, everything read from the transaction log has already been committed.
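To make this more concrete, here is a minimal sketch of how Change Data Capture is switched on; the database and table names (MyDb, dbo.publishers) are illustrative examples, not part of the original post:

```sql
-- Minimal sketch: enable Change Data Capture for a database and one table.
USE MyDb;
EXEC sys.sp_cdc_enable_db;            -- enable CDC at the database level

EXEC sys.sp_cdc_enable_table          -- enable CDC for a single table
    @source_schema = N'dbo',
    @source_name   = N'publishers',
    @role_name     = NULL;            -- NULL = no gating role; restrict this in production

-- Captured changes are then exposed through generated functions such as
-- cdc.fn_cdc_get_all_changes_dbo_publishers(@from_lsn, @to_lsn, N'all').
```

Once enabled, SQL Server creates a change table and query functions for you; you never read the transaction log directly.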
It’s now time to look at the T-SQL enhancements:
· First, the Query Editor now has IntelliSense support (you can disable it if you don’t like it). The Query Editor also includes a new Error List window that collects all errors while you are editing T-SQL statements.
· Compound assignment operators (+=, -=, *=, /=, %=) are now supported.
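A quick illustrative example (variable name chosen for this sketch):

```sql
DECLARE @counter INT = 10;
SET @counter += 5;    -- equivalent to SET @counter = @counter + 5
SET @counter *= 2;    -- @counter is now 30
SELECT @counter;
```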
· Something I have been waiting for (for a long time) is the ability to declare a variable and assign it a value in a single statement:
DECLARE @TODAYDATE DATETIME = GETDATE()
· There is a new syntax for the VALUES clause that allows you to insert multiple rows as a single atomic operation. Now you can do something like this:
INSERT INTO dbo.publishers (pub_id, pub_name) VALUES (1, 'pub1'),(2, 'pub2'),(3, 'pub3')
You can also generate an inline table expression (a table value constructor) in a simpler way:
SELECT * FROM (VALUES
(1, 'pub1', 'address1'),
(2, 'pub2', 'address2'),
(3, 'pub3', 'address3'),
(4, 'pub4', 'address4')
) AS publishers (pub_id, pub_name, pub_address)
No more UNION ALL needed! :-)
· A new parameter type, the table-valued parameter (TVP), has been introduced. What does this mean? You can now create a stored procedure with parameters of type table. What are the advantages? Think about what you do today when a client application needs to send multiple rows to a stored procedure. You can of course call the procedure once per element; or you can open a connection from the client application, create a temporary table (#table), call the procedure on the server, access the temp table inside the procedure and process all of its rows there. Another approach, available since SQL Server 2005, is to pass all the values to the procedure as a single XML parameter. The problem with this solution is that, once inside the procedure, you need to parse the XML in order to extract the values.
With SQL Server 2008 none of this is necessary anymore, because you can pass an entire table as an input parameter to a stored procedure (marked with the new READONLY keyword). With this solution you can pass many rows in one block and, at the same time, keep strong type checking.
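Here is a minimal sketch of the pattern; the type name (dbo.PublisherList) and procedure name (dbo.InsertPublishers) are illustrative, building on the publishers table used above:

```sql
-- Define a table type to use as a parameter.
CREATE TYPE dbo.PublisherList AS TABLE
(
    pub_id   INT PRIMARY KEY,
    pub_name VARCHAR(40)
);
GO

-- TVP parameters must be declared READONLY.
CREATE PROCEDURE dbo.InsertPublishers
    @publishers dbo.PublisherList READONLY
AS
BEGIN
    INSERT INTO dbo.publishers (pub_id, pub_name)
    SELECT pub_id, pub_name FROM @publishers;
END
GO

-- Calling it from T-SQL: fill a variable of the table type and pass it.
DECLARE @p dbo.PublisherList;
INSERT INTO @p VALUES (1, 'pub1'), (2, 'pub2');
EXEC dbo.InsertPublishers @publishers = @p;
```

From a client application (e.g. ADO.NET) you would pass a DataTable or DbDataReader as the parameter value; the round trip happens once for the whole set of rows.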
· The MERGE statement has also been introduced; it is a kind of UPSERT statement with the difference that it can also perform DELETE operations. With MERGE you can therefore execute a single atomic operation that contains logic for UPDATE, INSERT and DELETE. Instead of describing the command syntax (which is available in the SQL Server online documentation), let me mention a couple of scenarios where MERGE can be really useful: importing data into a data warehouse or, in general, any time you want to synchronize a destination table with a source of data. As an example, think about taking a file coming from a mainframe, loading it into a table and validating all the data. As a next step you could use a MERGE statement that defines the following rules: if a record doesn’t exist in the destination table, INSERT it; if it does, UPDATE it; if it exists in the destination table but not in the source table, DELETE it.
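The synchronization rules described above can be sketched like this; the staging table name (dbo.publishers_staging) is an assumption for the example:

```sql
-- Synchronize dbo.publishers (target) with a validated staging table (source).
MERGE dbo.publishers AS tgt
USING dbo.publishers_staging AS src
    ON tgt.pub_id = src.pub_id
WHEN MATCHED THEN                     -- exists in both: update
    UPDATE SET tgt.pub_name = src.pub_name
WHEN NOT MATCHED BY TARGET THEN       -- only in source: insert
    INSERT (pub_id, pub_name) VALUES (src.pub_id, src.pub_name)
WHEN NOT MATCHED BY SOURCE THEN       -- only in target: delete
    DELETE;
```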
· The INSERT command can now consume the result set produced by another DML statement and, what’s more interesting, you can filter that result set. With this new feature you can, for instance, run a DELETE with the OUTPUT clause, which returns a result set containing the deleted rows, and then use this result set (optionally applying a filter) as the input of an INSERT statement.
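A minimal sketch of this composable DML pattern; the archive table and the filter conditions are illustrative assumptions:

```sql
-- Delete rows and archive them in a single atomic statement.
INSERT INTO dbo.publishers_archive (pub_id, pub_name)
SELECT pub_id, pub_name
FROM (
    DELETE FROM dbo.publishers
    OUTPUT DELETED.pub_id, DELETED.pub_name   -- expose the deleted rows
    WHERE pub_id > 100
) AS deleted_rows
WHERE pub_name IS NOT NULL;                   -- optional filter on the captured rows
```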
This is the end of the second part. The third part, which I will publish next week, will focus on the new data types in SQL Server 2008.
If you are interested in SQL Server 2008 and want to get a deeper understanding of the new version, come to Wallisellen for our free “SQL Server 2008 for Developers (German)” TechTalk on June 24. Information and registration are available here.