• SQL Server Cast

    How to check Fragmentation on SQL Server 2005


    I have been asked this question several times: is there a tool in SQL Server where I can easily check database fragmentation? The answer, as with many other things, is both "Yes" and "No".

    Fragmentation is a "natural" process in the database lifecycle. If the database is frequently updated via INSERT, UPDATE, or DELETE statements, we can expect it to become fragmented over time. If database indexes are fragmented, the SQL Server query optimizer may make less-than-optimal decisions when using an index to resolve a query, affecting overall query performance. SQL Server provides tools to check and fix database fragmentation, but first we need to understand how to use these tools. This is a little more involved than simply firing up Windows defrag.exe to check for filesystem fragmentation.

    Let's start with some theory so we can better understand what fragmentation is and how it affects database performance. There are two different types of fragmentation in SQL Server: internal and external. Internal fragmentation is the result of index pages taking up more space than needed. It is like having a book where some of the pages are left blank; we do not know which pages are blank until we read the entire book, and the same applies to SQL Server, which has to read all the pages in the index, wasting extra time and server resources on the empty pages. External fragmentation occurs when the pages of an index are not contiguous. Following the book analogy, it is like having a book whose pages are not ordered in a logical way (page 1, then page 2, then page 3, and so on), forcing you to jump back and forth to piece the information together and make sense of the reading. Heavily used tables that contain fragmented indexes will impact your database performance. If you are still unsure about what external and internal fragmentation mean, refer to this article published in SQL Server Magazine by Kalen Delaney (although written for SQL Server 2000, the definitions are still valid for SQL Server 2005).

    In our minds we associate fragmentation with something bad that should be avoided at any cost, but is this always the case? Not with internal fragmentation. If your tables are frequently changed via UPDATE and INSERT operations, keeping a small amount of free space on the index or data pages (that is, a small amount of internal fragmentation) avoids a new page addition (page split) every time new data has to be allocated. Page splits ultimately lead to external fragmentation, since the newly added page will probably not be adjacent to the original page. Internal fragmentation, therefore, can be desirable at low levels in order to avoid frequent page splits, while external fragmentation should always be avoided. The amount of free space reserved on an index can be controlled using the fill factor.
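    As a sketch of how the fill factor is set (the index name and column here are just illustrative placeholders against the AdventureWorks sample database), an index can be created or rebuilt leaving, say, 10% free space on each leaf-level page:

    ```sql
    -- Create an index leaving 10% free space on each leaf-level page
    -- (FILLFACTOR = 90 means pages are filled to 90%)
    CREATE NONCLUSTERED INDEX IX_Employee_LoginID
    ON HumanResources.Employee (LoginID)
    WITH (FILLFACTOR = 90);

    -- The same option can be applied when rebuilding an existing index
    ALTER INDEX IX_Employee_LoginID
    ON HumanResources.Employee
    REBUILD WITH (FILLFACTOR = 90);
    ```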

    It is also important to understand that by external fragmentation we do not mean filesystem fragmentation or disk fragmentation.

    SQL Server 2005 introduces a new DMV (Dynamic Management View) to check index fragmentation levels: sys.dm_db_index_physical_stats. Although SQL Server 2005 still supports the SQL Server 2000 DBCC SHOWCONTIG command, this feature will be removed in a future version of SQL Server. Here you can check the differences between both instructions when checking for fragmentation on the HumanResources.Employee table in the sample database AdventureWorks:

    • DBCC SHOWCONTIG example:

    USE AdventureWorks;
    DBCC SHOWCONTIG ('HumanResources.Employee')

    DBCC SHOWCONTIG scanning 'Employee' table...
    Table: 'Employee' (869578136); index ID: 1, database ID: 6
    TABLE level scan performed.
    - Pages Scanned................................: 7
    - Extents Scanned..............................: 3
    - Extent Switches..............................: 2
    - Avg. Pages per Extent........................: 2.3
    - Scan Density [Best Count:Actual Count].......: 33.33% [1:3]
    - Logical Scan Fragmentation ..................: 14.29%
    - Extent Scan Fragmentation ...................: 33.33%
    - Avg. Bytes Free per Page.....................: 172.6
    - Avg. Page Density (full).....................: 97.87%

    • sys.dm_db_index_physical_stats DMV example:

    USE AdventureWorks
    SELECT object_id, index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), OBJECT_ID('HumanResources.Employee'), NULL, NULL, NULL);

    In this last example I have selected only the relevant columns to show from the DMV; you will see that the DMV can provide many more details about the index structure. In case you wanted to show fragmentation details for all the objects in the AdventureWorks database, the command would be as follows:

    SELECT *
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL , NULL);

    Please, refer to SQL Server 2005 Books Online for more information on sys.dm_db_index_physical_stats syntax.

    How do we know if our database is fragmented? We have to pay attention to the avg_fragmentation_in_percent value. A value between 5% and 30% indicates moderate fragmentation, while any value over 30% indicates high fragmentation (like a book whose pages are in no particular order).

    The avg_page_space_used_in_percent value is also worth looking at closely. It represents the amount of space used in the index pages. A value below 75% is usually associated with internal fragmentation (more blank pages in our book than recommended).
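    Note that, if I recall the DMV's behavior correctly, avg_page_space_used_in_percent is returned as NULL in the default LIMITED scan mode, so the last parameter has to request a SAMPLED or DETAILED scan for this column to be populated:

    ```sql
    USE AdventureWorks;
    SELECT object_id, index_id,
           avg_fragmentation_in_percent,   -- external fragmentation
           avg_page_space_used_in_percent  -- internal fragmentation (page fullness)
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),
         OBJECT_ID('HumanResources.Employee'), NULL, NULL, 'DETAILED');
    ```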

    In order to reduce fragmentation we will have to reorganize or rebuild the indexes. Choosing between reorganizing and rebuilding depends on the resulting values: for moderate fragmentation, index reorganization will be enough; for heavily fragmented indexes, a rebuild is needed. The following table summarizes when to use each one (refer to this Books Online article for in-depth coverage of this information):

    Reference Values (in %)                     Action       SQL statement

    avg_fragmentation_in_percent > 5 AND < 30   Reorganize   ALTER INDEX ... REORGANIZE

    avg_fragmentation_in_percent > 30           Rebuild      ALTER INDEX ... REBUILD

    The REORGANIZE statement is always executed online, while an index REBUILD is executed offline by default and can optionally be run while the database is being accessed by users with the ONLINE option, which can be a plus for 24x7 environments. This approach, which is only available in SQL Server 2005, has some limitations; refer to the ALTER INDEX statement in SQL Server Books Online for more details.

    If you are looking for an easy way to automate these processes, the SQL Server Books Online reference for sys.dm_db_index_physical_stats contains a sample script you can implement within minutes. This script takes care of reorganizing any index where avg_fragmentation_in_percent is at or below 30% and rebuilding any index where this value is over 30% (you can change these thresholds for your specific needs). Add a new Execute T-SQL Statement task containing this script to your weekly or daily maintenance plan so you can keep your database fragmentation at an optimum level.
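    A minimal sketch of such a script (simplified from the Books Online sample; the 10%/30% thresholds and the cursor-based approach are just one way of doing it) could look like this:

    ```sql
    -- Reorganize moderately fragmented indexes, rebuild heavily fragmented ones.
    -- Thresholds (10% and 30%) are illustrative; adjust them to your needs.
    DECLARE @sql nvarchar(max);
    DECLARE c CURSOR FOR
        SELECT 'ALTER INDEX [' + i.name + '] ON ['
               + SCHEMA_NAME(o.schema_id) + '].[' + o.name + '] '
               + CASE WHEN s.avg_fragmentation_in_percent > 30
                      THEN 'REBUILD' ELSE 'REORGANIZE' END + ';'
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
        JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
        JOIN sys.objects o ON o.object_id = s.object_id
        WHERE s.avg_fragmentation_in_percent > 10
          AND s.index_id > 0;  -- skip heaps
    OPEN c;
    FETCH NEXT FROM c INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_executesql @sql;
        FETCH NEXT FROM c INTO @sql;
    END
    CLOSE c;
    DEALLOCATE c;
    ```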

  • SQL Server Cast

    Open Windows Firewall Ports for SQL Server the Easy Way


    After installing a new SQL Server 2008 instance on my Windows 7 laptop, I found myself unable to log on locally to the instance using SQL Server Management Studio (SSMS). I knew that, due to the firewall changes introduced in Windows Server 2008 and Windows Vista, this was probably caused by a missing open port.

    The Configuring the Windows Firewall to Allow SQL Server Access Books Online entry contains the information needed to open the required ports in Windows Firewall for each of the SQL Server services. However, I recently discovered a much easier way to open the required Windows Firewall ports for SQL Server in the following Knowledge Base article (KB968872):


    This article includes the Microsoft "Fix it" troubleshooter, which allows you to quickly fix the problem using an easy-to-follow setup. The "Fix it" button is designed for Windows Server 2008, but the script that the setup executes is included in the same KB article and can be run on Windows Vista or Windows 7 as well:

    @echo =========  SQL Server Ports  ===================
    @echo Enabling SQLServer default instance port 1433
    netsh firewall set portopening TCP 1433 "SQLServer"
    @echo Enabling Dedicated Admin Connection port 1434
    netsh firewall set portopening TCP 1434 "SQL Admin Connection"
    @echo Enabling conventional SQL Server Service Broker port 4022 
    netsh firewall set portopening TCP 4022 "SQL Service Broker"
    @echo Enabling Transact-SQL Debugger/RPC port 135
    netsh firewall set portopening TCP 135 "SQL Debugger/RPC"
    @echo =========  Analysis Services Ports  ==============
    @echo Enabling SSAS Default Instance port 2383
    netsh firewall set portopening TCP 2383 "Analysis Services"
    @echo Enabling SQL Server Browser Service port 2382
    netsh firewall set portopening TCP 2382 "SQL Browser"
    @echo =========  Misc Applications  ==============
    @echo Enabling HTTP port 80
    netsh firewall set portopening TCP 80 "HTTP"
    @echo Enabling SSL port 443
    netsh firewall set portopening TCP 443 "SSL"
    @echo Enabling port for SQL Server Browser Service's 'Browse' Button
    netsh firewall set portopening UDP 1434 "SQL Browser"
    @echo Allowing multicast broadcast response on UDP (Browser Service Enumerations OK)
    netsh firewall set multicastbroadcastresponse ENABLE


    Executing this on Windows 7 will show several warnings, since the netsh firewall command is deprecated in this Windows version (netsh advfirewall firewall is the recommended method), but the script will perform its function nevertheless.
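    If you prefer to avoid the deprecation warnings, the same openings can be expressed with the newer syntax. As a sketch (the rule names are arbitrary, and I have only translated the first few ports here):

    ```
    @echo Enabling SQL Server default instance port 1433
    netsh advfirewall firewall add rule name="SQLServer" dir=in action=allow protocol=TCP localport=1433
    @echo Enabling Dedicated Admin Connection port 1434
    netsh advfirewall firewall add rule name="SQL Admin Connection" dir=in action=allow protocol=TCP localport=1434
    @echo Enabling SQL Server Browser Service UDP port 1434
    netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434
    ```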

    Kudos for the Microsoft "Fix it" team!

  • SQL Server Cast

    Enabling Certificate for SSL on a SQL Server 2005 Clustered Installation

    This week I have been working on a case about the installation of a certificate on a SQL Server cluster. My customer wanted to use certificates to encrypt client connections via Secure Sockets Layer (SSL), a method supported by SQL Server 2005 that allows securing communications between clients and database servers (you can read more about this functionality in Books Online). Although my customer was already using SSL on standalone SQL Server servers, he ran into problems installing the certificate on a new cluster. While setting up the certificate and encryption on a standalone installation is pretty straightforward (see KB316898), a cluster installation differs and requires extra configuration steps.

    Obtaining the certificate

    Since I was not familiar with certificate installation on a clustered SQL Server instance, I started by reading the available documentation in SQL Server Books Online, and more specifically, the how-to provided in the Enable Encrypted Connections to the Database Engine (SQL Server Configuration Manager) article and the Microsoft Knowledge Base article KB316898, How to enable SSL encryption for an instance of SQL Server by using Microsoft Management Console (this one is intended only for standalone installations).

    First I needed a certificate to do the testing. I started by installing a stand-alone root certification authority (CA) on my lab domain following the guidelines described here. I also installed the convenient web enrollment feature, which allows a client to request a certificate from a CA using a web browser (if you do not want to deal with all the CA stuff, read "A Simpler Method to Obtain Certificates for Testing" at the end of this post).

    I requested the certificate from one of the SQL Server nodes. According to the Books Online article, the certificate should be installed using the SQL Server service account, and it has to be placed in the local computer > personal certificates store:

    Certificates are stored locally for the users on the computer. To install a certificate for use by SQL Server, you must be running SQL Server Configuration Manager under the same user account as the SQL Server service unless the service is running as LocalSystem, NetworkService, or LocalService, in which case you may use an administrative account.

    This is because the certificate contains a private key that needs to be accessed by the SQL Server service account for the SSL communication to work. If we use any other user account (say, the local administrator), the SQL Server service account will not be able to access the private key and the authentication will fail. The certificate can optionally be installed in the current user certificate store, but installing it in the computer certificate store makes the certificate available to any user.

    I logged on to the server using the SQL Server service account (since this is a cluster, this has to be a domain account). I opened the web enrollment site (by default under http://CAName/certsrv, where CAName is the Certification Authority's hostname) and requested a new Server Authentication certificate. The most important thing at this point is to specify the fully qualified DNS name of the clustered instance in the "Name" property of the certificate; the SSL encryption will not work if we specify the name of the cluster node:

    If you want to use encryption with a failover cluster, you must install the server certificate with the fully qualified DNS name of the failover clustered instance on all nodes in the failover cluster. For example, if you have a two-node cluster, with nodes named test1.yourcompany.com and test2.yourcompany.com and a failover clustered instance of SQL Server named fcisql, you must obtain a certificate for fcisql.yourcompany.com and install the certificate on both nodes.

    In my case, the SQL Network Name cluster resource was called "SQL-2005", so I specified sql-2005.contoso.lab as the Name property:

    Certificate Request using Web Enrollment


    While requesting the certificate, make sure that the option "Mark keys as exportable" is enabled so the private key is available when exporting and importing the certificate on the rest of the cluster nodes. I also checked "Store certificate in the local computer certificate store" so I would not have to import the certificate later. After submitting the certificate request, I went back to my CA and approved the request. Finally, I returned to the web enrollment site to install the certificate.

    At this point the server authentication certificate was installed, but I still needed to install the root CA certificate; the Books Online article linked earlier explains why this is needed:

    If the instance of SQL Server is running on a computer that has been assigned a certificate from a public certification authority, identity of the computer and the instance of SQL Server is vouched for by the chain of certificates that lead to the trusted root authority. Such server validation requires that the computer on which the client application is running be configured to trust the root authority of the certificate that is used by the server.

    If the root CA certificate is not installed, the SQL virtual server certificate cannot be verified against a trusted certification authority; this can be quickly checked by accessing the installed certificate's properties:

    Certificate Properties


    I obtained a root certificate from my test Certification Authority and installed it in the Trusted Root Certification Authorities store for the local computer.

    The certificate must meet some requirements to be used for SSL (the requirements can be found in the Certificate Requirements section of this Books Online article). To make sure this certificate fulfilled all the requirements, I exported it to a local directory and checked its details using the certutil.exe command-line utility (more information about this utility can be found here). Additionally, I also exported the SSL certificate from the first cluster node (we will need to import this certificate on each of the other cluster nodes later).
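    As a sketch (the file name here is just an example), dumping the exported certificate shows its properties, including the subject name and key usage, so they can be checked against the requirements:

    ```
    rem Dump the details of an exported certificate file
    certutil.exe -dump sql-2005.cer

    rem List the certificates (and their thumbprints) in the local
    rem computer personal store
    certutil.exe -store MY
    ```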

    Making the Certificate Work for SSL

    The certificate used by SQL Server to encrypt connections is specified in the following registry key:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.x\MSSQLServer\SuperSocketNetLib\Certificate

    This key contains a property of the certificate known as the thumbprint, which uniquely identifies each certificate on the server. I found out that in a SQL Server clustered installation this key is not correctly updated with the corresponding certificate thumbprint. In my case, after importing the certificate, the registry key showed a null value:

    Certificate Registry Key in SQL Server


    This registry key should contain the thumbprint property of the certificate we want to use, so I copied the thumbprint from the certificate's Details tab and pasted it into Notepad to remove the extra spaces:

    Certificate Thumbprint


    and finally copied the resulting thumbprint string into the "Certificate" registry key (the thumbprint must be entered with no extra spaces):
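    The same change can be scripted; this is a sketch in which MSSQL.1 and the thumbprint value are placeholders for your actual instance ID and certificate thumbprint:

    ```
    rem Write the certificate thumbprint (no spaces) into the SQL Server
    rem SuperSocketNetLib key; MSSQL.1 and the /d value are placeholders
    reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\SuperSocketNetLib" ^
        /v Certificate /t REG_SZ /d 0123456789abcdef0123456789abcdef01234567 /f
    ```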

    Editing certificate thumbprint in Notepad


    Right after the registry change, I failed the clustered instance over to the second cluster node and rebooted the first cluster node.

    On the second cluster node I imported both the certificate issued for the virtual SQL Server and the root CA certificate. I performed the previous steps to get the certificates installed and the thumbprint string copied into the corresponding registry key. Finally, I rebooted this second cluster node as well. Once all the cluster nodes were up and running, I checked that all of them had the same thumbprint string in the "Certificate" registry key. If your certificate has been correctly configured, you will see the following message logged in the SQL Server ERRORLOG file on the next service startup:

    2008-02-18-23:37:25.01 Server     The certificate was successfully loaded for encryption

    To enable SSL in SQL Server, I changed the SQL Server network protocol properties for the clustered instance, setting the "Force Encryption" option to "Yes". You will need to restart the SQL Server service for the change to take effect, but once that is done, all communication attempts with your database will need to be encrypted:

    Protocol Properties Configuration


    Note that if you try to select the corresponding SSL certificate on the "Certificate" tab of the SQL Server Protocols properties, you will see that the installed certificate does not show up. This behavior is a known issue in a clustered installation. SQL Server Configuration Manager searches by default in the local computer personal certificates store and tries to match an existing certificate with the fully qualified domain name (FQDN) of the local computer. Since the installed certificate is associated not with the cluster node FQDN but with the virtual SQL Server FQDN, the corresponding certificate is not shown in the GUI. To verify which certificate is in use for SSL, check the corresponding thumbprint string in the aforementioned registry key.

    In addition to the server, the client also has to be configured to support communication over SSL. To do so, I imported the certificate on the client computer using the Certificates snap-in and set "Force protocol encryption" to "Yes" under the SQL Native Client Configuration (this option is available via the "Flags" page).

    Testing the Connection

    From my client PC I tested the connection to the remote SQL Server using the "Encrypt connection" option available in SQL Server Management Studio's connection properties. To verify that the connection was indeed encrypted, I fired up a network analyzer to view the traffic between my computer and the remote SQL Server. The SSL protocol was indeed being used in this connection, as we can see in the Network Monitor screen capture:


    In this post I have tried to provide the procedure required to install and configure a certificate for SSL on a clustered SQL Server installation. The links included in this post will give you a better understanding of this procedure and a much better foundation to implement encrypted client-server communications in your infrastructure.

    A Simpler Method to Obtain Certificates for Testing

    If you have trouble finding a Certification Authority for your tests, or if you do not want to deal with the CA installation and configuration process, you can use the MakeCert.exe certificate creation tool. This Books Online article describes how to quickly set up the certificates you need for your own testing using MakeCert.exe.

  • SQL Server Cast

    Synchronize two tables using SQL Server Integration Services (SSIS)–Part I of II


    There are situations where a SQL Server DBA needs to keep two different tables in sync. Usually, the DBA needs to keep a copy of a table in a data warehouse repository that is used as a solution for archiving and/or reporting.

    SQL Server provides a robust method for keeping data synchronized across databases using replication, but there are situations when all we need is to keep an online copy of a single table for archiving or reporting purposes, and we would prefer not to tinker with SQL Server replication.

    This post is divided into two parts: Part I explains how to update a destination table with the information that is added to a source table, while Part II explains how to replicate any change to existing records from the source table to the destination. Please take into account that this is just one way of doing it; there are many other possibilities, but the one described here is probably one of the fastest to implement and modify on demand.

    This procedure is based on the following scenario: A “table A” on “database A” is periodically replicated using SSIS into “table B” on “database B”. “Table A” is updated with new records and we need to copy those records into “table B” too. Our final implementation will look like this in SQL Server Business Intelligence Development Studio:



    Let’s see how this works:

    1. “Source Table” is “table A” in “database A”, while “Dest Table” is the destination “table B” in “database B”. We start by creating two different OLE DB connections in the Data Flow component in SSIS, using the source and destination tables as data sources.

    2. We need to perform a JOIN operation on the two data sources to copy the information we need from one table into the other. For this JOIN to work correctly, the data has to be sorted; this is described in the following MSDN excerpt:

    In Integration Services, the Merge and Merge Join transformations require sorted data for their inputs. The input data must be sorted physically, and sort options must be set on the outputs and the output columns in the source or in the upstream transformation. If the sort options indicate that the data is sorted, but the data is not actually sorted, the results of the merge or merge join operation are unpredictable.

    The “Merge Join” operation is where we separate the data that has been added to the source table (the rows we need) from the data that has not (the rows we do not need). In our case the source table (on the left) contributes all the columns we want to copy, while the destination table (on the right) contributes only the record that corresponds to the Primary Key column “No_”. Here is the description of this task:


    Here is the key part of the process: the Left Outer Join retrieves all the records in the source table, but for those that do not exist in the destination table, the join key column “No_” is returned as NULL. This is also described in the product documentation:

    To include all products, regardless of whether a review has been written for one, use an ISO left outer join. The following is the query:

    USE AdventureWorks2008R2;
    SELECT p.Name, pr.ProductReviewID
    FROM Production.Product p
    LEFT OUTER JOIN Production.ProductReview pr
    ON p.ProductID = pr.ProductID

    The LEFT OUTER JOIN includes all rows in the Product table in the results, whether or not there is a match on the ProductID column in the ProductReview table. Notice that in the results where there is no matching product review ID for a product, the row contains a null value in the ProductReviewID column.
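    The same filter the SSIS package implements (new rows only) can also be expressed in plain T-SQL; this is a sketch assuming the hypothetical table, database and column names used in this post (“table A”, “table B” and the “No_” key column):

    ```sql
    -- Copy into TableB only the rows of TableA that are not there yet
    INSERT INTO DatabaseB.dbo.TableB (No_, Col1, Col2)
    SELECT a.No_, a.Col1, a.Col2
    FROM DatabaseA.dbo.TableA AS a
    LEFT OUTER JOIN DatabaseB.dbo.TableB AS b
        ON a.No_ = b.No_
    WHERE b.No_ IS NULL;  -- NULL join key = row does not exist in destination
    ```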

    3. Then we have to split the data we need from the data we do not need. For this we use a “Conditional Split” task that keeps the information only for those records where the join key “No_” was NULL, i.e. only the new records. Here is a description of this conditional split task:


    4. Finally, we INSERT the resulting data from the Conditional Split task into the destination table, which is the same table we used as the second data source at the very beginning.

    Because the Join task is configured this way, you can rerun the package as many times as you want: records already present in the destination table will not be duplicated with information from the source table; only the new records will be copied into the destination table.

  • SQL Server Cast



    This time I want to discuss a support case that took me some days to fix. This case may be of interest to anyone having problems connecting a SQL Server installation to an Oracle data source using SQL Server Integration Services (SSIS).

    In this case my customer was trying to connect to an Oracle server in order to copy several tables into SQL Server 2005. SQL Server Integration Services is a great tool for this purpose: you can configure how the copy job takes place using SQL Server Business Intelligence Development Studio (aka BIDS) and check from a user-friendly interface how each step is executed. BIDS is in fact a version of Visual Studio 2005 (the executable name is devenv.exe) with the components required to work with the different SQL Server services and components.

    My customer was executing BIDS from her own computer without issues, but ran into a problem while working with BIDS from the server where SQL Server was installed. The errors she was getting were:

    [Connection manager "mib.bigcorp.com.gca"] Error: SSIS Error Code DTS_E_OLEDBERROR. 
    An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available.
    Source: "Microsoft OLE DB Provider for Oracle"  Hresult: 0x80004005
    Description: "ORA-06413: Connection not open.".

    [OLE DB Source [1]] Error:
    The AcquireConnection method call to the connection manager "mib.bigcorp.com.gca"
    failed with error code 0xC0202009. 
    There may be error messages posted before this with more information on why the
    AcquireConnection method call failed.

    Since the customer was using the Microsoft OLE DB Provider for Oracle to set up the connection, I did some research in the available documentation on the Microsoft Support website using the terms "oracle", "oledb" and "provider". I found several documents discussing different problems, but one of them was of special interest here:

    INFO: Limitations of Microsoft Oracle ODBC Driver and OLEDB Provider (KB244661)

    SUMMARY: This article describes the limitations of the current version of Microsoft ODBC Driver and OLE DB Provider for Oracle. These components are currently in maintenance mode. No new updates are planned for future releases of these components including their support against versions later than Oracle 8i.

    According to this article, it is not recommended to use the Microsoft OLE DB Provider for Oracle if the Oracle server is a version later than 8i (my customer was using version 9i). This does not mean that the Microsoft OLE DB Provider does not work at all (indeed, I was able to preview the remote table on the Oracle server and test connectivity successfully from within SSIS using this provider), but the functionality offered is reduced and there was a good chance some incompatibility problem would appear.

    I asked the customer to install the Oracle client on the server (different versions of the Oracle client software are available here). This client provides the OLE DB provider from Oracle that enables connectivity to an Oracle data source. Only some hours later the customer got back to me saying, "I have installed the Oracle client, but the only Oracle OLE DB component I am able to select in SSIS is the Microsoft one." OK, I knew that one from a different support incident I had in the past: the SSIS designer (BIDS) runs as a 32-bit application, even on a 64-bit server, as you can quickly check using Windows Task Manager:


    This means that, unless you install the 32-bit Oracle client on the 64-bit server, you won't be able to access the Oracle OLE DB provider. This applies not only to the Oracle OLE DB provider, but to any other 32-bit provider. You can read a full explanation of why you need the 32-bit components on your 64-bit server here:

    The 32-bit SSIS Designer [BIDS] displays only 32-bit providers that are installed on the local computer. To configure a connection manager to use a specific 64-bit provider, you must install the 32-bit version of the provider on the development computer for use at design time. Even though the 32-bit version of the provider is installed, you can still run the package in 64-bit mode, both in the development environment and after deployment. The 32-bit and 64-bit versions of a provider have the same ID. Therefore, the SSIS runtime will select the appropriate version of the provider to use.

    But having the 32-bit components is not enough; we need both the 32-bit and 64-bit components installed side by side. With only the 32-bit Oracle client the package will still fail when executed on the server; this is because there is only one SSIS service on a server, and it is indeed a 64-bit process when running on a 64-bit server, as we can see in this Process Explorer screen capture (the process name is MsDtsSrvr.exe):



    OK, we now have the SSIS package using the Oracle OLE DB provider from Oracle, and we have verified that both the 32-bit and 64-bit versions of the Oracle client are installed on the Windows server.

    However, the package kept failing with the same error.

    After several hours tracing down the error with the help of Sysinternals Process Explorer, Process Monitor and the Internet, we found a possible cause. Apparently, some Oracle client versions are affected by a bug (Oracle bug number 3807408) that causes the client networking layer to incorrectly parse program locations that contain parentheses in the path. You can find a brief discussion of this behavior in this thread on the Oracle forums.

    As you probably know, on a Windows x64 installation the default program files directory is "C:\Program Files\", but all the 32-bit components are installed in the "C:\Program Files (x86)\" directory. Since my customer did not have access to Oracle support at that time, we decided to test whether this was the root cause of our problem by copying the contents of the "C:\Program Files (x86)\Microsoft Visual Studio 8" folder to a new program files folder called "C:\ProgramFilesx86". We started BIDS (devenv.exe) from the new directory and executed the package both from BIDS and SQL Server, this time successfully.

    The trick did the job, and my customer was able to successfully import the required tables into SQL Server. However, I do not think this is a good (much less an elegant) solution, and I recommended that she contact Oracle support for a version of the Oracle client not affected by this bug.

  • SQL Server Cast

    “SQLBrowser start request failed” Error Installing SQL Server 2008


    I recently worked on a very tricky SQL Server 2008 setup problem and I think it is worth sharing here what I discovered. In this case my customer was trying to install SQL Server 2008 on a Windows XP SP3 computer to support a locally installed ERP software package.

    The SQL Server 2008 setup was working fine at the beginning but failing in the middle of the process with the following error message:

    SQLBrowser start request failed. Click Retry to retry the failed action, or click Cancel to cancel this action and continue setup

    Clicking on the Retry button retried the start operation, but it failed again with the same error, so we had no other option but to cancel the setup process.

    Checking the Windows System Event log did not provide any hint about the root cause of the problem, so I decided to check the SQL Server 2008 setup logs, which can be found in the C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log folder. There are different files in this folder, and each one provides information about one of the components installed (SSIS, AS, Engine, Client Tools, etc.). The Detail.txt log file, as the name implies, includes very detailed information about the overall setup process and is usually the first thing I look into when dealing with setup issues; here is what we found in our case:

    2009-05-13 17:25:13 SQLBrowser: The last attempted operation: Starting the SQL Server Browser
    service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete.
    2009-05-13 17:25:13 Slp: Prompting user if they want to retry this action due to the following failure:
    2009-05-13 17:25:13 Slp: --------------------------------------
    2009-05-13 17:25:13 Slp: The following is an exception stack listing the exceptions in outermost to innermost order
    2009-05-13 17:25:13 Slp: Inner exceptions are being indented
    2009-05-13 17:25:13 Slp:
    2009-05-13 17:25:13 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
    2009-05-13 17:25:13 Slp: Message:
    2009-05-13 17:25:13 Slp: Service ‘SQLBrowser’ start request failed.
    2009-05-13 17:25:13 Slp: Data:
    2009-05-13 17:25:13 Slp: Feature = SQL_Browser_Redist_SqlBrowser_Cpu32
    2009-05-13 17:25:13 Slp: Timing = Startup
    2009-05-13 17:25:13 Slp: DisableRetry = true
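
    Detail.txt can run to thousands of lines, so it helps to surface the failure-related entries first. A minimal sketch in Python, using a hypothetical keyword filter over a fragment modeled on the log above:

    ```python
    # Sample text mimicking the Detail.txt format quoted in this post.
    sample_log = """\
    2009-05-13 17:25:13 SQLBrowser: The last attempted operation: Starting the SQL Server Browser
    2009-05-13 17:25:13 Slp: Prompting user if they want to retry this action due to the following failure:
    2009-05-13 17:25:13 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
    """

    def failure_lines(log_text):
        """Return the lines that usually matter first: failures and exceptions."""
        keywords = ("failure", "exception", "error")
        return [line for line in log_text.splitlines()
                if any(k in line.lower() for k in keywords)]

    for line in failure_lines(sample_log):
        print(line)
    ```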

    So I could see that the SQLBrowser service was indeed being created but could not be started for some reason. I then decided to check the Summary.txt setup log file for a more comprehensive list of which components were and were not installed successfully; this file can also be found under the \LOG folder and contains a brief summary of the actions performed by the SQL Server setup. Interestingly enough, the error in this summary setup log file was not pointing to the Browser service but to MSXML 6.0 (the Microsoft XML engine):

    Detailed results:
    Feature:                       Database Services
    Status:                        Failure                    
    MSI status:                    Failure
      MSI error code:                0x5EBE5729
      MSI log file location:         C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20090514_170659\Msxml6_Cpu32_1.log

    So I looked into this setup log file (Msxml6_Cpu32_1.log):

    MSI (s) (30:58) [17:22:37:661]: Note: 1: 1708
    MSI (s) (30:58) [17:22:37:661]: Product: MSXML 6.0 Parser (KB933579) – Installation error.

    MSI (s) (30:58) [17:22:37:661]: Windows Installer installed the product. Product Name: MSXML 6.0 Parser (KB933579). Product Version: 6.10.1200.0. Product Language: 3082. Installation success or error status: 1603.

    MSI (s) (30:58) [17:22:37:661]: Cleaning up uninstalled install packages, if any exist
    MSI (s) (30:58) [17:22:37:661]: MainEngineThread is returning 1603
    MSI (s) (30:68) [17:22:37:770]: No System Restore sequence number for this installation.

    As you can see, SQL Server Browser was not the only component failing during the setup; the MSXML 6.0 engine was failing too. Comparing the times from both the Detail.txt and Msxml6_Cpu32_1.log files, I could see that the latter was older; in other words, the MSXML setup error was happening before the SQL Server Browser error. I went back to the Detail.txt file to confirm this point:

    2009-05-13 17:22:36 Slp: Running Action: Install_Msxml6_Cpu32_Action
    2009-05-13 17:22:36 Slp: Target package: "D:\x86\setup\x86\msxml6.msi"
    2009-05-13 17:22:37 Slp: InstallPackage: MsiInstallProduct returned the result code 1603.
    2009-05-13 17:22:38 Slp: Sco: Attempting to write hklm registry key SOFTWARE\Microsoft\Microsoft SQL Server to file C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20090514_170659\Registry_SOFTWARE_Microsoft_Microsoft SQL Server.reg_
    2009-05-13 17:22:38 Slp: Sco: Attempting to write hklm registry key SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall to file C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20090514_170659\Registry_SOFTWARE_Microsoft_Windows_CurrentVersion_Uninstall.reg_
    2009-05-13 17:22:38 Slp: Sco: Attempting to write hklm registry key SOFTWARE\Microsoft\MSSQLServer to file C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20090514_170659\Registry_SOFTWARE_Microsoft_MSSQLServer.reg_
    2009-05-13 17:22:43 Slp:
    2009-05-13 17:22:43 Slp: Watson bucket for Msi based failure has been created
    2009-05-13 17:22:43 Slp: No retry-able MSI return code detected.
    2009-05-13 17:22:43 Slp: Checkpoint: INSTALL_MSXML6_CPU32_ACTION
    2009-05-13 17:22:43 Slp: Completed Action: Install_Msxml6_Cpu32_Action, returned False
    2009-05-13 17:22:43 Slp: Error: Action "Install_Msxml6_Cpu32_Action" failed during execution.
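
    The ordering of the two errors can also be double-checked programmatically by comparing the timestamps; a minimal sketch in Python, using the timestamp format from these logs:

    ```python
    from datetime import datetime

    # Timestamps taken from Msxml6_Cpu32_1.log and Detail.txt respectively.
    fmt = "%Y-%m-%d %H:%M:%S"
    msxml_error = datetime.strptime("2009-05-13 17:22:37", fmt)
    browser_error = datetime.strptime("2009-05-13 17:25:13", fmt)

    print(msxml_error < browser_error)   # True: the MSXML failure came first
    print(browser_error - msxml_error)   # 0:02:36 between the two errors
    ```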

    Checking in Add/Remove Programs in Control Panel, I could see the MSXML 6.0 engine listed as installed. This is one of the shared components installed by SQL Server 2008, but in my case the customer was unsure whether this component had been previously installed on the Windows XP computer. I decided to remove the MSXML 6.0 engine from the PC using the Add/Remove Programs Control Panel applet and try the SQL Server 2008 setup again. This time the installation completed with no errors.

    In this case the initial SQL Server Browser error was misleading because it was the first error shown by the Setup GUI. Only by looking inside the SQL Server setup log files were we able to discover the MSXML 6.0 installation issue that took place before it.

    Another possible way of tracking down this error is by looking into the Watson Bucket log file that exists in the same setup “Log” directory:

    In our case this was the content of this file:

    Watson bucket data:
      Bucket param 1: 10.0.1600.22
      Bucket param 2: 6.10.1200.0
      Bucket param 3: msxml6.msi
      Bucket param 4: 0x2D2816FE
      Bucket param 5: 0x5EBE5729
      Bucket param 6: Install_Msxml6_Cpu32_Action
      Bucket param 7:
      Bucket param 8:
      Bucket param 9:
      Bucket param 10:

    I have reviewed the SQL Server 2008 SP1 release notes and this issue has been fixed there, so another way to avoid this problem would be to install from slipstreamed SQL Server 2008 media. Slipstreaming is a new SQL Server functionality that allows the database administrator to integrate Service Packs into the installation media.

  • SQL Server Cast

    Using Windows Cluster log to troubleshoot SQL Server cluster errors


    There are times when all the SQL Server knowledge in the world won't help us find a solution to our cluster problem. The following post will show you how to use the Microsoft Windows cluster log file to find the root cause of a common SQL Server issue (it does not matter which SQL Server version we use for the purposes of this post).

    In this case the customer called Microsoft PSS due to a problem in which a SQL Server 2000 clustered instance was not starting up for an unknown reason. Looking at the customer's Cluster Administrator, this was the situation of the SQL Server 2000 cluster group, called "SHILOH":


    We tried to diagnose what was wrong using the SQL Server ERRORLOG file, but we could not find any useful information there. After a few seconds online, the SQL Server service was stopped by the Service Control Manager for no apparent reason:

    2008-07-27 17:28:04.51 server    Copyright (C) 1988-2002 Microsoft Corporation.
    2008-07-27 17:28:04.51 server    All rights reserved.
    2008-07-27 17:28:04.51 server    Server Process ID is 304.
    2008-07-27 17:28:04.51 server    Logging SQL Server messages in file 'S:\Program Files\Microsoft SQL Server\MSSQL$SHILOH\log\ERRORLOG'.
    2008-07-27 17:28:04.53 server    SQL Server is starting at priority class 'normal'(1 CPU detected).
    2008-07-27 17:28:04.55 server    SQL Server configured for thread mode processing.
    2008-07-27 17:28:04.56 server    Using dynamic lock allocation. [2500] Lock Blocks, [5000] Lock Owner Blocks.
    2008-07-27 17:28:04.57 server    Attempting to initialize Distributed Transaction Coordinator.
    2008-07-27 17:28:18.76 spid3     Starting up database 'master'.
    2008-07-27 17:28:19.30 server    Using 'SSNETLIB.DLL' version '8.0.2039'.
    2008-07-27 17:28:19.32 server    SQL server listening on 1131.
    2008-07-27 17:28:19.32 spid3     Server name is 'SHILOH\SHILOH'.
    2008-07-27 17:28:19.32 spid5     Starting up database 'model'.
    2008-07-27 17:28:19.33 spid8     Starting up database 'msdb'.
    2008-07-27 17:28:19.33 spid9     Starting up database 'pubs'.
    2008-07-27 17:28:19.34 server    SQL server listening on TCP, Shared Memory, Named Pipes.
    2008-07-27 17:28:19.34 server    SQL Server is ready for client connections
    2008-07-27 17:28:19.34 spid10    Starting up database 'Northwind'.
    2008-07-27 17:28:20.30 spid5     Clearing tempdb database.
    2008-07-27 17:28:21.49 spid5     Starting up database 'tempdb'.
    2008-07-27 17:28:21.87 spid3     Recovery complete.
    2008-07-27 17:28:21.87 spid3     SQL global counter collection task is created.
    2008-07-27 17:28:42.29 spid3     SQL Server is terminating due to 'stop' request from Service Control Manager.


    The Application Event log was full of the following error, but the cryptic message description wasn't of any help to us:

    Event Type:    Error
    Event Source:    MSSQL$SHILOH
    Event Category:    (3)
    Event ID:    19019
    Date:        7/27/2008
    Time:        5:28:19 PM
    User:        N/A
    Computer:    JORGEPC-CN1
    The description for Event ID ( 19019 ) in Source ( MSSQL$SHILOH ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: [sqsrvres] ODBC sqldriverconnect failed
    0000: 4b 4a 00 40 01 00 00 00   KJ.@....
    0008: 07 00 00 00 53 00 48 00   ....S.H.
    0010: 49 00 4c 00 4f 00 48 00   I.L.O.H.
    0018: 00 00 00 00 00 00         ......
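
    As a side note, the event's binary data is not entirely cryptic: most of it is UTF-16 text. Decoding the run of bytes starting at offset 0x0C recovers the instance name; a small sketch in Python (my reading of the leading header bytes is a guess, not a documented structure):

    ```python
    # The event's binary data section, transcribed from the dump above.
    raw = bytes.fromhex(
        "4b4a00400100000007000000"   # header fields (interpretation unknown)
        "5300480049004c004f004800"   # UTF-16LE characters
        "000000000000"               # trailing padding
    )
    name = raw[12:24].decode("utf-16-le")
    print(name)  # SHILOH
    ```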


    System Event log did show a very generic error about Cluster service failing to bring SQL Server service online:

    Event Type:    Error
    Event Source:    ClusSvc
    Event Category:    Failover Mgr
    Event ID:    1069
    Date:        7/27/2008
    Time:        5:27:38 PM
    User:        N/A
    Computer:    JORGEPC-CN2
    Cluster resource 'SQL Server (SHILOH)' in Resource Group 'SHILOH' failed.

    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


    It was clear that something related to the Cluster service was causing the SQL Server service to go offline.

    Windows Server 2003 includes two useful cluster log files where all cluster activity is logged: cluster.log and cluster.oml. These two files can be found in the C:\Windows\Cluster folder. cluster.log is the file where all the cluster activity is collected in a very verbose way, while cluster.oml is where we can find mappings between the globally unique identifier (GUID) of each cluster resource and its resource name (a kind of Rosetta stone for understanding the cluster.log file). You can find more information about these and other cluster log files in the Microsoft TechNet article How a Server Cluster Works.

    At first sight cluster.log can look very intimidating, but we just need to take the specific time when the SQL Server event happened and search for that time in the cluster log. Be aware that cluster.log entries are stored in GMT time while Windows Event Logs are stored in local time; depending on your time zone you will have to add or subtract one or more hours. In my case I am located in GMT+2, so I know I have to look for events logged in cluster.log two hours earlier.
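
    The offset arithmetic is easy to get wrong, so here is the conversion sketched in Python for this post's GMT+2 example (the offset is the only assumption):

    ```python
    from datetime import datetime, timedelta

    # Event Log timestamps are local time; cluster.log entries are GMT/UTC.
    local_offset = timedelta(hours=2)            # GMT+2, as in this post
    event_local = datetime(2008, 7, 27, 17, 28)  # 5:28 PM from the Application log
    cluster_log_time = event_local - local_offset

    print(cluster_log_time.strftime("%H:%M"))    # 15:28, the time to search for
    ```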

    I knew the first Application Event log entry took place around 17:28, so I looked for anything abnormal around 15:28:

    0000067c.00000870::2008/07/27-15:28:07.524 INFO [Qfs] QfsCloseHandle 424, status 0
    0000067c.00000870::2008/07/27-15:28:07.524 INFO [Qfs] QfsDeleteFile C:\DOCUME~1\CLUADMIN\LOCALS~1\Temp\CLSA8.tmp, status 0
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] ODBC sqldriverconnect failed
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Native Client][SQL Server]Login failed for user 'CONTOSO\CLUADMIN'.
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] ODBC sqldriverconnect failed
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 08S01; native error = e9; message = [Microsoft][SQL Native Client]Named Pipes Provider: No process is on the other end of the pipe.

    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] ODBC sqldriverconnect failed
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 08S01; native error = e9; message = [Microsoft][SQL Native Client]Communication link failure
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] ODBC sqldriverconnect failed
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 08S01; native error = e9; message = [Microsoft][SQL Native Client]Named Pipes Provider: No process is on the other end of the pipe.

    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] ODBC sqldriverconnect failed
    00000718.00000ad4::2008/07/27-15:28:19.668 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 08S01; native error = e9; message = [Microsoft][SQL Native Client]Communication link failure
    00000718.00000ad4::2008/07/27-15:28:19.678 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] ODBC sqldriverconnect failed
    00000718.00000ad4::2008/07/27-15:28:19.678 ERR  SQL Server <SQL Server (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 08S01; native error = e9; message = [Microsoft][SQL Native Client]Named Pipes Provider: No process is on the other end of the pipe.

    There were plenty of errors (code ERR) in the cluster.log file, but usually the first ones are the most interesting to look at:

    (SHILOH)>: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Native Client][SQL Server]Login failed for user 'CONTOSO\CLUADMIN'.

    In our case CONTOSO\CLUADMIN was the Cluster service account used by the Windows Cluster. As you probably know, this account needs to have access to SQL Server in order to perform the IsAlive and LooksAlive checks that detect a resource failure.

    In this case the problem was that the BUILTIN\Administrators SQL Server login had been removed as part of a company security guideline. In a default installation, removing this group from the SQL Server logins effectively removes the ability of the Cluster service to log into SQL Server and perform the IsAlive and LooksAlive checks. This is a fairly well-known issue discussed in Knowledge Base article KB291255.

    In other cases you will see events in cluster.log related to some "cryptic" resources, like the following ones:

    0000067c.00000f5c::2008/07/27-15:27:40.070 INFO [FM] FmpOnlineResourceList: trying to bring resource c7018774-2673-4350-ad88-196e4aca3f95 online
    0000067c.00000f5c::2008/07/27-15:27:40.070 INFO [FM] OnlineResource: c7018774-2673-4350-ad88-196e4aca3f95 depends on 582210d0-c29d-40a5-af8a-bbbc08c3e08d. Bring online first.

    cluster.log does not usually show the resource name but the resource GUID (like "resource ce12eb0d-a9d0-4e4a-8531-f3d70cad8c6c"). By using the cluster.oml file we can translate the GUID string to a specific cluster resource:

    00000644.000006b0::2008/05/15-11:41:40.621 OBRENAME "Resource" "c7018774-2673-4350-ad88-196e4aca3f95" "SQL Server Fulltext (SHILOH)"

    00000664.000006b8::2008/05/15-11:36:29.428 OBRENAME "Resource" "582210d0-c29d-40a5-af8a-bbbc08c3e08d" "SQL Server (SHILOH)"
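
    Rather than grepping cluster.oml by hand for every GUID, the OBRENAME records can be turned into a lookup table. A minimal sketch in Python, fed with the two sample lines above:

    ```python
    import re

    # Build a GUID -> resource-name map from cluster.oml OBRENAME records.
    oml = '''\
    00000644.000006b0::2008/05/15-11:41:40.621 OBRENAME "Resource" "c7018774-2673-4350-ad88-196e4aca3f95" "SQL Server Fulltext (SHILOH)"
    00000664.000006b8::2008/05/15-11:36:29.428 OBRENAME "Resource" "582210d0-c29d-40a5-af8a-bbbc08c3e08d" "SQL Server (SHILOH)"
    '''

    pattern = re.compile(r'OBRENAME "Resource" "([0-9a-f-]+)" "([^"]+)"')
    guid_to_name = {guid: name for guid, name in pattern.findall(oml)}

    # Translate a GUID seen in cluster.log into a readable resource name.
    print(guid_to_name["582210d0-c29d-40a5-af8a-bbbc08c3e08d"])
    ```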

    In this cluster.log sample section the Cluster service is reporting that it needs to start the SQL Server service first in order to bring the Full Text Search service online.

    Additional Information

    To make your sysadmin life easier, Microsoft has released a utility to simplify cluster.log analysis, called the Cluster Diagnostics and Verification Tool (ClusDiag.exe). This tool can be used to read cluster log files from Windows Server 2003 and 2000 clusters in a more user-friendly way. Here you can see a screen capture of this tool in action:


    In Windows Server 2008 the cluster.log file is not present in the Cluster folder anymore, although you can still generate a cluster.log file on a Windows Server 2008 cluster. Please refer to the Overview of Failover Clustering with Windows Server 2008 document on this Microsoft web page for more information about the changes incorporated into Windows Server 2008 clustering. I have extracted the relevant part of the text here:

    The text-file-based cluster log is also gone. Event trace logging (.etl) is now enabled via Event Tracing for Windows (ETW). Default log sizes vary and can be modified with cluster installs logs (Operational and ClusterLog). There is new functionality built into the command line. The cluster.exe tool allows you to dump the trace log into a text file. This file looks similar to the cluster log used in previous versions of failover clustering. Use the cluster.exe Log /Generate command to see this log.

  • SQL Server Cast

    “Cannot open the datafile <path_to_file>” Error Accessing a Flat File in SSIS Using a Proxy Account


    I found this error on a Windows Server 2008 and SQL Server 2008 installation, but as you will see the error is not tied to a specific SQL Server or Windows version.

    In this case my customer was trying to run a SQL Server Agent job with 17 different steps. These steps were of different types, like running T-SQL code, ActiveX scripts, Operating System (CmdExec) commands, etc. The customer was looking to run all these job steps using a proxy account, which is a way to restrict the security context in which a job step runs in SQL Server; here you can read the proxy account definition from MSDN:

    SQL Server Agent lets the database administrator run each job step in a security context that has only the permissions required to perform that job step, which is determined by a SQL Server Agent proxy. To set the permissions for a particular job step, you create a proxy that has the required permissions and then assign that proxy to the job step. A proxy can be specified for more than one job step. For job steps that require the same permissions, you use the same proxy.

    When using the proxy account, two different job steps were failing with the following error:

    The error when running SSIS-step (11,12) using a proxy is:
    10.0.2531.0 for 64-bit  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.    Started:  15:27:31  Error: 2010-10-16 15:27:32.38     Code: 0xC020200E     Source: Copy Data from AgentSource to AgentDestination Task Flat File Source [1]     Description: Cannot open the datafile "V:\MySharedFolder\MyTextFile.txt".  End Error  Error: 2010-10-16 15:27:32.38     Code: 0xC004701A     Source: Copy Data from AgentSource to AgentDestination Task SSIS.Pipeline     Description: component "Flat File Source" (1) failed the pre-execute phase and returned error code 0xC020200E.  End Error  DTExec: The package execution returned DTSER_FAILURE (1).  Started:  15:27:31  Finished: 15:27:32  Elapsed:  0.687 seconds.  The package execution failed.  The step failed.

    In this case the job owner was the ‘sa’ account, so we first tried to avoid the error by changing the job owner from ‘sa’ to the proxy account, but this did not fix the problem. During the troubleshooting process I worked in a remote session on the customer’s server to try to determine where the error was coming from; during that session I noticed that the V: drive the job was pointing to did not exist on the server. The customer explained to me that the drive mapping was taking place during the first job step.

    The mapping of the V: drive was taking place in the security context of the proxy account, so we first checked for the correct xp_cmdshell permissions for this account, as described in this MSDN entry:

    When it is called by a user that is not a member of the sysadmin fixed server role, xp_cmdshell connects to Windows by using the account name and password stored in the credential named ##xp_cmdshell_proxy_account##. If this proxy credential does not exist, xp_cmdshell will fail.
    The proxy account credential can be created by executing sp_xp_cmdshell_proxy_account. As arguments, this stored procedure takes a Windows user name and password. For example, the following command creates a proxy credential for Windows domain user SHIPPING\KobeR that has the Windows password sdfh%dkc93vcMt0.

    EXEC sp_xp_cmdshell_proxy_account 'SHIPPING\KobeR','sdfh%dkc93vcMt0'

    In our case the permissions required by the proxy account were correctly assigned, so we tried to map the network drive outside of the job execution, using our own session instead of the proxy account session, but the result and error were exactly the same. Nevertheless, both the customer and I kept thinking that the problem was related to the mapping of this drive, so we created a new Agent job with the following single step to isolate the problem (abridged):

    net use v: \\server\share\ /user:username /persistent:yes
    c:\program files (…) dtexec.exe package.dtsx (…)
    dir v:\*.* > c:\out.txt

    This test worked. The difference here was that we were mapping the drive and running the package in the same job step, while the customer was mapping the drive in the first step and executing the SSIS tasks in the subsequent job steps. As explained in the KB180362 article, this was precisely the problem with the Agent job:

    When the system establishes a redirected drive, it is stored on a per-user basis. Only the user himself can manipulate the redirected drive. The system keeps track of redirected drives based on the user's Logon Security Identifier (SID). The Logon SID is a unique identifier for the user's Logon Session. A single user can have multiple, simultaneous logon sessions on the system.

    If a service is configured to run under a user account, the system will always create a new logon session for the user and then launch the service in that new logon session. Thus, the service cannot manipulate the drive mappings that are established within the user's other session(s).

    At the beginning of the same KB article you can also read:

    A service (or any process that is running in a different security context) that must access a remote resource should use the Universal Naming Convention (UNC) name to access the resource. UNC names do not suffer from the limitations described in this article.

    In our case the solution was as easy as replacing the mapping to the V: drive with the UNC path (i.e. \\servername\sharename). Another option in this scenario is to perform the mapping in the same job step as the one where the action takes place.
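
    When many package configurations reference the mapped drive, the rewrite can be scripted; a minimal sketch in Python, using the (hypothetical) server and share names from this post:

    ```python
    # Mapped drive letters and their UNC equivalents (hypothetical values).
    mappings = {"V:": r"\\server\share"}

    def to_unc(path: str, mappings: dict) -> str:
        """Replace a leading drive letter with the matching UNC share."""
        drive = path[:2].upper()
        if drive in mappings:
            return mappings[drive] + path[2:]
        return path

    print(to_unc(r"V:\MySharedFolder\MyTextFile.txt", mappings))
    ```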

  • SQL Server Cast

    Import Export Wizard is not Copy Database Wizard


    During recent work with one of our customers I noticed that it is not always clear which tools are available and required to copy SQL Server database objects, understanding “object” as a whole database or only part of it (a table, view, user, etc.).

    As part of this work, my customer was trying to copy several databases between two different SQL Server instances, but after several tests he was finding that although the tables were correctly copied, other objects such as views and primary keys were not correctly transferred. It came to my attention that he was using the SQL Server Import Export Wizard to transfer the databases from one instance to another. The Import Export Wizard can be launched from within SQL Server Management Studio (SSMS) or directly from a command prompt window by typing DTSWizard.exe (further information about how to start the Import Export Wizard can be found at this MSDN link).


    As indicated on the startup page of the Import Export Wizard, this tool can be used to copy data between different data sources, and we have the option to recreate the destination database and tables. However, this is not the right tool to use if we want to copy a whole database (not only tables, but all other objects) from one instance to another. The following Books Online link tries to clarify this difference:

    The purpose of the SQL Server Import and Export Wizard is to copy data from a source to a destination. The wizard can also create a destination database and destination tables for you. However, if you have to copy multiple databases or tables, or other kinds of database objects, you should use the Copy Database Wizard instead

    The Copy Database Wizard will allow you to copy entire databases from one SQL Server instance to another (or to the same one, if you desire to do so), retaining the full database structure and objects. You can use two different methods to copy the databases, and it is possible to keep the source database online while the copy is taking place; the following image shows these options:


    You can launch the Copy Database Wizard directly from SSMS by right-clicking on any database > Tasks > Copy database. You will see that this option (red) sits side-by-side with the Import Export Wizard (green):


    The Copy Database Wizard supports copying databases between SQL Server 2000 and SQL Server 2005, providing an effective and simple way of upgrading databases between these two versions. For this wizard to work, SQL Server 2005 Service Pack 2 must be installed not only on the server where we are executing the tool but also on the destination server. For more information about this option please refer to SQL Server Books Online.

    In summary, if you want to copy entire databases between different SQL Server instances, the Copy Database Wizard is your tool. If you want to export or import data from/into SQL Server or other data sources, or just want to copy some tables, use the Import Export Wizard. If you need to copy data and perform modifications during this process with fine-grained control, you will find a better solution by building an Integration Services package in Business Intelligence Development Studio.

  • SQL Server Cast

    Running a Maintenance Plan Using dtexec.exe Command Prompt Utility Does Not Perform Any Action

    SQL Server Maintenance Plans give DBAs an easy way to quickly set up routine maintenance activities in the database. Starting with SQL Server 2005, maintenance plans are created as SQL Server Integration Services packages and can be executed in the same way as any other SSIS package:

    • Using Business Intelligence Development Studio (BIDS)
    • Using the SQL Server Import and Export Wizard
    • Using SQL Server Agent jobs
    • Using the graphical tool DTExecUI
    • Using the command prompt tool dtExec.exe

    In the case I want to discuss now, I was trying to run a maintenance plan package using the dtExec.exe utility. My goal was to execute a backup task using this maintenance plan, and running it from the command prompt did not show any error but did not perform any backup either. This was the syntax I was using and the results:

    C:\>dtexec /SQL "Maintenance Plans\Backup MP" /Server SQL2005\YUKON
    Microsoft (R) SQL Server Execute Package Utility
    Version 9.00.3042.00 for 32-bit
    Copyright (C) Microsoft Corp 1984-2005. All rights reserved.

    Started: 15:21:20
    DTExec: The package execution returned DTSER_SUCCESS (0).
    Started: 15:21:20
    Finished: 15:21:51
    Elapsed: 31.046 seconds

    My first thought was, "do I need to have the SSIS service running in order to execute this maintenance plan package?", but Microsoft Knowledge Base article KB942176 clearly states that the SSIS service is not required for this:

    "If you only want to design and to execute SSIS packages, you do not have to start the SSIS service. When the SSIS service is stopped, you can run SSIS packages by using the following utilities:

      - The SQL Server Import and Export Wizard
      - The SSIS designer
      - The Execute Package utility (DTExecUI.exe)
      - The DTExec.exe command prompt utility"

    When you create a maintenance plan using SQL Server Management Studio, you will see that the corresponding task or tasks associated with that maintenance plan are created as a SQL Server Agent job:



    You can easily find which specific dtexec.exe parameters are provided for the maintenance plan just by looking into the SQL Server Agent job properties, in the "Command line" option of the subplan definition:



    I verified that running dtexec.exe with these parameters successfully ran the backup job; in my case this is the command prompt instruction I used:

    DTEXEC.EXE /SQL "Maintenance Plans\Backup MP" /Server SQL2005\YUKON /SET "\Package\Subplan_1.Disable";false

    The "tricky" part here is the /SET switch. As you can see in the maintenance plan GUI, each maintenance plan has one or more subplans associated with it, and these are disabled by default, so you need to enable them by setting the "Disable" property of the subplan to "false". Remember that each subplan defined in the maintenance plan creates a job under SQL Server Agent.
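
    Assembling the full command line by hand is error-prone because of the quoting around the /SET property path; here is a small helper, sketched in Python, that builds the same command shown above (the subplan name Subplan_1 is taken from that example):

    ```python
    # Build the dtexec.exe command line for a maintenance plan, enabling a
    # given subplan via the /SET switch.
    def dtexec_command(package: str, server: str, subplan: str) -> str:
        return (f'DTEXEC.EXE /SQL "{package}" /Server {server} '
                f'/SET "\\Package\\{subplan}.Disable";false')

    cmd = dtexec_command("Maintenance Plans\\Backup MP", "SQL2005\\YUKON", "Subplan_1")
    print(cmd)
    ```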

    We can perform the same action directly in the SSIS package associated with the maintenance plan using SQL Server Business Intelligence Development Studio (BIDS):

      - Open BIDS
      - Create a new empty SSIS project
      - Add the existing maintenance plan package to the Project files list by using Solution Explorer; the maintenance plan will be added as a new SSIS Package
      - Modify the "Disable" property of the subplan so it is equal to "false"
      - Save the SSIS package as a maintenance plan in SQL Server
      - Run the maintenance plan without the need of specifying the /SET "\Package\Subplan_1.Disable";false switch
