• SQL Server Cast

    CA eTrust Antivirus reports infection by JS/Snz.A on sqlcast blog!


    Update (January 1st): CA has released a new eTrust definitions update (build 5421) that corrects the error; apparently this was a false positive. Thanks to a reader for reporting it. No word from CA so far (issue number 16683491 01).

    I have discovered the following virus detection message appearing on Windows computers using CA eTrust antivirus software:



    This problem seems to appear using the latest eTrust antivirus update (version 31.3, build 5417).

    I made a fresh installation on my PC with Windows Vista and eTrust only, updating both the OS and the antivirus software with the latest updates. I found that the virus detection pop-up appears whenever I browse to this blog's web address.

    A quick search on the Web turns up reports of false positives by CA eTrust antivirus software on some JavaScript code, but I have not been able to find a trusted source of information about this so far. CA does not currently report any information about this virus detection on its support web site.

    I have run two different online antivirus scanners, Windows Live OneCare and Trend Micro HouseCall, and neither has reported an infection.

    I have contacted CA Technical Support requesting more information about this issue. I will update the blog with a new post as soon as I have more information.


    SQL Server Configuration Manager behavior in Clustered installation

    According to the Books Online reference, "SQL Server Configuration Manager is a tool to manage the services associated with SQL Server, to configure the network protocols used by SQL Server, and to manage the network connectivity configuration".

    One of the greatest things about SQL Server Configuration Manager (SSCM for the sake of brevity) is that you can manage different SQL Server settings in a single tool, as opposed to SQL Server 2000, where different tools are needed to perform the same actions. The following table summarizes the different configuration options in SQL Server 2000 and 2005:


    Configuration task      SQL Server 2000          SQL Server 2005

    Client networking       Client Network Utility   SQL Server Configuration Manager

    Server networking       Server Network Utility   SQL Server Configuration Manager

    Services                Service Manager          SQL Server Configuration Manager

    Startup parameters      Enterprise Manager       SQL Server Configuration Manager

    However, if you have made use of SSCM in a clustered installation, you have probably noticed something weird...

    Managing Network Components

    Let's start our test with a SQL Server 2005 cluster with two cluster nodes: NEWYORK and BOSTON sharing a single SQL Server 2005 virtual instance called "SQL-2005":

    Suppose we want to enable the Named Pipes network protocol on the "SQL-2005" virtual instance using SSCM. This is the result on the NEWYORK node after making the change and restarting the virtual SQL Server instance:

    Oddly enough, if we now check the Named Pipes configuration on the BOSTON cluster node, we will see the protocol as disabled, even after restarting the SQL Server service:

    This information can be confusing, but it is the expected behavior for SSCM in a clustered instance. The changed network parameter is stored on the cluster quorum and is only refreshed on the remaining cluster nodes during a failover of the virtual instance. This does not mean we need to move the SQL Server cluster group to each owner; we just need to remember that the information will not persist on the other cluster nodes until the service has been moved to each possible owner, and that, until then, the configuration shown by the local SSCM will not be accurate.
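    If you need to confirm from the server side which protocol client sessions are actually using, rather than trusting a possibly stale local SSCM view, you can query the running virtual instance directly. A minimal sketch using the sys.dm_exec_connections DMV available in SQL Server 2005:

    ```sql
    -- Run against the virtual instance (e.g. sqlcmd -S SQL-2005\YUKON).
    -- net_transport shows the protocol each current session negotiated
    -- (Named pipe, TCP, Shared memory), regardless of what the local
    -- copy of SSCM on a passive node displays.
    SELECT session_id, net_transport, auth_scheme
    FROM sys.dm_exec_connections;
    ```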

    Managing Services

    Let's now look at how clustered SQL Server services appear in SSCM. In the following image, Cluster Administrator and SSCM are running side by side on the NEWYORK node:

    As you can see, the SQL Server instance "YUKON" is shown as Online in both Cluster Administrator and SSCM. Let's now move the SQL Server 2005 cluster group resources from the NEWYORK node to the BOSTON node. After refreshing the SQL Server services in SQL Server Configuration Manager, we will see them as Stopped:

    This happens because SSCM relies on the Windows Service Control Manager (SCM) to show service status, and from a local (non-cluster) point of view the SQL Server service is stopped on NEWYORK and running on BOSTON.

    If we want to know the status of the SQL Server service from a cluster (non-local) point of view, we need to open the SSCM Microsoft Management Console using the SQL Server Network Name as a parameter:

    SQLServerManager.msc /computer:SQL-2005

    This can be done from any cluster node and opens a new console where the SQL Server service status appears as it is actually shown in Cluster Administrator. It can also be used to check the virtual server network configuration settings at the cluster level.

    Obviously, this will only report the status of the services under the virtual server specified after the /computer switch ("SQL-2005" in our example) and will not provide accurate information for other instances. In our example, the BOSTON cluster node contains an additional non-clustered instance called "YUKON2", which is currently stopped. If we open SSCM on BOSTON using the command above, we will see the following information:

    As we can see, the information shown for the virtual instance YUKON is accurate, while the service status for the local instance YUKON2 is not.

    In this post I have tried to explain the limitations you should be aware of when configuring a SQL Server 2005 cluster using Configuration Manager. The behavior described here is easily understood once you keep in mind how Windows Cluster registry information is replicated among nodes (more information in KB174070). Remember that only using the virtual SQL Server network name as the SSCM startup parameter will guarantee accurate information from this tool.


    Reducing tempdb size in a SQL Server Cluster (after ALTER DATABASE execution)


    Some weeks ago a colleague from another company called me for help fixing a database performance problem. The root cause appeared to be a contention issue on tempdb that was causing the database to perform badly from time to time. He found a match between the times of bad performance and the following messages in the SQL Server ERRORLOG file:

    2007-12-03 09:34:31.07 spid1     SQL Server has encountered 2 occurrence(s) of IO requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL$YUKON\Data\tempdb.mdf] in database [tempdb] (7).  The OS file handle is 0x00000530.  The offset of the latest long IO is: 0x0000008ef65a00

    2007-12-03 09:34:57.04 spid104   SQL Server has encountered 8186 occurrence(s) of IO requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL$YUKON\Data\tempdb.mdf] in database [tempdb] (2).  The OS file handle is 0x000005FC.  The offset of the latest long IO is: 0x0000004d242000


    They had already moved the tempdb database to a different disk partition, as recommended in situations where contention problems exist. They had also changed the tempdb size from 250 MB to 2,500 MB, the maximum size observed for this database during peak usage. Although overall database performance was better, it was not brilliant, and the "IO requests taking longer than 15 seconds" messages kept popping up in the ERRORLOG. They now wanted to try splitting tempdb into several files, another recommended procedure for tempdb contention problems.

    Unfortunately, the partition tempdb had been moved to was only 10 GB in size, and he needed to split tempdb into 8 different files (8 being the number of logical processors on the server). Creating 8 tempdb files of 2,500 MB each was not only impossible in this case but hard to justify from a logical point of view.

    As you probably know, a database cannot be reduced to a size smaller than its current size using a simple ALTER DATABASE ... MODIFY FILE instruction. If you try to reduce the tempdb system database size from a Query window you will see the following error message:

    Msg 5039, Level 16, State 1, Line 1
    MODIFY FILE failed. Specified size is less than current size.
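    For reference, the kind of statement that raises this error is a plain size reduction, for example (assuming the default tempdev logical file name and a current size above 100 MB):

    ```sql
    USE master;
    GO
    -- Fails with Msg 5039 because 100 MB is below the current file size:
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 100MB);
    ```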

    Surprisingly, when reducing the database size from SQL Server Management Studio, the GUI does not report any error, but the database size is not changed. This seems like a step back from SQL Server 2000, where the same action in Enterprise Manager caused the following error to be raised:

    Error 21335: [SQL-DMO]The new DBFile size must be larger than the current size.

    (UPDATE: An in depth explanation of this behavior can be found on this MSDN blog)

    This error message is expected: for data and log files, the new size must be larger than the current size. If we want to reduce the database size we have to shrink it and return the freed-up space to the operating system (provided the database files can be shrunk). My first thought was to point my colleague to Knowledge Base article KB307487, where the shrinking process for tempdb is explained, but while reading the article I discovered the following paragraph:

    "There are limitations for use of the DBCC SHRINKDATABASE command on the tempdb database. The target size for data and log files cannot be smaller than the size specified when the database was created or the last size explicitly set with a file-size changing operation such as ALTER DATABASE with the MODIFY FILE option or the DBCC SHRINKFILE command."

    They had indeed executed an ALTER DATABASE command with the MODIFY FILE option when they changed the original tempdb database size, so shrinking tempdb was not an option in this case. The only option was to execute a new ALTER DATABASE command with a smaller size than the current one, as specified in the "Method 1" section of the KB.

    This requires the SQL Server service to be started in minimal configuration mode with the -c and -f startup options, but how do we do this in a cluster? For standalone servers we execute the following instruction:

    sqlservr -c -f [ -s%InstanceName% ]

    where -s is required for a named instance. If we are using a default instance (MSSQLSERVER), the -s switch is not needed. For a SQL Server named instance on a standalone server, the instruction would be:

    sqlservr -c -f -sMSSQL$YUKON

    where "YUKON" corresponds to the instance name. On a cluster installation, the instance name is composed using the "SQL Server Network Name" cluster resource. In my example, Cluster Administrator shows the SQL Server resources as follows:

    so the instruction to startup the SQL Server service would be:

    sqlservr -c -f -sSQL-2005\YUKON

    This instruction should be executed from a command prompt window (cmd.exe); the SQL Server service will then log the ERRORLOG information to the console window. Of course, we can also add the -f startup option to the SQL Server service in SQL Server Configuration Manager (2005) or Enterprise Manager (2000) and restart; in this case we do not need -c. In SQL Server Configuration Manager, go to the SQL Server service properties, Advanced tab, Startup Parameters; in SQL Server Enterprise Manager, go to the instance properties, General tab, Startup Parameters button.

    Note: From a cmd.exe window, use the full path to sqlservr.exe, adding -c and -f (with hyphens). If you use the NET START command, use the SQL Server service name with /c /f (with slashes).
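    For the clustered named instance in this example, the two forms would look something like this (the sqlservr.exe path assumes a default SQL Server 2005 installation; adjust it to your own setup):

    ```
    C:\>"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe" -c -f -sSQL-2005\YUKON

    C:\>net start MSSQL$YUKON /c /f
    ```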

    Once the SQL Server service is started, we will be able to change the tempdb database size. You can use the sqlcmd command-line utility (or osql if you are running SQL Server 2000), opening a new cmd.exe window to perform the change. The following example configures the initial tempdb database size to 70 MB:

    C:\>sqlcmd -S SQL-2005\YUKON
    1> USE master
    2> GO
    Changed database context to 'master'.
    1> ALTER DATABASE tempdb
    2> MODIFY FILE ( NAME = tempdev, SIZE = 70MB )
    3> GO
    1> exit

    Alternatively, you can execute the same ALTER DATABASE instruction from a Query window in SQL Server Management Studio or Enterprise Manager. If you used the GUI to change the startup options, do not forget to remove the -f startup parameter from the SQL Server service options in Configuration Manager or Enterprise Manager once the change is done.


    How to check Fragmentation on SQL Server 2005


    I have been asked this question several times: is there a tool in SQL Server where I can easily check database fragmentation? The answer, as with many other things, is "Yes" and "No".

    Fragmentation is a "natural" process in the database lifecycle. If the database is frequently updated via INSERT, UPDATE, or DELETE statements, we can expect it to become fragmented over time. If database indexes are fragmented, the SQL Server query optimizer may take not-so-optimal decisions when using an index to resolve a query, affecting overall query performance. SQL Server provides tools to check and fix database fragmentation, but we first need to understand how to use these tools. This is a little more difficult than simply firing up Windows defrag.exe to check for filesystem fragmentation.

    Let's start with some theory so we can better understand what fragmentation is and how it affects database performance. There are two different types of fragmentation in SQL Server: internal and external.

    Internal fragmentation is the result of index pages taking up more space than needed. It is like having a book where some of the pages are left blank; we do not know which pages are blank until we read the entire book, and the same applies to SQL Server, which has to read all the pages in the index, wasting extra time and server resources on the empty pages.

    External fragmentation occurs when the pages of an index are not contiguous. Following the book analogy, it is like having a book whose pages are not in logical order (page 1, then page 2, then page 3, and so on), forcing you to jump back and forth to piece the information together and make sense of the reading. Heavily used tables that contain fragmented indexes will impact your database performance. If you are still unsure about what external and internal fragmentation mean, refer to this article published in SQL Server Magazine by Kalen Delaney (although written for SQL Server 2000, the definitions are still valid for SQL Server 2005).

    In our minds we associate fragmentation with something bad that should be avoided at all costs, but is this always the case? Not with internal fragmentation. If your tables are frequently changed via UPDATE and INSERT operations, pages with no free space have to be split to allocate the new data, and the newly added page will probably not be adjacent to the original one, which ultimately leads to external fragmentation. Keeping a small amount of free space on the index or data pages (that is, a small amount of internal fragmentation) avoids those frequent page splits. Internal fragmentation, therefore, can be desirable at low levels, while external fragmentation should always be avoided. Note that "low levels" really does mean low levels. The amount of free space reserved on an index can be controlled using the Fill Factor.
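    The fill factor can be set when an index is created or rebuilt. As an illustration (PK_Employee_EmployeeID is the primary key name in the AdventureWorks sample; substitute your own index), this leaves 10% free space in the leaf-level pages:

    ```sql
    -- Rebuild the index leaving 10 percent free space per leaf page
    -- (fill factor 90) to absorb future INSERT/UPDATE activity:
    ALTER INDEX PK_Employee_EmployeeID
    ON HumanResources.Employee
    REBUILD WITH (FILLFACTOR = 90);
    ```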

    It is also important to understand that external fragmentation does not mean filesystem or disk fragmentation.

    SQL Server 2005 introduces a new DMV (Dynamic Management View) to check index fragmentation levels: sys.dm_db_index_physical_stats. Although SQL Server 2005 still supports the SQL Server 2000 DBCC SHOWCONTIG command, this feature will be removed in a future version of SQL Server. Here you can check the differences between the two instructions when checking fragmentation on the HumanResources.Employee table in the AdventureWorks sample database:

    • DBCC SHOWCONTIG example:

    USE AdventureWorks;
    DBCC SHOWCONTIG ('HumanResources.Employee')

    DBCC SHOWCONTIG scanning 'Employee' table...
    Table: 'Employee' (869578136); index ID: 1, database ID: 6
    TABLE level scan performed.
    - Pages Scanned................................: 7
    - Extents Scanned..............................: 3
    - Extent Switches..............................: 2
    - Avg. Pages per Extent........................: 2.3
    - Scan Density [Best Count:Actual Count].......: 33.33% [1:3]
    - Logical Scan Fragmentation ..................: 14.29%
    - Extent Scan Fragmentation ...................: 33.33%
    - Avg. Bytes Free per Page.....................: 172.6
    - Avg. Page Density (full).....................: 97.87%

    • sys.dm_db_index_physical_stats DMV example:

    USE AdventureWorks
    SELECT object_id, index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), OBJECT_ID('HumanResources.Employee'), NULL, NULL, NULL);

    In this last example I have selected only the most relevant columns from the DMV; you will see that the DMV can provide many more details about the index structure. If you want to show fragmentation details for all the objects in the AdventureWorks database, the command is as follows:

    SELECT *
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL , NULL);

    Please refer to SQL Server 2005 Books Online for more information on the sys.dm_db_index_physical_stats syntax.

    How do we know if our database is fragmented? We have to pay attention to the avg_fragmentation_in_percent value returned by the DMV. A value between 5 and 30% indicates moderate fragmentation, while any value over 30% indicates high fragmentation (like a book whose pages are in no particular order).
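    Building on the earlier DMV query, a filtered version returns only the indexes that cross these thresholds; a sketch for the current database:

    ```sql
    -- Indexes with more than 30 percent logical fragmentation
    -- (rebuild candidates); lower the threshold to 5 to include
    -- the moderate (reorganize) range as well:
    SELECT OBJECT_NAME(object_id) AS table_name,
           index_id,
           avg_fragmentation_in_percent,
           page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
    WHERE avg_fragmentation_in_percent > 30
      AND index_id > 0          -- index_id 0 is a heap; skip those
    ORDER BY avg_fragmentation_in_percent DESC;
    ```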

    The avg_page_space_used_in_percent column is another value worth looking at closely. It represents the amount of space used in the index pages. A value below 75% is usually associated with internal fragmentation (more blank pages in our book than recommended).
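    Note that with the default LIMITED scan mode used in the earlier examples (passing NULL as the last parameter), avg_page_space_used_in_percent comes back as NULL; the DMV has to be called in SAMPLED or DETAILED mode to populate it:

    ```sql
    -- SAMPLED (or DETAILED) mode is needed for page-density information;
    -- the default LIMITED mode returns NULL for this column:
    SELECT object_id, index_id, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),
         OBJECT_ID('HumanResources.Employee'), NULL, NULL, 'SAMPLED');
    ```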

    In order to reduce fragmentation we have to reorganize or rebuild the indexes. Choosing between the two depends on the measured values: for moderate fragmentation, reorganizing the index is enough; heavily fragmented indexes need a rebuild. The following table summarizes when to use each (refer to this Books Online article for in-depth coverage):

    Reference values (in %)                     Action       SQL statement

    avg_fragmentation_in_percent > 5 AND < 30   Reorganize   ALTER INDEX REORGANIZE

    avg_fragmentation_in_percent > 30           Rebuild      ALTER INDEX REBUILD
    The REORGANIZE statement is always executed online, while an index REBUILD is executed offline by default and can optionally be executed while the database is being accessed by users with the ONLINE option, which can be a plus for 24x7 environments. This approach, which is only available in SQL Server 2005, has some limitations; refer to the ALTER INDEX statement in SQL Server Books Online for more details.
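    Put together, the two maintenance statements look like this (again borrowing the AdventureWorks primary key name as a stand-in for your own index):

    ```sql
    -- Moderate fragmentation (5-30%): reorganize, always an online operation.
    ALTER INDEX PK_Employee_EmployeeID ON HumanResources.Employee REORGANIZE;

    -- High fragmentation (>30%): rebuild; ONLINE = ON keeps the table
    -- available to users during the rebuild (SQL Server 2005 only):
    ALTER INDEX PK_Employee_EmployeeID ON HumanResources.Employee
    REBUILD WITH (ONLINE = ON);
    ```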

    If you are looking for an easy way to automate these processes, the SQL Server Books Online reference for sys.dm_db_index_physical_stats contains a sample script you can implement within minutes. The script reorganizes any index where avg_fragmentation_in_percent is below 30% and rebuilds any index where this value is over 30% (you can change these parameters for your specific needs). Add a new Execute T-SQL Statement task containing this script to your weekly or daily maintenance plan so you can keep your database fragmentation at an optimum level.
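    To give an idea of what such a script looks like, here is a minimal sketch in the same spirit (not the Books Online script itself): it builds ALTER INDEX commands from the DMV output for the current database and executes them. Note that OBJECT_SCHEMA_NAME assumes SQL Server 2005 SP2 or later.

    ```sql
    -- Reorganize indexes with 5-30% fragmentation, rebuild those above 30%.
    DECLARE @sql nvarchar(max);
    DECLARE fix_indexes CURSOR FOR
        SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id))
             + N'.' + QUOTENAME(OBJECT_NAME(s.object_id))
             + CASE WHEN s.avg_fragmentation_in_percent > 30
                    THEN N' REBUILD'
                    ELSE N' REORGANIZE' END
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
        JOIN sys.indexes i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.avg_fragmentation_in_percent > 5
          AND s.index_id > 0;   -- skip heaps
    OPEN fix_indexes;
    FETCH NEXT FROM fix_indexes INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_executesql @sql;
        FETCH NEXT FROM fix_indexes INTO @sql;
    END;
    CLOSE fix_indexes;
    DEALLOCATE fix_indexes;
    ```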
