SQL in Stockholm

SQL Server thoughts from the support team in Stockholm

SQL Server troubleshooting thoughts from Graham Kent, (ex) Senior Support Escalation Engineer at Microsoft Sweden.

  • SQL in Stockholm

    New SQL Server 2008 DMV which I noticed this week - sys.dm_os_sys_memory


    In the continuing series of "things I notice in SQL Server which I didn't know were there", I noticed the very simple but very useful sys.dm_os_sys_memory earlier this week.

    Why do I like this? Well, obviously you can extract the memory information exposed by the OS in a number of different places, but as much as anything I like the simple record of the state of the CreateMemoryResourceNotification function, exposed in the system_low_memory_signal_state column of the DMV.

    We've added this into the internal PSSDIAG / SQLDIAG perfstats scripts for 2008 (which is where I noticed it). For those of you who use proactive monitoring and record the state of your system over time, this is a good, simple set of data to record.
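    As a minimal sketch of what such a monitoring query might look like (the column names are as documented for this SQL Server 2008 DMV; adjust the selection to taste):

```sql
-- Snapshot of OS-level memory state; the low-memory signal column reflects
-- the CreateMemoryResourceNotification state discussed above.
SELECT GETDATE() AS capture_time,
       total_physical_memory_kb,
       available_physical_memory_kb,
       system_memory_state,
       system_low_memory_signal_state,
       system_high_memory_signal_state
FROM sys.dm_os_sys_memory;
```

    Run on a schedule, this gives you a simple history of when the OS signalled low memory.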


    Unable to use forced parameterization when deploying plan guides in SQL Server 2005


    I was working a case last week where I wanted to instate some fairly simple plan guides for a query that had picked a bad plan on a particular deployment of an application. I was thinking it was going to be an easy process. The query in question executed several tens of thousands of times and had 2 parameters being generated by the application. On the server I was analyzing, performance had degraded due to a bad plan, but I had a trace from another server using the same application, where I had captured the good plan. This ought to have been a classic case for a plan guide (since I couldn’t change the SQL batch coming from the application, and the server had been managed properly, so I couldn’t address the issue with a simple statistics update or some such). Anyway, it drove me mad for a few hours as I couldn’t get it to work, so I thought I would share what turned out to be the simple reason why :-)

    I needed to do 2 things:

    1. Create a plan guide which forced parameterization of the query, since I had noted it wasn’t being parameterized by default.

    2. Create a plan guide which forced a specific plan on the parameterized form of the query

    Both of these are fairly standard steps; step 1 is documented here:

    Specifying Query Parameterization Behavior by Using Plan Guides

    and step 2 is documented here:

    Using the USE PLAN Query Hint

    I’ve done this many times before, but whatever I did, I couldn’t get step 1 to fly, so I couldn’t go any further. After much double checking of what I was doing (including trying to set forced parameterization at a database level), I decided to take a step back and check out the pre-requisites of forced parameterization at a database level. Books Online listed exceptions where parameterization could not be forced, so I went back and checked them all, even though my SQL batch seemed extremely simple. I found that there were 2 problems:

    a) Deprecated query constructs

    b) Any statement that is run in the context of ANSI_PADDING or ANSI_NULLS set to OFF

    It turned out that the connections the application was using were issuing SET ANSI_NULLS OFF statements manually, just after they connected. This construct is itself deprecated, which alone rules out forced parameterization, and ironically it ruled it out doubly, since running with ANSI_NULLS set to OFF also prevents it.

    A little something to watch out for.
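    For reference, step 1 (the one that refused to fly) follows the documented TEMPLATE plan guide pattern, which looks roughly like this. The table and query text below are hypothetical placeholders, not from the actual case:

```sql
-- Sketch of step 1: force parameterization of one query via a TEMPLATE plan guide.
-- dbo.Orders and the WHERE clause are illustrative examples only.
DECLARE @stmt nvarchar(max), @params nvarchar(max);

-- Get the parameterized form of the query and its parameter list.
EXEC sp_get_query_template
    N'SELECT * FROM dbo.Orders WHERE OrderId = 12345 AND CustomerId = 67;',
    @stmt OUTPUT,
    @params OUTPUT;

-- Attach a plan guide that forces parameterization of all matching queries.
EXEC sp_create_plan_guide
    @name = N'ForceParamGuide',
    @stmt = @stmt,
    @type = N'TEMPLATE',
    @module_or_batch = NULL,
    @params = @params,
    @hints = N'OPTION (PARAMETERIZATION FORCED)';
```

    This is exactly the step that cannot work if the batch runs with ANSI_NULLS or ANSI_PADDING set to OFF.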


    How to Customise SQLDIAG in SQL Server 2008 to collect DBCC MEMORYSTATUS


    Although Books Online makes some mention of SQLDIAG customizations, it doesn’t give any explicit examples, so here’s one for you. I documented previously here how to customize your XML configuration file to connect to different machines and collect output from the standard collectors, and I also mentioned how you could download the PerfStats scripts and use the XML files they come supplied with. The following example, though, shows how to insert a custom script of your own, which can be anything of your choice. I’ll be using DBCC MEMORYSTATUS output to demonstrate. The thing to remember here is that I’m not saying this is the only way to collect such data, but that, as ever, SQLDIAG allows you to do everything in one place, in a controlled and easy to administer way.

    Firstly we need a script which will collect DBCC MEMORYSTATUS repeatedly. The following one won’t win any design prizes but will do the job just nicely:

    WHILE 1=1
    BEGIN
        select 'start time:', GETDATE();
        DBCC MEMORYSTATUS
        WAITFOR DELAY '00:00:10'
    END;

    This will run DBCC MEMORYSTATUS every 10 seconds until you kill it.

    Next we need to create an XML configuration to run this script using SQLDIAG. I’ll take the default SQLDIAG.XML which unpacks when you run the sqldiag.exe for the first time. If you need any explanations of these pre-requisites then go back to the SQLDIAG link above and read the background.

    <CustomDiagnostics>
    <CustomGroup name="msinfo" enabled="true" />
    <CustomTask enabled="true" groupname="MsInfo" taskname="Get MSINFO32" type="Utility" point="Startup" wait="OnlyOnShutdown" cmd="start /B /WAIT MSInfo32.exe /computer %server% /report &quot;%output_path%%server%_MSINFO32.TXT&quot; /categories +SystemSummary+ResourcesConflicts+ResourcesIRQS+ComponentsNetwork+ComponentsStorage+ComponentsProblemDevices+SWEnvEnvVars+SWEnvNetConn+SWEnvServices+SWEnvProgramGroup+SWEnvStartupPrograms" />
    <CustomTask enabled="true" groupname="MsInfo" taskname="Get default traces" type="Copy_File" point="Startup" wait="OnlyOnShutdown" cmd="&quot;%sspath%log\log*.trc&quot; &quot;%output_path%&quot;" />
    <CustomTask enabled="true" groupname="MsInfo" taskname="Get SQLDumper log" type="Copy_File" point="Startup" wait="OnlyOnShutdown" cmd="&quot;%sspath%log\SQLDUMPER_ERRORLOG*.*&quot; &quot;%output_path%&quot;" />
    </CustomDiagnostics>

    If you look in the above section, we already have one custom group, called MsInfo, which has 3 sub-tasks. (The format of these is worth noting if you wish to do any file manipulation with command line tools.) What we want to do is add another CustomGroup, which will contain a single task: our DBCC script from above. The script itself needs to be saved into a file for the collector to access, so in this scenario we’ll save it into the same root directory as sqldiag.exe. If you’re planning on doing this for many different scripts and scenarios, you might be better organising them in appropriately named sub-directories. So I’ve saved my script above as the following:

    C:\Program Files\Microsoft SQL Server\100\Tools\Binn\exec_dbcc_memorystatus.sql

    Then I need to add the following XML to tell SQLDIAG that I have a new custom group of diagnostics (called SQLscripts), which currently contains 1 custom task (our DBCC MEMORYSTATUS script). The new section looks like this:

    <CustomDiagnostics>
    <CustomGroup name="msinfo" enabled="true" />
    <CustomTask enabled="true" groupname="MsInfo" taskname="Get MSINFO32" type="Utility" point="Startup" wait="OnlyOnShutdown" cmd="start /B /WAIT MSInfo32.exe /computer %server% /report &quot;%output_path%%server%_MSINFO32.TXT&quot; /categories +SystemSummary+ResourcesConflicts+ResourcesIRQS+ComponentsNetwork+ComponentsStorage+ComponentsProblemDevices+SWEnvEnvVars+SWEnvNetConn+SWEnvServices+SWEnvProgramGroup+SWEnvStartupPrograms" />
    <CustomTask enabled="true" groupname="MsInfo" taskname="Get default traces" type="Copy_File" point="Startup" wait="OnlyOnShutdown" cmd="&quot;%sspath%log\log*.trc&quot; &quot;%output_path%&quot;" />
    <CustomTask enabled="true" groupname="MsInfo" taskname="Get SQLDumper log" type="Copy_File" point="Startup" wait="OnlyOnShutdown" cmd="&quot;%sspath%log\SQLDUMPER_ERRORLOG*.*&quot; &quot;%output_path%&quot;" />
    <CustomGroup name="SQLscripts" enabled="true" />
    <CustomTask enabled="true" groupname="SQLscripts" taskname="dbcc memorystatus" type="TSQL_Script" point="Startup" wait="No" cmd="exec_dbcc_memorystatus.sql"/>
    </CustomDiagnostics>

    Next I save this XML file under a new name, let’s say SQLDIAG_memstatus.XML, and then I just execute the sqldiag tool with an input file switch like this:

    sqldiag /ISQLDIAG_memstatus.XML

    and away we go. If you look in your output directory, you’ll find the following file:

    <yourservername>__exec_dbcc_memorystatus_Startup.OUT

    which contains all the DBCC MEMORYSTATUS output.

    All you need to do now is create custom XML files and scripts for as many scenarios as you can think of! Hopefully examining the entries for the MsInfo task will give you a few ideas as well.


    TechDays Sweden: post presentation questions and follow ups


    Following my presentation here in Västerås yesterday there were a few people waiting to speak to me who I didn't have time for. If you were one of those people and want to ask some questions, you're more than welcome to post them here in the comments, or contact me through the contact section on this site, which comes direct to my work mail box. I'll attempt to follow up on everything, I just can't promise any timescales.


    Come and say Hi at TechDays Sweden tomorrow....

    I'll be presenting (my usual mantra on using SQLDIAG to troubleshoot your own problems) at 16:15 tomorrow. Also the support team will be running a desk where there'll be 3 or 4 of us at all times to answer general support related questions. I've got four 90 minute shifts on there, so come over and have a chat if you're not running off to loads of other talks.

    sys.dm_exec_query_stats DMV can report invalid information for parallel plans


    Elisabeth Redei asked me to respond to a question earlier today following a comment on her blog post about performance monitoring using DMVs. The question asked was

    "when will sys.dm_exec_query_stats be fixed to report worker time (cpu) for queries with parallel execution plans?"

    The answer to this is that we have already fixed it, but the code has been checked into the next version of SQL Server (i.e. the next major release after SQL 2008). When it was fixed we also fixed another problem where the same DMV can report NULL in the dbid and objectid columns under certain circumstances.

    Currently there are no plans to backport this fix into SQL 2005 or 2008.
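    For context, this is the kind of monitoring query affected — a hypothetical "top CPU consumers" query against the DMV. On SQL 2005 and 2008, the worker time reported for statements that ran with parallel plans can under-report, so treat those rows with caution:

```sql
-- Top CPU consumers by aggregate worker time (illustrative monitoring query).
-- Caveat: for parallel plans on SQL 2005/2008, total_worker_time can be misleading.
SELECT TOP 10
       qs.total_worker_time,
       qs.execution_count,
       qs.total_elapsed_time,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```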


    New SQL Blogger - a must read


    An ex-colleague of mine Elisabeth Redei, has just opened a new blog at http://sqlblog.com/blogs/elisabeth_redei/default.aspx 

    Elisabeth is extremely knowledgeable and has spent many years in Microsoft SQL Server support roles. You should definitely add her to your reading list, if you haven't already seen her at sqlblog.com

    She's just posted a good article here about statistics:

    http://sqlblog.com/blogs/elisabeth_redei/archive/2009/03/01/lies-damned-lies-and-statistics-part-i.aspx


    No SQL posts this week - skiing again!


    I am fortunate enough to be continuing the testing of my hypothesis that skiing in Sweden is better than SQL Server support. This week I am testing this theory in Sälen, near the Sweden - Norway border. So far it's all going very much to plan.

    View from my bedroom window in Sälen

    Enjoy the rest of your February. I shall return to SQL work on 2nd March :-)


    Kill with rollback estimation


    It's always amazing when you find something new in something so common that you thought you knew so well. When resolving the compilation bottleneck issue I mentioned in the previous 2 posts, I had to use the KILL command. It's actually incredibly rare that I use this command, as I really don't believe in it in principle in production, if at all avoidable. Anyway, in that particular situation it just had to be done. The session we were killing was doing a huge update, and as such it had to roll back, and that was going to take a while (I could guess this from the fact that it held 39 million locks!). However I'd never noticed that the KILL command now comes with a rollback estimation syntax. Definitely a good feature, but not one to abuse :-)

    From BOL, it looks like this:

    KILL 54 WITH STATUSONLY;
    GO

    --This is the progress report.
    spid 54: Transaction rollback in progress. Estimated rollback completion: 80% Estimated time left: 10 seconds.


    Compilation Bottlenecks – Error: 8628, Severity: 17, State: 0. – Part 2


    [follow up to RESOURCE_SEMAPHORE_QUERY_COMPILE diagnosis]

    Last week I wrote about the initial diagnosis of this problem, and now here's the follow up. So if you remember we had a DBCC MEMORYSTATUS and a load of other PSSDIAG data collected and we knew that the procedure cache seemed to be constantly flushing as fast as we put stuff into it.

    Now when you come to a situation like this, there's no perfect way to say "here's where to look for the answer". I certainly checked a few different things before I noticed the problem. I guess the bottom line here is that we have some data, we've collected a benchmark of when the system is operating normally, and we just need to review all of this to find something abnormal. We know in this situation that the cache is being flushed, so we're looking for events or circumstances which potentially use up lots of buffer pool memory, or cause a memory flush. I referred to one example of this in this earlier post, so I checked the error logs for this type of situation - nothing doing.

    Since we're thinking memory-related problems, I used some text tools to compare the output from the current and previous DBCC MEMORYSTATUS. Out of interest, my preferred text tool currently is Notepad++; I can't say enough good things about it.

    In doing so I noticed the following significant change in one of the many values that the DBCC command collects.

    Problem occurring snippet:

    OBJECTSTORE_LOCK_MANAGER (Total)                                  KB
    ---------------------------------------------------------------- --------------------
    VM Reserved                                                     131072
    VM Committed                                                    131072
    AWE Allocated                                                   0
    SM Reserved                                                     0
    SM Commited                                                     0
    SinglePage Allocator                                            5769576
    MultiPage Allocator                                             0

    Baseline snippet:

    OBJECTSTORE_LOCK_MANAGER (Total)                                  KB
    ---------------------------------------------------------------- --------------------
    VM Reserved                                                     131072
    VM Committed                                                    131072
    AWE Allocated                                                   0
    SM Reserved                                                     0
    SM Commited                                                     0
    SinglePage Allocator                                            33232
    MultiPage Allocator                                             0

    In our baselines, the single page allocator for the lock manager consistently shows less than 50000 KB, even in times of extreme user load. However, now we suddenly have 5769576 KB, or about 5.5 GB... so this would pretty much explain where all our memory went. We monitored this situation for a few minutes and the value was consistently climbing, so it seemed likely that a rogue process or user was out there issuing a fairly crazy query.

    So all we had to do now was to identify this person or batch. There were about 3000 connections to this system, and so the easiest way I could think of, which I mentioned in a very short post at the time, was to run a query against the sys.dm_tran_locks DMV and count the total number of locks by sessionid. This showed that one particular session was holding over 39 million key locks, which pretty much explained the problem!

    After this it was pretty simple to track down this user, ask them what they were doing, and stop them doing it. Once this batch was terminated the lock manager released its memory and everything soon went back to normal.


    SQL Server Compilation Bottlenecks – Error: 8628, Severity: 17, State: 0. – Part 1


    [alternative title: Memory Troubleshooting using DBCC MEMORYSTATUS]

    The following samples are taken from a live problem I was working on for most of last week. The steps might well help you resolve an occurrence of the above error, but the methodology might help you with some general performance and memory troubleshooting as well, hence the alternative title.

    Symptoms:

    • A previously well running server starts exhibiting extremely slow performance for all users during the online day.
    • CPU time is maxed out at 100%
    • The issue is intermittent and has no correlation to batch business processing or periods of peak usage

    Initial Actions:

    • Configure a PSSDIAG (but could just have easily have been SQLDIAG) to collect perfstats script, perfmon counters
    • Don’t include profiler trace to begin with, as the server is already maxed out for CPU and you might skew the results or make the situation worse, especially as going into the problem you don’t actually know what the problem is related to
    • Configure the collection for long term running, as you don’t know when the error will next occur

    Initial Results

    • When reviewing the error logs you note that Error: 8628, Severity: 17, State: 0. was logged repeatedly when the problem occurred in the past.
    • RESOURCE_SEMAPHORE_QUERY_COMPILE  is the top aggregate wait stat in sys.dm_os_wait_stats
    • The server was clustered and a failover occurred when the problem last happened
    • No other items of note were recorded in the past
    • When the issue is not occurring everything looks fine on the server, there are no indications of dangerously long running queries or memory pressure etc.

    This is a common scenario that I come across. You know something bad has happened, you can't reproduce it on demand, but you suspect it will happen again. At this point I always like to do the above: review the general health of the server with PSSDIAG / SQLDIAG and then do another round of monitoring, once I know what I’m looking for. In this case I’m targeting the 8628 error as it seems pretty conclusively tied to the outage.

    One of the principal hits you get if you search microsoft.com sites for error 8628 is the following page

    How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005

    http://support.microsoft.com/kb/907877

    and it points out that this error is thrown when there are too many queries trying to compile simultaneously, and that the gateway structures are throttling the compilations. You can read more background about this theory in the above article and there’s also some reference to it in the following white paper

    Troubleshooting Performance Problems in SQL Server 2005

    http://technet.microsoft.com/en-gb/library/cc966540.aspx

    which includes an excellent section on internal physical memory pressure.

    So now we’re on the track of the problem. We reconfigure the PSSDIAG to include a script which runs DBCC MEMORYSTATUS regularly so that we can be sure to capture this output when the problem next occurs. We also make sure that we keep a copy of the data from this query when the system is running well, so that we can have a baseline to compare against. At this point I still didn’t have a profiler running as I didn’t want to have to manage the disk space and have the performance overhead, and the perfstats script add-in gives you so much information anyway.

    So the problem eventually occurred again, and here are some key outputs from the DBCC MEMORYSTATUS related to the compilation gateways:

    Baseline:

    Procedure Cache                Value     
    ------------------------------ -----------
    TotalProcs                            7614
    TotalPages                          570060
    InUsePages                           32736

    Small Gateway                  Value              
    ------------------------------ --------------------
    Configured Units                                 32
    Available Units                                  32
    Acquires                                          0
    Waiters                                           0
    Threshold Factor                             500000
    Threshold                                    500000

    Problem Occurring:

    Procedure Cache               Value
    ------------------------------ -----------
    TotalProcs                     397
    TotalPages                     66888
    InUsePages                     44924

    Small Gateway                 Value
    ------------------------------ --------------------
    Configured Units               32
    Available Units                0
    Acquires                       32
    Waiters                        59
    Threshold Factor               500000
    Threshold                      500000

    So what we can see here is that we are (as expected) encountering a big bottleneck on the gateways (the small one in this case), and at this moment in time 59 compilations are queued. What we can also note is that the reason for this appears to be that the procedure cache is significantly smaller than its normal size - about 85% smaller in fact.

    This, therefore, is why so many queries are suddenly trying to compile. For reasons unknown at this point, 85% of the cached plans have been thrown out of the cache, and so the many thousands of connections on this system are now all having to compile new plans.

    Now you might be thinking: sometimes you see this type of behaviour when the cache is flushed. This is true, but in this case it wasn’t the problem. We sat and monitored the output above for quite a while; there were constant compilation queues but the cache was not growing. More to the point, there were no cache flush events in the error logs, and we had not performed any maintenance operations which cause a cache flush. I talked about a classic example of this behaviour in this post.

    So we need to look elsewhere, and ask ourselves the question, “Why is my procedure cache 85% of its normal size?”

    That’s for my next post.


    Which sessionid has the most locks in a SQL Server instance


    Quick post today. I was playing around with something on Friday night, thinking how to answer this question quickly whilst looking at a live performance problem. I have a huge selection of ready-to-go DMV-based queries, aimed at targeting all types of performance problems, but this was one I didn't have. The answer of course is beautifully simple through the magic of DMVs. Do you remember when these didn't exist? Sometimes it's easy to forget :-) The answer, and a good one to have in your troubleshooting code toolbox:

    select request_session_id, count(*) as lock_count
    from sys.dm_tran_locks
    group by request_session_id
    order by lock_count desc


    Thoughts about building a SQL Server 2008 cluster in Hyper-V


    Having built a SQL Server 2005 cluster on virtualized hardware using Virtual Server, I thought it was time to do this on Hyper-V with SQL 2008. Here are some thoughts to get you thinking about trying this. The single most important point here is how much easier the whole process is with Windows / SQL 2008. It just works. If you’re like me and just playing with it in test, then there are tons of simple wizards to speed you through many of the tasks.

    When I did the install on virtual server, I used this article as a starting point for building the actual cluster:

    Using Microsoft Virtual Server 2005 to Create and Configure a Two-Node Microsoft Windows Server 2003 Cluster

    http://www.microsoft.com/technet/prodtechnol/virtualserver/deploy/cvs2005.mspx

    I guess the equivalent article for Windows 2008 Hyper-V is this one

    Hyper-V Step-by-Step Guide: Hyper-V and Failover Clustering

    http://technet.microsoft.com/en-us/library/cc732181.aspx

    which is a good starting point. When I actually did this I did it without following any specific instructions, just to see if I could get it to work, and learn a few things as I went along, troubleshooting anything I came across myself. This is always my preferred way to learn. In my scenario I used iSCSI drives hosted on the DC.

    This in no way creates a scalable solution that you should use in production, it’s just something to knock together quickly to test stuff with, so I can play with a clustered SQL 2008 instance.

    The steps are in the attached document.


    Hypothesis Confirmed - Skiing in Sweden is better than SQL Server Support


    So as per my post on 2nd January, I'm happy to confirm that skiing in Sweden is definitely better than SQL Server support, however much I do enjoy my job at times :-) Here's a picture of me learning to cross country in Klövsjö, near Vemdalen in central Sweden.

    I'm a fairly experienced downhill skier, but what with my love of long distance walking and trekking, cross country seemed the next obvious step, and frankly, it was bloody great.


    The day we arrived it was minus 34, yes thirty four, now that's cold. Fortunately it didn't stay like that all week and good skiing fun was had by all.

    I have nothing SQL Server to say today, apart from the fact that I've installed a Hyper-V based SQL 2008 cluster this week in my spare time, and I might post some details later on. It was an interesting experiment.

    Currently though I'm just dreaming of more skiing and luckily enough I'm going to Sälen for a week in just 4 weeks time.

    Nice!


    Problems executing maintenance plans in SQL Server 2008 - Resolved by SQL 2008 CU#3


    In November I wrote here about problems executing maintenance plans due to Integration Services edition compatibility issues. Last night we released cumulative update #3 for SQL Server 2008 RTM, and it contains a fix / design change for this problem. You can read about it and download the fix here:

    Cumulative update package 3 for SQL Server 2008

    http://support.microsoft.com/kb/960484

    FIX: Error message when you run a maintenance plan in SQL Server 2008: "The SQL Server Execute Package Utility requires Integration Services to be installed"

    http://support.microsoft.com/kb/961126/


    Error 29528. The setup has encountered an unexpected error while Setting Internal Properties


    We've just published a change to KB925976 on how to work around the following setup error:

    Error 29528. The setup has encountered an unexpected error while Setting Internal Properties. The error is: Fatal error during installation.

    We've also just published KB961662 which details how to workaround a problem you can encounter if you had used the original version of 925976.

    Msg 7644, Level 17, State 1, Line 2
    Full-text crawl manager has not been initialized. Any crawl started before the crawl manager was fully initialized will need to be restarted. Please restart SQL Server and retry the command. You should also check the error log to fix any failures that might have caused the crawl manager to fail.

    These problems are related to SID management and the corresponding registry keys. The earlier version of KB925976 advised deleting a registry key containing corrupted SIDs. This workaround was successful in all scenarios at fixing the setup error and allowing setup to complete, but it potentially causes problems with FTS, as noted above. The new version of 925976 provides a more future-proof solution.

    What I also learnt whilst working on these articles was that there is a public Sysinternals tool to get SIDs when you need to do maintenance such as this. You can download it here:

    PsGetSid v1.43 - http://technet.microsoft.com/en-us/sysinternals/bb897417.aspx

    This can be useful for many other scenarios and is worth remembering.


    Sample Application to test database mirroring failover


    Here's the source for a tiny C# app I wrote to test database mirroring failover at the application level, including a sample database mirroring connection string. It assumes that you have already set up a mirroring partnership. It doesn't matter whether the partnership includes a witness or has auto or manual failover.

    I'll let you create your web or winform project as you see fit, and therefore have only included the actual C# code from the button click event, but basically you need a form with 3 controls:

    button1 - a button

    textBox1 - a multiline text box

    textBox2 - a text box

    In addition to this you need to create a stored procedure in your mirrored database called 'dbmtest' and get it to fetch something that can be converted to a string. The code from my procedure is also shown at the end, but it's not exactly rocket science!

    Here's a quick summary of what the event does:

    1. When you click the button, it attempts to connect to the server supplied in the connection string and call a stored procedure to get some data.

    2. If it gets the data, it displays it. If it gets a SQL Exception error it displays this.

    3. It attempts to display, in a separate text box, which server it is connected to, using the SqlConnection.DataSource property.

    4. If you fail over your mirror, you will receive an error when you click the button.

    5. Then re-try by clicking again and you will connect to the other server automatically.

    6. If any other general errors are caught it shows them in the text box.

    Obviously in a real world situation you would make the re-try method somewhat more graceful, but this serves my purposes :-) It's nice sometimes to stick breakpoints in and see what is happening at what stages across the components.

    Here's the code:

    private void button1_Click(object sender, EventArgs e)
            {
                try
                {
                    SqlConnection dbmconn = new SqlConnection();
                    // Note: with trusted_connection=true a user name is ignored, so it is omitted here.
                    dbmconn.ConnectionString = "server=server1;initial catalog=mymirrordb;failover partner=server2;trusted_connection=true";
                    SqlCommand cmd = new SqlCommand();
                    cmd.Connection = dbmconn;
                    cmd.CommandType = System.Data.CommandType.StoredProcedure;
                    cmd.CommandText = "dbmtest";

                    try
                    {
                        SqlDataReader dbreader;
                        dbmconn.Open();
                        dbreader = cmd.ExecuteReader();

                        while (dbreader.Read())
                        {
                            textBox1.Text = "Data fetched = " + dbreader[0].ToString();
                            textBox2.Text = dbmconn.DataSource.ToString();
                        }
                    }
                    catch (SqlException sqlex)
                    {
                        textBox1.Text = sqlex.Message.ToString();
                    }
                    finally
                    {
                        dbmconn.Close();
                    }
                }
                catch (Exception ex)
                {
                    textBox1.Text = ex.Message.ToString();
                }
            }

    Stored Procedure Code

    create procedure dbmtest

    as

    select top 1 blog_data
    from blog
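
    For completeness: the procedure assumes a trivial table like the following exists in the mirrored database. The table shape is implied by the procedure above; the column contents are arbitrary test data.

    ```sql
    -- Minimal demo table backing the dbmtest procedure
    CREATE TABLE blog (blog_data varchar(100));
    INSERT INTO blog (blog_data) VALUES ('some test data');
    ```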

     

  • SQL in Stockholm

    Starting SQLDIAG or PSSDIAG automatically using SQL Alerts

    • 0 Comments

    This is something I was thinking about this week whilst chatting with a customer. I've done it before on occasion, but never really documented it. Consider a problem which occurs extremely intermittently, let's say on average every 3 months for the sake of argument. Based upon your initial analysis you decide you need to collect a SQLDIAG when the problem occurs, but for reasons specific to your environment, you do not want to employ the long term monitoring capabilities of SQLDIAG, such as "continuous mode" (the /L switch) or running as a service (the /R and /U switches to register or unregister). Instead, you want to start the collector only when a condition is detected; let's say, when a blocking chain occurs.

    I've just implemented this in a test system using the following methodology, without installing SQLDIAG as a service:

    1. Configure your SQLDIAG as per your data capture requirements.

    2. Create a new alert, in this scenario I was looking for blocking chains, so I set it with the following properties

    Type - SQL Server Performance Condition Alert
    Object - ServerName:General Statistics
    Counter - Processes Blocked
    Alert if counter - rises above 5
    Options - delay between responses = 60 minutes (this one is optional, but it stops you from creating lots of overlapping sessions and filling up disk space etc.; alternatively you could just disable the alert as soon as it has run once)

    3. For the response to the alert, create a new job

    4. This is a job which will start SQLDIAG from a CmdExec session; name it as such

    5. Create a new step, type = Operating system (cmdexec) - I used the following script

    start /min cmd.exe /C"sqldiag /E +00:20:00 /Q"

    Bear in mind that the SQLDIAG directory is in my PATH variable, so you might alternatively have to type in the full executable location, which, considering its length, is exactly why I have it in my PATH!

    This command starts SQLDIAG in quiet mode (/Q), meaning that it will automatically overwrite previous output in the default output directory, and shuts it down after a time span of 20 minutes (/E +00:20:00). You can change the switches you use (and of course what data you actually collect) using the standard SQLDIAG methodology.

    6. Confirm and save everything and away you go.
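
    For reference, the alert and job above can also be created in T-SQL rather than through the SSMS GUI. A sketch, where the job and alert names are illustrative (note that on a named instance the performance object would be MSSQL$InstanceName:General Statistics rather than SQLServer:General Statistics):

    ```sql
    USE msdb;
    GO
    -- Job with a single CmdExec step that launches SQLDIAG
    EXEC dbo.sp_add_job @job_name = N'Start SQLDIAG on blocking';

    EXEC dbo.sp_add_jobstep
        @job_name  = N'Start SQLDIAG on blocking',
        @step_name = N'Run SQLDIAG',
        @subsystem = N'CmdExec',
        @command   = N'start /min cmd.exe /C"sqldiag /E +00:20:00 /Q"';

    EXEC dbo.sp_add_jobserver @job_name = N'Start SQLDIAG on blocking';

    -- Performance condition alert: fire when Processes Blocked rises above 5,
    -- with a 60 minute (3600 second) delay between responses
    EXEC dbo.sp_add_alert
        @name = N'Blocking chain detected',
        @performance_condition = N'SQLServer:General Statistics|Processes blocked||>|5',
        @delay_between_responses = 3600,
        @job_name = N'Start SQLDIAG on blocking';
    ```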

    This particular example is easy to test as you can just create a simple blocking chain on a test table, and watch the collector start up. You can open the sqldiag.log or the console.log files from the output directory and watch its progress.

    This is not the only or definitive way to do this; for example, I have seen people register the collector as a service and then get their monitoring software to issue a net start command. However, it's one way to do it that stays completely within the boundaries of SQL Server.

    (It also works for PSSDIAG if you're working with us using our internal version.)

     

  • SQL in Stockholm

    Come and meet the Stockholm PSS teams on 17th and 18th March

    • 0 Comments

    Several colleagues from various support teams (not limited to SQL) and I will be at Microsoft TechDays on 17th and 18th March 2009 at Västerås in Sweden. We'll be doing a few mini presentations on the side of the main event, and just hanging out to meet and chat with anyone who wants to come by. If you're going to the event, please drop by and say hi.

     You can read about the event at www.microsoft.se/techdays

     

  • SQL in Stockholm

    Is Skiing in Sweden better than SQL Server support?

    • 0 Comments

    Of course it is! So, no posts for a few weeks right now as I'm hanging at home in Stockholm enjoying some family time and general relaxation. It's been below zero for about 3 weeks now here, and the skies have been stunningly clear and beautiful, leading to innumerable amazing sunrises and sunsets. There's no natural snow locally though, just a lot of frozen lakes.

    Therefore on Sunday we're all heading for Vemdalen in the center of Sweden for some skiing fun. I'd like to tell you that I'll be thinking lots about Katmai CU3 and its upcoming release and other such matters, but in all truth I won't :-)

    Apparently it's gonna be minus 20 when we arrive, so we're getting ready for some real winter fun. Then sometime after the 11th I'm gonna have to start thinking about databases again :-)

  • SQL in Stockholm

    Changes to management of 'TokenAndPermUserStore' in SQL 2005 SP3

    • 0 Comments

    SQL Server 2005 service pack 3 released today and you can download it from here:

    http://www.microsoft.com/downloads/details.aspx?FamilyID=ae7387c3-348c-4faa-8ae5-949fdfbe59c4&displaylang=en

    One of the most interesting and useful bits (apart from the bug fixes rolled up from all the cumulative updates) is the new ability to manage the 'TokenAndPermUserStore' cache with more control. Many people have come across the performance problems associated with this cache, which I won't recount in detail as they are well documented here and here.

    However, one of the great new features in SP3 is the ability to control the size and entry count of this cache yourself, much like the feature introduced in SQL Server 2008 through sp_configure.

    In SQL 2005 SP3 you have to use a trace flag and some registry entries as opposed to sp_configure, but the behavior is much the same, and it should allow people to work around this problem in a less aggressive way than just clearing out the whole cache. The exact details of how to do this are documented here:

    How to customize the quota for the TokenAndPermUserStore cache store in SQL Server 2005 Service Pack 3 - http://support.microsoft.com/kb/959823/
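
    For reference, the rough shape of the two approaches. The trace flag and registry value names below are my reading of KB 959823, so check the KB for the exact registry path for your instance, and treat the 2008 sp_configure value as an illustrative number only:

    ```sql
    -- SQL Server 2005 SP3: enable the customizable quota via trace flag 4621
    -- (per KB 959823), then set the TokenPermQuota registry value under the
    -- instance's MSSQLServer registry key to the desired size.
    DBCC TRACEON (4621, -1);

    -- SQL Server 2008: the equivalent control is exposed through sp_configure.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'access check cache quota', 1024;
    RECONFIGURE;
    ```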

     

     

  • SQL in Stockholm

    SQL Server Database Mirroring error 1443 - Connection Handshake Failed

    • 0 Comments

    Here's another error that I picked from the forums today, which I see a fair number of occurrences of, both in the public forums on the web and internally in MS support as well.

    You attempt to enable database mirroring and you receive the following:

    Database Mirroring login attempt failed with error: 'Connection handshake failed. An OS call failed: (c0000413) 0xc0000413(error not found). State 67.'.

    Error: 1443, Severity: 16, State: 2.

    If you translate error 0xc0000413 you come up with the following:

    STATUS_AUTHENTICATION_FIREWALL_FAILED

    This may seem slightly strange, as you may not have a firewall in between these servers (and even if you do, it is not the problem here; if your firewall were blocking the traffic you would normally encounter error 1418 instead). In truth this error comes from somewhere different. If you are experiencing this error, you are most likely running mirroring with one of the following configurations:

    1. Using domain user logins for the service accounts, but different ones across the mirror partners

    2. Using multiple domains, as per the forum poster that I saw today.

    The chances are that you have not explicitly created logins and granted them CONNECT permissions to the endpoints, as per these 2 articles:

    http://msdn.microsoft.com/en-us/library/ms178029(SQL.90).aspx

    http://msdn.microsoft.com/en-us/library/ms179306(SQL.90).aspx

    If you are running in the same domain, with the same domain account as the SQL Server service account, then this is not required, but if you want to do your authentication in a slightly more complex way, then these are the articles for you.
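
    In short, per those articles: on each partner you create a Windows login for the other partner's service account and grant it CONNECT on the mirroring endpoint. A sketch, where the account and endpoint names are illustrative:

    ```sql
    -- Run on each partner, for the *other* partner's service account
    CREATE LOGIN [OTHERDOMAIN\sqlsvc] FROM WINDOWS;
    GRANT CONNECT ON ENDPOINT::Mirroring TO [OTHERDOMAIN\sqlsvc];
    ```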

    Alternatively of course you could run mirroring using certificates, if for some reason the above solution didn't fit in with your security policies.

    http://technet.microsoft.com/en-us/library/ms191477(SQL.90).aspx

    This is also valid if you want to run your service accounts outside of a domain environment.

    Graham

  • SQL in Stockholm

    Database Mirroring Error 1418 Troubleshooter

    • 3 Comments

    I've been posting in the MSDN and TECHNET mirroring forums in the last few days and I've noticed a lot of occurrences of the 1418 error which I have referred to briefly in this blog before. Therefore I thought I'd post a 1418 specific troubleshooter for reference. The problem we're trying to solve is that you try to start database mirroring and you receive this error:

    Msg 1418, Level 16, State 1, Line 1
    The server network address "TCP://myserver.domain.domain.domain.com:5022" can not be reached or does not exist. Check the network address name and reissue the command.

    The following assumes that:

    1. You are using separate physical servers for the mirror partners. If you're not, you might want to read this about configuring mirroring on local instances and a typical problem you can come across.

    2. You're using domain accounts for SQL Server services

    3. You're using windows authentication and you're not using certificates

    4. You've already read this, or are familiar with the overall principle.

    So here are the things that I would check:

    1. Have you restored a fairly recent full backup to the planned mirror server WITH NORECOVERY? Have you then restored a transaction log, also WITH NORECOVERY? If you haven't, you won't be going anywhere with mirroring. Maybe you should read this first :-)

    2. Have you configured your endpoints correctly? To be specific, this means:

    a) Are they started?

    b) Are they using the same form of authentication?

    c) Are they using the same form of encryption?

    d) Does the service account have permission to use them?

    Even if you think or assume you have done this correctly (including if you've used the wizard to configure them), don't assume they are correct. You've encountered an error and you need to be sure. Script each endpoint and check that all the settings are correct. I've seen it happen too many times to mention: people thought the endpoints were OK (because they "ought" to have been), but it was something simple, like one of them not being started. Using the wizard does not guarantee that everything will always work. Remember, all the wizard does is execute T-SQL, so mistakes or problems are still possible.

    3. Are the ports you have selected valid, and are they open? Are they being used by any other processes? Can you telnet to the port?

    4. Is there a firewall (hardware or software)? Have you created an exception / rule to the firewall? Windows firewall is no exception to this rule. It will block mirroring traffic just as much as any physical firewall will.

    5. Is the FQDN valid? Can it be resolved? Again, can you telnet to it? If you're not using an FQDN but direct IP addresses instead, the same rules still apply: can you telnet to it? If you're not using an FQDN or IP, have you checked the above post for resolving names locally?
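
    A quick way to check the endpoint points above is to compare the endpoint catalog views on both partners. A sketch (the endpoint name is illustrative):

    ```sql
    -- Run on both partners and compare the output
    SELECT dme.name,
           dme.state_desc,                 -- should be STARTED
           dme.role_desc,                  -- PARTNER / WITNESS / ALL
           dme.connection_auth_desc,       -- must match across partners
           dme.encryption_algorithm_desc,  -- must match across partners
           te.port                         -- the port in your TCP:// address
    FROM sys.database_mirroring_endpoints AS dme
    JOIN sys.tcp_endpoints AS te
        ON dme.endpoint_id = te.endpoint_id;

    -- If an endpoint turns out to be stopped, start it:
    ALTER ENDPOINT Mirroring STATE = STARTED;
    ```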

    I hope this helps. It's my experience that these points will solve the majority of 1418 errors. If you have any questions about this topic feel free to post here or in the forums.

    Graham

  • SQL in Stockholm

    More blogs from the SQL Server Support team in Europe

    • 0 Comments

    I just noticed that my colleague João blogs as well. You can find his blog here:

    http://blogs.msdn.com/joaol/default.aspx

     

  • SQL in Stockholm

    Problems executing maintenance plans in SQL Server 2008

    • 3 Comments

    In the RTM build of SQL Server 2008, if you install the tools in any edition, you are able to create maintenance plans. However, depending upon your edition and your exact installation components, when you execute the maintenance plan you created, you may receive the following message:

    Executed as user: <domain>\<user>. Microsoft (R) SQL Server Execute Package Utility  Version 10.0.1600.22 for 32-bit  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.    The SQL Server Execute Package Utility requires Integration Services to be installed by one of these editions of SQL Server 2008: Standard, Enterprise, Developer, or Evaluation. To install Integration Services, run SQL Server Setup and select Integration Services.  The package execution failed.  The step failed.

    This is caused by the fact that you need to have the full SSIS component installed to execute the package that you have created. Many people will not come across this message as they have just installed the SSIS component by default, but you should consider the following scenario:

    If you are running a lower-level edition, such as Web or Workgroup, then you cannot install SSIS as part of that setup and configuration, as it is not available in that edition. Therefore, to be able to run the packages that you have created, you would, as the message says, need to install the component from a licensed edition that supports SSIS.

    If you are running Standard or Enterprise edition, you will need to re-run setup and add the SSIS component. This will allow you to run the maintenance plan.

    This behavior is by design, as SSIS is only supported in the higher editions of SQL Server. However, we understand that for some customers this is not the optimal behavior with regard to maintenance plans (this is a good example of Microsoft Connect feedback), and as such it has been logged and is under review by the development team.
