If you’re like me, you like GUIs, even for SQL Server: they make it easier to do things quickly without having to remember a lot of syntax.  But the GUIs within Management Studio ultimately have to fire off TSQL to perform the changes.  And not knowing exactly what they are generating, short of running a Profiler trace, can be a little unnerving when making changes on large mission-critical databases, especially when the GUI appears to hang.

Based on this, I’ve decided never to implement any important changes through the GUI.  That is probably common sense to most DBAs.  However, I do find the GUI very useful for generating the SQL.  Virtually all of the dialogues within SQL Server Management Studio 2008 have a little Script button in the left-hand corner, so I typically use that to get my SQL and then just cancel out of the dialogue.

This not only lets you see exactly what TSQL will execute when you hit the OK button, but also lets you save the script for later use.  More importantly, it gives you control over the execution: you can cancel it yourself, rather than having the GUI time out after a while on a long command or, worse, simply hang forever with no recourse other than a forced exit from Management Studio.

I have found that in many cases, the SQL that is generated may not actually be what you thought you were asking for when you performed the action via the UI.  An example is changing auto-grow.  I am loading several million rows of data from an Intraday service this weekend into a new database and forgot to set the autogrowth increment.  I tried a couple of times to change it via the UI, only to be terminated by nasty messages about lock timeouts.  I could understand some contention, probably due to the fact that about 6,500 rows per second are being inserted/updated as we speak (these are fully-logged upserts on an indexed, not-yet-partitioned table, not bulk inserts; did I tell you that I love SSDs…), but why would changing the autogrowth increment run into contention?

Based on this, I decided to take a look at what TSQL was being generated.

Alas, here is the code generated by changing JUST the AUTOGROW increment and NOTHING else:

USE [TP_Public]
GO
DBCC SHRINKFILE (N'TP_Pub_IDFG03' , 4500)
GO
USE [TP_Public]
GO
DBCC SHRINKFILE (N'TP_Pub_IDFG04' , 4500)
GO
USE [master]
GO
ALTER DATABASE [TP_Public] MODIFY FILE ( NAME = N'TP_Pub_IDFG03', FILEGROWTH = 1536000KB )
GO
ALTER DATABASE [TP_Public] MODIFY FILE ( NAME = N'TP_Pub_IDFG04', FILEGROWTH = 1536000KB )
GO

Oops, where did that DBCC SHRINKFILE come from?

I just set AUTOGROWTH higher.  I wasn’t trying to shrink my database files…

Apparently the dialogue wants to lock in the file’s current size before applying the new AUTOGROW setting; maybe this is to make sure that if you specify a percentage increase instead of a fixed-size increase, the result is accurate.  I don’t know what the engineers were thinking, but it would explain the lock timeout: DBCC SHRINKFILE has to move data pages around, which is never going to go smoothly against files taking thousands of inserts per second.
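Incidentally, if you just want to check the current size and growth settings without opening the dialogue at all, the sys.database_files catalog view has them.  A quick sketch (note that growth is stored in 8 KB pages unless is_percent_growth is set):

```sql
USE [TP_Public]
GO
-- Current size and auto-growth setting for each file in the database.
-- growth is a page count (8 KB pages) when is_percent_growth = 0,
-- and a percentage when is_percent_growth = 1.
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
```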

So, I just took out the DBCC SHRINKFILEs and ran only the ALTER DATABASE commands, and no worries: the auto-growth increment was fixed quickly, without impacting the load in progress.
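For the record, here is all that was actually needed — the two MODIFY FILE statements from the generated script, minus the shrinks:

```sql
USE [master]
GO
-- Change only the auto-growth increment; this does not touch the data pages.
ALTER DATABASE [TP_Public] MODIFY FILE ( NAME = N'TP_Pub_IDFG03', FILEGROWTH = 1536000KB )
GO
ALTER DATABASE [TP_Public] MODIFY FILE ( NAME = N'TP_Pub_IDFG04', FILEGROWTH = 1536000KB )
GO
```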