In the beginning, everyone tries to tune the database by creating additional database files, spreading the data across filegroups, adding additional log files to the database…
But wait a minute: is there a performance benefit from having more than one log file in a database?
Not really. SQL Server writes to the log synchronously and serially, and only ever uses one log file at a time. So rather than adding more log files, make sure the one you have is sized appropriately. Frequent log file growth is generally bad: it increases the number of administrative units inside the log called VLFs (virtual log files).
There is an excellent post about this on Kimberly's blog (explained way better than I could), so I won't bother you with the details here: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
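If you want a quick look at how many VLFs your own log currently has, DBCC LOGINFO returns one row per VLF:

```sql
-- One row per VLF in the current database's log.
DBCC LOGINFO;

-- On SQL Server 2016 SP2 and later there is also a DMF for this:
-- SELECT COUNT(*) AS vlf_count FROM sys.dm_db_log_info(DB_ID());
```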
So back to the original problem, which was brought up by Shankar Ramakrishnan on our internal alias list: you cannot remove a log file that is currently in use. You can see this if you correlate the information from DBCC LOGINFO('YourDatabase') with the sys.database_files view. It will show something like the following:
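One way to do that correlation is to capture the DBCC LOGINFO output into a temp table and join it to sys.database_files. A sketch, assuming a database named YourDatabase (note that the column layout of DBCC LOGINFO is undocumented and version-dependent; the RecoveryUnitId column exists from SQL Server 2012 onwards, so drop it for older versions):

```sql
USE YourDatabase;

CREATE TABLE #loginfo
(
    RecoveryUnitId int,        -- SQL Server 2012 and later only
    FileId         int,
    FileSize       bigint,
    StartOffset    bigint,
    FSeqNo         int,
    [Status]       int,        -- 2 = VLF is part of the active log
    Parity         int,
    CreateLSN      numeric(38)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- One row per VLF, tagged with the physical file it lives in.
SELECT df.file_id, df.name, df.physical_name,
       li.FSeqNo, li.[Status]
FROM   #loginfo li
JOIN   sys.database_files df ON df.file_id = li.FileId
ORDER  BY df.file_id, li.StartOffset;

DROP TABLE #loginfo;
```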
There are two physical files (2 and 4) and 4 VLFs per file. One VLF from file 2 is currently in use, as you can see in the status column. In that case, file id 4 can be removed without any problems, as it is neither currently in use nor does it contain transactions with a log backup pending. (In fact, as you can see from the FSeqNo column, this is a new file that has never been used for log activity.)
If you want to remove, e.g., file 2 (assuming it is not the primary log file), you will see that you cannot do so at the moment, because SQL Server is using it for the active portion of the log. You will get the following error message:
Msg 5042, Level 16, State 2, Line 6
The file 'additional_log' cannot be removed because it is not empty.
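The statement that produces this error is a plain REMOVE FILE; a minimal sketch, assuming the secondary log file is named additional_log:

```sql
-- Fails with Msg 5042 as long as the file still holds
-- part of the active log.
ALTER DATABASE YourDatabase
REMOVE FILE additional_log;
```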
What you basically have to do in order to get the file deleted is to push the log a bit further, so that all VLFs of that specific file id are filled and the active portion of the log jumps over to another file (id).
A bit tedious, right? You can either wait until the log fills up naturally through regular transactions, or push it along yourself. The process would be: generate enough log activity to fill the remaining VLFs of the file, back up the log so that inactive VLFs become reusable, check with DBCC LOGINFO whether the active VLF has moved to another file, and then retry the REMOVE FILE.
Tedious again? Yes, and that is why I wrote a small procedure that does this for you. Make sure you have backups in place before you execute it. The reason is that it backs up the log to the NUL device, which breaks the log backup chain: if normal activity is going on in your database during that period, you might not be able to do a point-in-time recovery for it.
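The procedure itself isn't reproduced here, but the idea can be sketched as a simple loop (all names are illustrative, and the same warning applies: BACKUP LOG … TO DISK = 'NUL' throws the backup away and breaks the log backup chain, so take a proper full or log backup before and after):

```sql
-- Hypothetical sketch: push the log forward and discard log backups
-- until the target file no longer holds active VLFs, then remove it.
DECLARE @tries int = 0;

IF OBJECT_ID('dbo.LogFiller') IS NULL
    CREATE TABLE dbo.LogFiller (c char(8000) DEFAULT 'x');

WHILE @tries < 50
BEGIN
    -- generate some log records
    INSERT INTO dbo.LogFiller DEFAULT VALUES;

    -- discard the log backup; this marks inactive VLFs as reusable
    BACKUP LOG YourDatabase TO DISK = 'NUL';

    BEGIN TRY
        ALTER DATABASE YourDatabase REMOVE FILE additional_log;
        BREAK;  -- success: the file held no active VLFs anymore
    END TRY
    BEGIN CATCH
        SET @tries += 1;  -- still in use, keep pushing the log
    END CATCH
END

DROP TABLE dbo.LogFiller;
```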
If you have any questions, feel free to contact me.