Blogs from Suhas

Dirtying my hands in SQL Server

Shrinking the Transaction Log files in SQL Server



Hi Friends!

This blog post mainly addresses concerns about growing log file space, and how to go about shrinking the file. I am adding preliminary content here for now, but I will update this content as and when I get time and new material to add.


Question: What is a log file, and why is it required?

Answer: A log file is a separate file that stores the various transactions hitting the database. Each database has its own log file (one or more), and these log files are required to keep a history of the transactions that hit the database, so that the database can be recovered to a state of both physical and logical consistency in case of failure.

When a transaction is fired against the database, the transaction is first recorded in the log file, and only then are the data pages that need to be modified brought into memory and changed. At certain intervals of time, a background process known as CHECKPOINT runs, which flushes the dirty pages (modified pages in memory) to disk.
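As a quick illustration, a checkpoint can also be issued manually from a query window; the database name below is a placeholder, not from the original post:

```sql
-- Hypothetical database name; substitute your own.
USE MyUserDatabase;
GO

-- Force a checkpoint: flushes dirty pages to disk and, in the
-- simple recovery model, allows inactive VLFs to be truncated.
CHECKPOINT;
GO
```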


Question: How is a log file accessed?

Answer: A log file is always accessed sequentially. To be able to properly utilize log space, a log file is divided into logical segments called VLFs, or Virtual Log Files. Each log file is divided into multiple VLFs. Now, when a database is new, the log file is, for all practical purposes, empty, and so are all its VLFs. Writing starts in the first VLF; when this VLF is full, writing continues into the second, and so on until the last VLF is reached. Now, when the last VLF is full, before growing the log file, the log writer determines whether any of the preceding VLFs have been truncated (marked as inactive). If it finds any such VLF, writing restarts in the first inactive VLF in the chain.

For example, if a log file has 10 VLFs, then after the 10th VLF is full, the log writer will search for any inactive VLF in the chain. Let us assume that it finds VLFs 3, 5, 6 and 7 inactive; it will then start writing into VLF 3.

If no inactive VLF is found, the log writer triggers a log file growth, thereby creating more VLFs at the expense of disk space.
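To see how the log file is configured to grow, you can query the sys.database_files catalog view from within the database in question; this is just a sketch:

```sql
-- Run in the context of the database you are interested in.
-- type_desc = 'LOG' filters to the transaction log file(s).
SELECT name,
       size * 8 / 1024 AS size_mb,   -- size is reported in 8 KB pages
       growth,
       is_percent_growth             -- 1 = growth is a percentage
FROM   sys.database_files
WHERE  type_desc = 'LOG';
```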


Question: What does it mean for a VLF to be truncated?

Answer: Most people assume truncation to be analogous to shrinking the file; however, in SQL Server terminology, truncation and shrinking are not the same. Truncation merely means marking a VLF as inactive, whereas shrinking the log file means reducing the number of VLFs in the file so as to reclaim disk space. A VLF is truncated only under certain conditions.

If a database is in the simple recovery model, a VLF will be truncated only when all of the conditions below are fulfilled:

  • All transactions that began in the specific VLF have been either committed or rolled back.
  • All transactions that have at least one record written into the specified VLF have been either committed or rolled back. For example, if a transaction started in VLF 5 and has continued through VLF 6 into VLF 7, it has to be either committed or rolled back before VLF 6 can be truncated.
  • A checkpoint has run on the database.
  • If the database is involved in replication, all transactions that have at least one record written into the specified VLF have been replicated.

If a database is in the bulk-logged or full recovery model, a VLF will be truncated only when all of the conditions below are fulfilled:

  • All transactions that began in the specific VLF have been either committed or rolled back.
  • All transactions that have at least one record written into the specified VLF have been either committed or rolled back. For example, if a transaction started in VLF 5 and has continued through VLF 6 into VLF 7, it has to be either committed or rolled back before VLF 6 can be truncated.
  • A checkpoint has run on the database.
  • The VLF in question has been backed up by taking a log backup after the checkpoint has run.
  • If the database is involved in replication, all transactions that have at least one record written into the specified VLF have been replicated.


Question: How does shrinking of a log file happen?

Answer: A log file is shrunk when you issue the following command to the SQL Server:

            DBCC SHRINKFILE ('logical file name', targetsize)

When this command is issued, the log file is shrunk from the end of the file back to the last VLF that is in use. Hence, if the last VLF is active, the log file cannot be shrunk.
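As a concrete sketch, with placeholder database and logical file names (the logical name of your log file can be looked up in sys.database_files):

```sql
-- Hypothetical names; find your log file's logical name first:
-- SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
USE MyUserDatabase;
GO

-- Shrink the log file to a target size of 512 MB.
-- Only inactive VLFs at the end of the file can actually be removed.
DBCC SHRINKFILE ('MyUserDatabase_log', 512);
GO
```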


Question: How do I know if a log file can be shrunk?

Answer: To understand whether the log file can be shrunk, you will need to fire the following commands and interpret their output. The first command that needs to be fired is:

            DBCC SQLPERF(logspace)

This will tell us what percentage of the log file is actually in use. The lower the percentage, the more the file can be shrunk.
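As an aside (not part of the original post), the sys.databases catalog view also reports why the log is not currently being truncated, which is handy before attempting a shrink:

```sql
-- log_reuse_wait_desc explains what is preventing log truncation,
-- e.g. NOTHING, CHECKPOINT, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'MyUserDatabase';   -- hypothetical database name
```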

Now, run the following command:

            DBCC LOGINFO(DatabaseName)

The output of this command gives you the status of each VLF in the log file. A status of 0 means that the VLF has already been truncated and can be overwritten. A status of 2 means that the VLF is still active (not yet truncated) and cannot be overwritten. Look at the last VLF: if its status is 2, the file cannot be shrunk just yet. However, if it is 0, the file can be shrunk back to the last VLF with a status of 2.
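On newer versions (SQL Server 2016 SP2 and later, to the best of my knowledge), the documented DMV sys.dm_db_log_info exposes the same per-VLF information without resorting to DBCC:

```sql
-- DB_ID() resolves the current database; vlf_status mirrors the
-- DBCC LOGINFO status column (0 = inactive, 2 = active).
SELECT vlf_sequence_number, vlf_size_mb, vlf_status, vlf_active
FROM   sys.dm_db_log_info(DB_ID());
```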


Question: Ok, so how do I proceed in case the last VLF has a status of 2?

Answer: First, check the recovery model of the database; the General tab of the database properties dialog box shows it. If the database is in the simple recovery model, open a new query window, change the database context to the user database whose log needs to be truncated, and run the command CHECKPOINT. See if that helps in truncating the last VLF; if not, you can try again after an interval of time. Also, if the database is involved in transactional replication, run the Log Reader Agent and keep it running for some time, so that the pending commands can be replicated.

However, if the database is in the bulk-logged or full recovery model, then apart from running the CHECKPOINT command, take a log backup. See if that helps in truncating the last VLF; if not, you may have to wait for some time before trying again. Also, if the database is involved in transactional replication, run the Log Reader Agent and keep it running for some time, so that the pending commands can be replicated.
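Putting the steps together for a database in the full recovery model, as a sketch (all names and the backup path below are placeholders, not from the original post):

```sql
USE MyUserDatabase;
GO

-- Flush dirty pages so log records up to the checkpoint can be freed.
CHECKPOINT;
GO

-- Back up the log so inactive VLFs can be marked as truncated
-- (required in the full and bulk-logged recovery models).
BACKUP LOG MyUserDatabase TO DISK = N'C:\Backups\MyUserDatabase_log.trn';
GO

-- Now attempt the shrink; repeat the backup + shrink cycle if the
-- active portion of the log is still at the end of the file.
DBCC SHRINKFILE ('MyUserDatabase_log', 512);
GO
```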


Disclaimer: All information provided here is my personal opinion and is neither verified nor approved by Microsoft before it is published. All information, and code samples, if any, is provided "AS IS" with no warranties and confers no rights.

Comments

  • "So as to be able to properly utilize log space, a log file is divided into logical segments called VLFs or Virtual Log Files"

    A question from someone who is not really qualified to ask it...

    Could you explain why VLFs are beneficial?  Why not allow the logical log to rotate through the whole physical log with a pointer to the next free space, rather than divide the physical log into large chunks?  Log backup would record the last LSN backed up, and you could use that and the active log pointers to figure out what is "inactive" and can be reused for new log records.

    Also, I have read in the documentation (Transaction Log Physical Architecture) that truncation of the log marks any VLFs as inactive "whose [log] records all appear in front of the ... MinLSN".  Shouldn't that continue on and say "or after the last-written log record"?  Or does "in front of" include the tail-end VLFs?  Please excuse me, I am having a hard time "wrapping my mind around" the logical log.

    Why doesn't a helpful post like this receive more comments?  I have not seen any more helpful description of shrinking a transaction log.  Thanks!

  • Hi David,

    Thanks a lot for your comments. Really appreciate it.

    To answer your first question, VLFs have been introduced to reduce the overall cost of finding the next free block. If we made the VLFs small enough to accommodate just one log record (or eradicated the concept of VLFs altogether), we would have to examine each log record to check whether it is free and can be overwritten. This would increase the overall cost of finding the next free record.

    Consider the following scenario:

    My log file is small and can hold 10 records. I have the following:

    Record 1 => Tran 1 (Start)

    Record 2 => Tran 1 (Insert)

    Record 3 => Tran 2 (Start)

    Record 4 => Tran 1 (Insert)

    Record 5 => Tran 2 (Delete)

    Record 6 => Tran 2 (Delete)

    Record 7 => Tran 1(Update)

    Record 8 => Tran 2 (Commit)

    Now, when I start Tran 3, Record 9 will be utilized [Record 9 => Tran 3 (Start)] and the next record (say, insert for Tran 3) will have to go to Record 10 [Record 10 => Tran 3 (Insert)].  Where will the next record go? To find that out, we have to scan the Log File from the beginning to find the next free record. As you can understand, each time I hit the end of the file, I have to start scanning from the beginning. It’s easy when the log file is small, but when the log file is huge, this can create severe performance issues.

    Grouping the records into logical segments (VLFs) reduces the overall cost of finding the next free block. The larger the logical segment, the fewer scans we have to do, and hence the lower the cost. However, at the same time, the larger the segment, the more space we waste. So, we have to strike a balance between the cost and the space wasted. Hence, the concept of VLFs.

    Your second question is a bit tricky and will need a lengthy explanation. If it's OK with you, I would prefer to write a whole blog post on this topic. I will have some time this weekend, so by Monday you should find a detailed post on this.

    Thanks,

    Suhas.

  • Thanks for a great write up on the log; we have a very large logfile (14GB or so), for a 40Gb database. And per above ( sqlperf) it's less than 1% utilized! So, I can't wait to shrink it; but, I'm a little afraid; how many database resources does shrinking use? Is it quick, or intensive? Should I do it after hours? Will it take seconds, or minutes, or hours?

    Thanks!

  • Hi Chris,

    Thanks for the comments.

    Shrinking a Transaction Log file generally does not need any resources and is very quick (generally does not take more than a few seconds). Please check DBCC LOGINFO(DatabaseName) to know how much of the log can be shrunk.

    Also, DBCC SHRINKFILE can be blocked by other users / applications trying to access the database. Make sure that there are no users or applications connected to the database before executing the command.

    Hope this helps.

    Thanks

    Suhas
