New update on minimal logging for SQL Server 2008


Based on customer feedback, we have decided to make minimal logging into a btree available to SQL Server 2008 customers. All you need to do is enable TF-610 in the RTM bits. As a cautionary note, we have seen some slowdown when loading data into a btree under TF-610 on a slower I/O subsystem, so please make sure you test it first.
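For reference, the trace flag can be turned on per session or server-wide. A quick sketch (test on a non-production system first):

```sql
-- Enable TF-610 for the current session only:
DBCC TRACEON (610)

-- Or enable it globally for all sessions:
-- DBCC TRACEON (610, -1)

-- Verify whether the trace flag is active:
DBCC TRACESTATUS (610)
```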

Here are the two performance issues you need to be aware of. I will walk you through each of them with an example:

1.     Double writes

2.     Log space reservation.

Example:

-- create the source table. This table stores 2 rows per page

create table t_source (c1 int, c2 int, c3 char (3000))

go

 

-- I insert 10000 rows. With 2 rows/page, I will have 5000 pages so

-- the size of the table is 40MB

declare @i int

select @i = 1

while (@i <= 10000)

begin

      insert into t_source values (@i, @i+10000, 'hello')

      select @i= @i + 1

end

 

 

-- create the target table with a clustered index

create table t_ci (c1 int,  c2 int, C3 CHAR(3000))

go

 

create clustered index ci on t_ci(c1)

go

 

 

-- I will now insert all the rows from the source table into the target

-- table and measure (a) logspace used (b)logspace reserved (c) number of writes

-- both with TF-610 ON and OFF.

 

 

-- Here is the INSERT statement

 

begin tran

insert into t_ci select * from t_source order by c1

rollback
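To see where the numbers below come from, you can query sys.dm_tran_database_transactions while the transaction is still open, before the rollback (this is the same DMV shown in the comments; the exact byte counts will vary by system):

```sql
begin tran
insert into t_ci select * from t_source order by c1

-- While the transaction is open, check log bytes used and reserved:
select database_transaction_log_bytes_used,
       database_transaction_log_bytes_reserved
from sys.dm_tran_database_transactions
where database_id = DB_ID()

rollback
```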

 

 

 

             # of page writes (approx)   Log space used (approx)   Log space reserved (approx)
TF-610 OFF   5000                        34MB                      8MB
TF-610 ON    10000                       3.5MB                     82MB

 

What is interesting to note is that without TF-610, we write each data page only once and it is fully logged. With TF-610, we write each data page twice (i.e., the double write) but we do get minimal logging (i.e., we wrote only 3.5MB of log). However, the log space reserved balloons to 82MB. This space is reserved to allow the transaction to roll back, and the reservation assumes full logging. You might expect it to be at most 34MB; it is larger because the log reservation algorithm is conservative.
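As a rough way to observe the double-write effect, you can snapshot the data file's write counters around the insert with sys.dm_io_virtual_file_stats. This is only a sketch: the counts are approximate because the lazy writer and checkpoints also write pages, and it assumes file_id 1 is the database's only data file.

```sql
declare @writes_before bigint
select @writes_before = num_of_writes
from sys.dm_io_virtual_file_stats(DB_ID(), 1)   -- file_id 1 = primary data file

begin tran
insert into t_ci select * from t_source order by c1
rollback

checkpoint   -- flush remaining dirty pages so their writes are counted

select num_of_writes - @writes_before as page_writes_approx
from sys.dm_io_virtual_file_stats(DB_ID(), 1)
```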

For details, please refer to my blog entries

http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-3.aspx

http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx

Thanks

  • Guys, Check this update on minimal logging into a btree http://blogs.msdn.com/sqlserverstorageengine

  • New information about the Minimal Logging feature available with SQL Server 2008 using the Trace Flag

  • Could you please share how you measured the "Logspace reserved" and "page writes"?

  • I used the following command. I was running only one transaction, but you can filter:

    select database_transaction_log_bytes_used, database_transaction_log_bytes_reserved

    from sys.dm_tran_database_transactions where database_id = DB_ID()

    Here is the data (without TF-610)

    database_transaction_log_bytes_used database_transaction_log_bytes_reserved

    ----------------------------------- ---------------------------------------

    34270248                            8098764

    (with TF-610). You can see that the log reserved is pretty high

    database_transaction_log_bytes_used database_transaction_log_bytes_reserved

    ----------------------------------- ---------------------------------------

    3570116                             82079158

  • Hi

    This based on this article msdn.microsoft.com/.../dd425070(SQL.100).aspx

    I am having an issue and was wondering if you had any thoughts.

    General information

    Trace Flag 610 is on, the database is in simple recovery, and the server is SQL Server 2008 SP1.

    What I am doing

    I have a table with a clustered index, and it is empty. I do my first batch of inserts into the table; minimal logging works and the data looks good. I run my second batch of inserts, and minimal logging does not seem to work. Just so we are clear, the clustered index we are using is very simple for this test: four values A, B, C, D, 10 million rows each, and we insert the data in the order of the clustered key, 10M 'A' then 10M 'B' and so on.

    Here is a sample of the insert; any thoughts would be helpful.

    Thanks

    Scott

    CREATE TABLE OutPutTable
    (
        IDRow int NULL
        ,ColInt int NULL
        ,ExpRow Char(1) NULL
        ,ColVarchar varchar(20) NULL
        ,Colchar char(2) NULL
        ,ColCSV varchar(80) NULL
        ,ColMoney money NULL
        ,ColNumeric numeric(16,4) NULL
        ,ColDate datetime NULL
        ,AutoId int IDENTITY(1,1) NOT NULL
    )

    CREATE CLUSTERED INDEX Clust_IDX ON OutPutTable (ExpRow) WITH (FillFactor = 100)
    GO

    DBCC TRACEON (610)
    GO

    --First Batch
    INSERT INTO OutPutTable WITH (Tablockx)
        (IDRow, ColInt, ExpRow, ColVarchar, Colchar, ColCSV, ColMoney, ColNumeric, ColDate)
    SELECT IDRow, ColInt, ExpRow, ColVarchar, Colchar, ColCSV, ColMoney, ColNumeric, ColDate
    FROM SAMPLEDATA
    WHERE ExpRow = 'A'
    GO

    DBCC TRACEOFF (610)
    GO

    DBCC TRACEON (610)
    GO

    --Second Batch
    INSERT INTO OutPutTable WITH (Tablockx)
        (IDRow, ColInt, ExpRow, ColVarchar, Colchar, ColCSV, ColMoney, ColNumeric, ColDate)
    SELECT IDRow, ColInt, ExpRow, ColVarchar, Colchar, ColCSV, ColMoney, ColNumeric, ColDate
    FROM SAMPLEDATA
    WHERE ExpRow = 'B'
    GO

    DBCC TRACEOFF (610)
    GO

  • Minimal logging happens when the optimizer chooses a SORT before inserting rows into the target index. Try the following:

    (1) look at the query plan

    (2) increase the batch size to 20000 rows and look at the query plan

    thanks

    Sunil
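One way to act on suggestion (1) without running the 20-million-row insert is to look at the estimated plan and check for a Sort operator feeding the clustered index insert. A sketch using Scott's table and column names from above (SET SHOWPLAN_TEXT returns the plan without executing the statement):

```sql
set showplan_text on
go

insert into OutPutTable with (tablockx)
    (IDRow, ColInt, ExpRow, ColVarchar, Colchar, ColCSV, ColMoney, ColNumeric, ColDate)
select IDRow, ColInt, ExpRow, ColVarchar, Colchar, ColCSV, ColMoney, ColNumeric, ColDate
from SAMPLEDATA
where ExpRow = 'B'
go

set showplan_text off
go
```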

  • I also did a test; even when the plan has a SORT before the insert, it doesn't work

  • ==> (2) increase the batch size to 20000 rows and look at the query plan

    How do I set the batch size to 20000? The SQL is like

    insert into tab select * from tab_source

    I tried to use "set batchsize 20000", but it failed
