This week I was at a customer site measuring the effects that compressing their backups would have in one particular environment running SQL Server 2008 R2 Enterprise.

To start with, we ran a few tests using BACKUP DATABASE & BACKUP LOG T-SQL statements and recorded several metrics for each test run (compression ratio, CPU overhead, and elapsed time).
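A minimal sketch of the kind of test we ran; the database name and backup path are placeholders, not the customer's real ones:

```sql
-- Take a compressed backup and report progress every 10%.
BACKUP DATABASE AdventureWorks
    TO DISK = N'X:\Backups\AdventureWorks_compressed.bak'
    WITH COMPRESSION, STATS = 10;

-- Compression ratio achieved, from the backup history msdb keeps.
SELECT TOP (1)
    backup_size,
    compressed_backup_size,
    backup_size / compressed_backup_size AS compression_ratio
FROM msdb.dbo.backupset
WHERE database_name = N'AdventureWorks'
ORDER BY backup_finish_date DESC;
```

Elapsed time comes straight from the BACKUP statement's output, and CPU overhead we sampled externally while each backup ran.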

Results proved extremely beneficial, so we decided to test it using the backup management software their corporation uses as a standard. The problem is that the version of the backup software my customer is using still doesn’t support the backup compression clauses (WITH COMPRESSION | NO_COMPRESSION) when run against instances and editions of SQL Server that natively support compressing the backup stream.

With that, we decided to enable compression through the server-wide setting “backup compression default”, and scheduled the backup software to take a one-off database backup of every database in this test instance.
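Enabling the server-wide default is a two-step change:

```sql
-- Make compression the default for any backup that doesn't say otherwise.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```

Backups that explicitly specify WITH NO_COMPRESSION still override this default.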

To our surprise, we noticed that the backups had not been compressed at all. The first suspect was that the “backup compression default” setting had been switched back to 0 (false/no) before the backup took place, but checking the currently configured value revealed it was set to compress by default, and there was no evidence in the ERRORLOG showing that anyone had switched this setting back and forth in the recent past. So, why wasn’t it working as I expected?
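Checking what the instance is actually using is straightforward; value_in_use reflects the running configuration, not just a pending change:

```sql
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'backup compression default';
```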

Time to jump into the code and see if there was any special condition under which the default setting wasn’t honored.

Indeed, there were such cases. If you don’t explicitly specify the WITH COMPRESSION | NO_COMPRESSION clause, and the edition of SQL Server where the backup is running supports backup compression, it will use the value set in “backup compression default”, EXCEPT if the backup is being requested via VDI. That means VDI backups are not affected by the default compression setting. Hmmm… And why would that be?

Searching for a reason, I found a post by Sunil and Kevin called “Backup compression and Virtual Device Interface (VDI)”, where they justify the logic in SQL Server’s code that ignores the server-wide default setting.

But in the same piece of code, I found a few handy trace flags, all available in the retail build, which I could enable to make this functionality behave the way we needed. The backup software my customer uses to take their SQL Server backups doesn’t actually depend on the layout of the backup stream to provide any functionality like that explained by Sunil and Kevin.

The one that really suits our need is trace flag 3039, which you must enable globally ( DBCC TRACEON (3039, -1) ). It allows VDI backups to be affected by the default compression setting just as non-VDI backups are. That is, of course, as long as the edition of SQL Server supports backup compression. Otherwise, the server setting will continue to be ignored, despite the presence of the trace flag.
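Enabling it and confirming it took effect looks like this:

```sql
-- Enable trace flag 3039 globally so that VDI-initiated backups honor the
-- 'backup compression default' server setting (edition permitting).
DBCC TRACEON (3039, -1);

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1);
```

Keep in mind a trace flag enabled with DBCC TRACEON does not survive an instance restart; to make it permanent, add -T3039 as a startup parameter for the SQL Server service.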

There is also trace flag 3034, which, when enabled globally, overrides the server default to always force compression, though it doesn’t affect backups where the NO_COMPRESSION clause has been explicitly used. It likewise has no effect if the edition does not support compression.


Finally, trace flag 3035 must be enabled globally if you want to override the server default to always avoid compression. It doesn’t affect those statements where the COMPRESSION clause has been explicitly given. If both 3034 and 3035 are enabled, 3035 takes precedence over 3034.
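Putting the pair together, as described above (again, only meaningful on editions that support backup compression):

```sql
-- 3034: always compress backups that don't explicitly say NO_COMPRESSION.
DBCC TRACEON (3034, -1);

-- 3035: never compress backups that don't explicitly say COMPRESSION.
DBCC TRACEON (3035, -1);

-- With both enabled, 3035 wins: backups without an explicit clause are
-- NOT compressed. Turning 3035 off restores the force-compression behavior.
DBCC TRACEOFF (3035, -1);
```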