Backup compression in SQL Server 2008 performs very well and is easy to use. You can either specify WITH COMPRESSION in the BACKUP command or enable compression for all backups by setting the sp_configure option 'backup compression default' to 1. It has been very convenient for backing up the large databases in our customer lab environment.
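For example (the database name and backup path below are just placeholders):

-- Enable compression for all backups by default on this instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or request compression explicitly for a single backup
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backup\AdventureWorks.bak'
WITH COMPRESSION, STATS = 10;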
Here are a few things you can do to make backup compression perform even better. The hardware we used is a 32-way HP Superdome with a StorageWorks XP12000 disk array.
· Test to find the ideal number of backup devices to use
The first database we backed up is 1.27TB in size with 12 database files (on 12 LUNs). The second database is 2.95TB in size with 76 database files across 45 LUNs. We tested backups with different numbers of backup devices (8, 16, 26 and 32), and 16 backup devices delivered the best performance in our environment (a sketch of the striped backup command follows below).
The following table shows the database sizes, backup execution times and the total size of all backup files. The databases are not page or row compressed, and the backup devices are local.
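Striping a compressed backup across multiple devices is simply a matter of listing several DISK targets in one BACKUP statement. A minimal sketch with a made-up database name and paths, showing 4 of the 16 devices we used:

-- Stripe a compressed backup across devices on different LUNs
-- (paths are placeholders; the test used 16 such devices)
BACKUP DATABASE BigDB
TO DISK = N'H:\Backup\BigDB_01.bak',
   DISK = N'I:\Backup\BigDB_02.bak',
   DISK = N'J:\Backup\BigDB_03.bak',
   DISK = N'K:\Backup\BigDB_04.bak'
WITH COMPRESSION, STATS = 5;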
· Backup over the network (8 x 1G Ethernet, 2 files per NIC)
The backup volumes are pairs of 500GB 7200 RPM SATA drives, each pair configured as RAID 1, connected via 4 controllers across 16 disk shelves.
In our test, the 1.27TB database was backed up over the network in 24.5 minutes (862MB/sec).
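The network case uses the same command with UNC paths as the targets. A sketch with made-up server and share names, showing 2 of the 16 files (2 per NIC):

-- Compressed backup written to network shares
BACKUP DATABASE BigDB
TO DISK = N'\\backupsrv1\sqlbak\BigDB_01.bak',
   DISK = N'\\backupsrv1\sqlbak\BigDB_02.bak'
WITH COMPRESSION, STATS = 5;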
Backup is an I/O-intensive operation. Backup compression writes fewer pages to disk than an uncompressed backup, so as long as your system is not bottlenecked on CPU, a compressed backup should run faster than an uncompressed one.
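One simple way to see how much write I/O compression saves you is to compare backup_size and compressed_backup_size in the msdb backup history, for example:

-- Compare raw vs. compressed sizes of recent full backups
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_size / 1048576.0            AS backup_size_mb,
       bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'   -- full database backups
ORDER BY bs.backup_start_date DESC;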
We hope you enjoy this new SQL Server feature and get even better backup performance from it.
Lindsey Allen, Thomas Grohser
Can you post the time for an uncompressed backup, with CPU usage for both? That way we can make a meaningful comparison. Thanks
The avg CPU is 48% for the 1.2TB compressed backup (it took 20 min). Backing up the same database without compression took 27 min, with CPU around 30%. The avg CPU is 37% backing up the 2.9TB database compressed. I didn't try an uncompressed backup of the 2.9TB database. Sanjay Mishra and Mike Ruthruff did an extensive study on backup compression, and a whitepaper will be published very soon.
Is restore fast as well?
Yes, restoring the compressed backup of the 2.9TB database took 20 minutes.
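No special option is needed on the restore side; the engine detects that the backup is compressed. Something like the following (names and paths made up):

-- Restore from a striped, compressed backup; decompression is automatic
RESTORE DATABASE BigDB
FROM DISK = N'H:\Backup\BigDB_01.bak',
     DISK = N'I:\Backup\BigDB_02.bak'
WITH REPLACE, STATS = 5;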
I'm impressed, naturally, but can we have some figures for mere mortals please? My average box is likely to be 4 sockets with either dual or quad cores. It sometimes takes a real struggle to get 1Gb Ethernet, let alone 10Gb. How about a reasonable number of cores over 4Gb fibre to a sensible array, say 14 x 15k spindles in RAID 10 .. that would give me a much better reference point.
That doesn't mean I don't think wow! and well done, it's just I'm unlikely to run into that type of hardware.
Hi Colin, we have a whitepaper coming out soon by Sanjay Mishra and Mike Ruthruff. They used an HP DL585 and a midrange disk array closer to what you have in mind.
I am a little puzzled by the CPU numbers. A couple of years ago I tested LiteSpeed on the HP rx6600; 32 cores could do 2GB/s at a 3-4 compression ratio on the TPC-H data set, at full CPU utilization, so your 48% makes sense. But a backup w/o compression had almost no CPU, <5%, so why was it 30% in your test?
Today we are configuring for another perf test, so I took advantage of the system idle time to run an uncompressed backup of the 2.9TB database. It took a little over 1 hour (1 hour and 48 seconds). The backup size = 1.88TB. The avg CPU = 3.1% (64 cores); max CPU was 29%.