Backup More Than 1GB per Second Using SQL2008 Backup Compression
Backup compression in SQL Server 2008 performs very well and is easy to use. You can either specify WITH COMPRESSION on the BACKUP command, or enable compression by default through sp_configure by setting the 'backup compression default' option to 1. It is very convenient for backups of the large databases in our customer lab environment.
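For reference, both ways of enabling compression look like this (the database name and backup path are placeholders):

```
-- Enable compression for all backups by default (instance-wide)
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or request compression on an individual backup
BACKUP DATABASE MyBigDB
TO DISK = N'X:\Backup\MyBigDB.bak'
WITH COMPRESSION;
```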
There are a few things you can do to make backup compression perform even better. The hardware we used is a 32-way HP Superdome with a StorageWorks XP12000 disk array.
· Test to find the ideal number of backup devices to use
The first database we backed up is 1.27TB in size with 12 database files (on 12 LUNs). The second database is 2.95TB in size with 76 database files across 45 LUNs. We tested backups using different numbers of backup files (8, 16, 26 and 32), and 16 backup devices demonstrated the best performance in our environment.
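A backup striped across 16 devices can be expressed as a single BACKUP command with one TO DISK clause per file. A sketch (file names and paths are placeholders, and the middle clauses are abbreviated):

```
-- Stripe the compressed backup across 16 backup devices
BACKUP DATABASE MyBigDB
TO DISK = N'X:\Backup\MyBigDB_01.bak',
   DISK = N'X:\Backup\MyBigDB_02.bak',
   -- ... DISK clauses 03 through 15 omitted for brevity ...
   DISK = N'X:\Backup\MyBigDB_16.bak'
WITH COMPRESSION;
```

For best throughput, spread the backup files across as many physical paths as the hardware offers; a single slow device throttles the whole stripe set.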
The following table shows database sizes, backup execution times and the total size of all backup files. The databases are not page- or row-compressed and the backup devices are local.
Database Size | Exec Time | Backup Size | Avg. MB/sec | # Cores
1.27TB        | 20min     | 320GB       | 1059        | 32
2.95TB        | 34min 33s | 482GB       | 1424        | 64
· Backup over the network (8 x 1G Ethernet, 2 files per NIC)
- Jumbo frames help; we used an MTU of 9014.
- Set BUFFERCOUNT = 512 on the BACKUP command.
The backup volumes are pairs of 500GB 7200 RPM SATA drives, each pair configured as RAID 1, connected via 4 controllers in 16 disk shelves.
In our test, the 1.27TB database was backed up over the network in 24.5 minutes (862MB/sec).
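The network backup above can be sketched as a striped BACKUP to UNC paths, two files per NIC, with the larger buffer count. Server, share and file names are placeholders:

```
-- Compressed backup over the network: two files per NIC,
-- 512 I/O buffers instead of the default
BACKUP DATABASE MyBigDB
TO DISK = N'\\backupsrv1\share1\MyBigDB_01.bak',
   DISK = N'\\backupsrv1\share1\MyBigDB_02.bak',
   DISK = N'\\backupsrv2\share1\MyBigDB_03.bak',
   DISK = N'\\backupsrv2\share1\MyBigDB_04.bak'
   -- ... additional files on the remaining NICs/shares ...
WITH COMPRESSION, BUFFERCOUNT = 512;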
Backup is an I/O-intensive operation. Backup compression writes fewer pages to disk than an uncompressed backup, so as long as your system is not bottlenecked on CPU, a compressed backup should execute faster than an uncompressed one.
We hope you enjoy this new SQL Server feature and get even better backup performance.
Lindsey Allen, Thomas Grohser
Comments
Anonymous
March 02, 2008
Can you post the time for an uncompressed backup, with CPU usage for both? That way we can make a meaningful comparison. Thanks
Anonymous
March 02, 2008
The avg CPU% is 48% for the 1.2TB compressed backup (which took 20min). Backing up the same database without compression took 27min at around 30% CPU. The avg CPU is 37% backing up 2.9TB compressed. I didn't try an uncompressed backup of the 2.9TB database. Sanjay Mishra and Mike Ruthruff did an extensive study on backup compression, and a whitepaper will be published very soon.
Anonymous
March 02, 2008
New post at blogs.msdn.com
Anonymous
March 02, 2008
Is restore fast as well? Sreekanth
Anonymous
March 02, 2008
Yes, restoring the compressed backup of the 2.9TB database took 20 minutes.
Anonymous
March 03, 2008
The comment has been removed
Anonymous
March 03, 2008
Hi Colin, we have a whitepaper coming out soon by Sanjay Mishra and Mike Ruthruff. They used an HP DL585 and a midrange disk array closer to what you have in mind.
Anonymous
March 04, 2008
I am a little puzzled by the CPU numbers. A couple of years ago I tested LiteSpeed on the HP rx6600; 32 cores could do 2GB/s at a 3-4x compression ratio on the TPC-H data set at full CPU utilization, so your 48% makes sense. But a backup w/o compression had almost no CPU, <5%; why was it 30% in your test?
Anonymous
March 04, 2008
Today we are configuring for another perf test, so I took advantage of the system idle time to run an uncompressed backup of the 2.9TB database. It took a little over 1 hour (1 hour and 48 seconds). The backup size = 1.88TB. The avg CPU = 3.1% (64 cores); max CPU was 29%.
Anonymous
March 25, 2008
Last year we worked with a customer on a proof of concept that involved populating and running some performance
Anonymous
April 23, 2008
Authors: Mike Ruthruff, Sanjay Mishra. Contributor: Steve Schmidt. Technical Reviewers: Kevin Farlee, Sunil