
Maria Zakourdaev

Real tortoises keep it slow and steady. How about the backups?


… Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks.
Two days passed and there was no sign of the tortoise.
"You know, she is taking a lot of time," said one of the tortoises.
A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."


Is it too much to ask of a rather expensive 3rd party backup tool that it be way faster than the SQL Server native backup?

Or at least save a respectable amount of storage by producing a really smaller backup file? By "really smaller", I mean a file at least half the size.

After searching the internet to understand what other "SQL people" are using for database backups, I see that most are using one of three tools, the main players in the SQL backup area:

  • LiteSpeed by Quest
  • SQL Backup by Red Gate
  • SQL Safe by Idera

The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered: is it possible that many people have simply been accustomed to using the above tools since SQL 2000 and 2005?
That would be easy to understand, because a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course).
Then you take a 3rd party tool which performs the same backup in 30 minutes and produces a 30GB file. It leaves you speechless, and you run to management to persuade them to buy it because it is definitely worth the price.
In addition to the increased speed and disk space savings, you would also get backup file encryption and virtual restore, features that are still missing from SQL Server.

But if you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you saw back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

Medium sized database

Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database.

It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in a similar overall time as all three other tools; at their maximum compression levels, the 3rd party tools take twice as long.
The backup file size gain is not that impressive, except at the highest compression levels, but the price you pay there is very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server.
Note that I used two types of destination storage, an 11-disk SATA aggregate and a 53-disk FC aggregate, and, obviously, on the faster storage the backup was ready in half the time.

[Images: benchmark results tables for the 300GB database backup]

Looking at the above results: should we spend the money and bother with another layer of complexity and a software middle-man for medium sized databases? I'm definitely not going to.

Very large database

As the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target.


Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the biggest speed gain comes from using one file per CPU core; in the case above, 8 files for a server with 2 quad-core CPUs. The impact of additional files is minimal.
However, SQLsafe doesn't show any speed improvement between 4 files and 8 files.
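As an illustration of the striping idea (the paths and file names below are placeholders, not taken from the benchmark), a native compressed backup striped across 8 files, 4 per volume, looks like this:

```sql
-- Sketch: stripe a native compressed backup across 8 files,
-- 4 on each of two physical volumes, so the backup uses parallel I/O.
BACKUP DATABASE <DBNAME> TO
    DISK = '\\<networkpath1>\par1.bak',
    DISK = '\\<networkpath1>\par2.bak',
    DISK = '\\<networkpath1>\par3.bak',
    DISK = '\\<networkpath1>\par4.bak',
    DISK = '\\<networkpath2>\par5.bak',
    DISK = '\\<networkpath2>\par6.bak',
    DISK = '\\<networkpath2>\par7.bak',
    DISK = '\\<networkpath2>\par8.bak'
WITH FORMAT, COMPRESSION;
```

To restore, all 8 files must be supplied to the RESTORE statement together.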

Of course, with such huge databases, every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn a backup tool into quite a valuable purchase. Still, the backup speed and the high CPU load are variables that should be taken into consideration.

For us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time.
Bottom line, 3rd party backup tool developers: we are waiting for a breakthrough release.

There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

Benchmark server:

  • SQL Server 2008 R2 SP1
  • 2 quad-core CPUs
  • Database location: NetApp FC 15K aggregate, 53 disks
  • Backup destination volumes: two physical NetApp FC 15K aggregates of 53 disks, 4 files on each volume.

Backup statements:

No matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used the extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).

SQL Server native backup:

backup database <DBNAME> to
disk = '\\<networkpath>\par1.bak',
disk = '\\<networkpath>\par2.bak',
disk = '\\<networkpath>\par3.bak'
with format, compression

LiteSpeed:

EXECUTE master.dbo.xp_backup_database
@database = N'<DBName>',
@backupname = N'<DBName> full backup',
@desc = N'Test',
@filename = N'\\<networkpath>\par1.bak',
@filename = N'\\<networkpath>\par2.bak',
@filename = N'\\<networkpath>\par3.bak',
@init = 1

SQL Backup:

EXECUTE master.dbo.sqlbackup
'-SQL "BACKUP DATABASE <DBNAME>
TO DISK = ''\\<networkpath>\par1.sqb'',
DISK = ''\\<networkpath>\par2.sqb'',
DISK = ''\\<networkpath>\par3.sqb''"'

SQL safe:

EXECUTE master.dbo.xp_ss_backup
@database = 'UCMSDB',
@filename = '\\<networkpath>\par1.bak',
@backuptype = 'Full',
@compressionlevel = 4,
@backupfile = '\\<networkpath>\par2.bak',
@backupfile = '\\<networkpath>\par3.bak'

If you still insist on using a 3rd party tool for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting its CPU usage, which will increase the backup operation time even more:

  • Red Gate: use the THREADPRIORITY option (values 0 – 6)
  • LiteSpeed: use @throttle (a percentage, like 70%)
  • SQL safe: the only thing I have found is the @Threads option.
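As a sketch of what the LiteSpeed variant might look like (the exact parameter set should be checked against your LiteSpeed version's documentation; paths and the database name are placeholders):

```sql
-- Hypothetical sketch: a LiteSpeed backup limited to ~70% CPU via @throttle.
-- Throttling trades a longer run time for a lighter load on the server.
EXECUTE master.dbo.xp_backup_database
    @database = N'<DBName>',
    @filename = N'\\<networkpath>\par1.bak',
    @throttle = 70
```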




Published Monday, September 10, 2012 2:54 PM by Maria Zakourdaev




TheSQLGuru said:

I believe some/all 3rd party products offer additional capabilities, such as object-level recovery, that could be important to users.

September 10, 2012 10:42 AM

Maria Zakourdaev said:

Yes, of course, but as I mentioned above, all we were looking for was faster backup speed and smaller files. 2 of the 3 backup tools mentioned above do not include a virtual restore/object level recovery option; you need to purchase an additional tool.

September 10, 2012 10:59 AM

Paul White said:

Very interesting, Maria - and surprising in places too!  Looking forward to reading more on this.

September 10, 2012 11:21 AM

Anonymous said:

Nice post! Really interesting to see how the vendors get much better compression. It would be even more interesting to see stats on deduping consecutive backups after the various compression algorithms have been applied. I wonder whether Microsoft's would show much better ratios in that case?

September 10, 2012 12:18 PM

John Hammond said:

Good job Maria, why only SQL Safe by Idera and no other 3rd party backup tool for Very large database benchmark?

September 10, 2012 2:18 PM

Ola Hallengren said:

For SQL Server native compression it could also be interesting to test with the BUFFERCOUNT option. Setting this option can in some cases improve the backup performance.

September 11, 2012 2:14 PM

Maria Zakourdaev said:

John, I have used SQL safe for the large database backup tests because it has shown the best speed/compression results for the 300GB database.

September 12, 2012 9:33 AM

Maria Zakourdaev said:

Paul, Adam, thanks for sharing your feedback!

Ola, thanks for the link I will check if BUFFERCOUNT will speed up the backups.

September 12, 2012 9:41 AM

Chris Wood said:

I am not sure if products other than Red Gate SQL Backup can do this, but if you split the backups you can have file names that contain the date and/or some other data but are uniquely named. I cannot do this with the native SQL backup because, in the case of a split, I need to define backup devices first.


September 14, 2012 4:39 PM

Ola Hallengren said:

You can also use my backup scripts to stripe the backup across multiple files, with the date and time in the file names. They work with SQL Server native backups, as well as with Quest LiteSpeed, Red Gate SQL Backup and HyperBac, and Idera SQLsafe.

September 15, 2012 6:31 AM

KKline said:

Another option for increasing the speed of any backup, whether a native backup or a 3rd party backup (although I can only speak for Quest LiteSpeed), is striped backups.

With striped backups, you basically create more than one backup device on independent physical storage, thereby splitting the I/O. So if you went from one backup device to two, you would potentially cut the I/O to any one device by half; with three, the I/O would go down to a third. And so on.

It's a good way to speed up the entire backup process when time is of the essence.

Good article!


September 16, 2012 8:28 PM

Maria Zakourdaev said:

Thank you Kevin, in this backup benchmark the backup files were actually striped; I forgot to mention it. Half of the backup files went to one NetApp rack and the second half to a different NetApp. You can see the speed difference between a backup to one file and to 4 files (2 on each physical storage). It wasn't half, but it definitely speeds up the backup process.

September 18, 2012 10:40 AM

Evgeny Alexandrovich said:

I conducted similar research a year ago.

Actually comparing the same products, but backup time was the main goal - LiteSpeed was the winner.

BTW they have the scratch disk option.

September 25, 2012 5:35 AM

Maria Zakourdaev said:

Testing a native SQL Server 4TB database backup with different BUFFERCOUNT settings gained 10 minutes:

BUFFERCOUNT = 50   : 5h 38 min

BUFFERCOUNT = 100  : 5h 30 min

BUFFERCOUNT = 300  : 5h 28 min

BUFFERCOUNT = 400  : 5h 35 min

To get optimal performance, test different BUFFERCOUNT settings, because the best configuration depends on your database files/filegroups and output devices.
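In statement form, this is just one extra option on the native backup (a sketch; <DBNAME> and the path are placeholders):

```sql
-- Native compressed backup with an explicit buffer count;
-- in the timings above, BUFFERCOUNT = 300 gave the best time.
BACKUP DATABASE <DBNAME> TO
    DISK = '\\<networkpath>\par1.bak',
    DISK = '\\<networkpath>\par2.bak'
WITH FORMAT, COMPRESSION, BUFFERCOUNT = 300;
```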

September 27, 2012 7:38 AM

Yaniv Etrogi said:

Hi Maria,

Years ago I performed such a test myself, but since SQL Server introduced the compressed backup I do not find any advantage in working with a 3rd party tool; as a matter of fact, in several environments I administer I was glad to switch to the SQL native backup when the feature was released.

What method have you decided to stick with?

November 20, 2012 8:23 AM

Greg Linwood said:

I'm surprised you didn't include RedGate's Hyperbac which has been the leading 3rd party product for the past few years. Most important with this product is that you don't need to reconfigure any existing backup scripts..

December 26, 2012 3:35 PM

Maria Zakourdaev said:

I have tested Red Gate's SQL Backup, and I presumed that it uses HyperBac for the backup compression.

December 26, 2012 3:45 PM

Ola Hallengren said:

I checked with Red Gate. SQL Backup and SQL HyperBac are not using the same compression algorithms.

January 4, 2013 4:03 PM

David Sumlin said:

It appears that Red Gate HyperBac products have been discontinued.

March 13, 2013 1:30 AM

Chai said:

You are on a SAN, why not do a SAN-based SQL backup?

April 2, 2013 7:17 AM



About Maria Zakourdaev

The shortest word in the English language that contains the letters: abcdef is… feedback! Let me know if I have touched something that is alive in the cosmos.