THE SQL Server Blog Spot on the Web


Linchi Shea

Checking out SQL Server via empirical data points

SQL Server Backup I/O Performance

I had always thought that:

  • SQL Server backup reads/writes sequentially, and
  • SQL Server backup could fully utilize the throughput of the I/O path

But I'm no longer so sure. Recently, I was doing some benchmark work on two I/O paths, and had the following numbers from pure I/O tests with sqlio.exe:

  • Drive E: ~200MB/sec for both random and sequential reads (block size >= 128K), and
  • Drive M: ~120MB/sec for sequential writes and ~75MB/sec for random writes (block size >= 128K)
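For reference, baseline numbers like these are what sqlio.exe produces; the exact flags below (test file paths, thread count, queue depth, duration) are illustrative assumptions, not necessarily the parameters used in these tests:

```shell
REM Sequential reads, 128KB blocks, 4 threads, 8 outstanding I/Os, 60 seconds, no OS buffering
sqlio -kR -fsequential -b128 -t4 -o8 -s60 -BN E:\testfile.dat

REM Random writes, 128KB blocks, same thread count and queue depth
sqlio -kW -frandom -b128 -t4 -o8 -s60 -BN M:\testfile.dat
```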

I had a database with its single data file on drive E, and for testing I backed up the database several times to drive M. These backups caused read I/Os on drive E and write I/Os on drive M. The database backup file was about 16GB. The database was on a 32-bit SQL2005 SP running on 32-bit Win2K3 SP1.

The following numbers--obtained from perfmon and the message from BACKUP DATABASE--show the backup performance behavior on drive E and drive M:

  • Backup I/O block size was 1MB (for reading from E and writing to M)
  • Backup reads were consistently at ~70MB/sec from E and writes consistently at ~70MB/sec to M when writing to a single backup file

Now, 70MB/sec was nowhere near what drive E could do for large reads--sequential or random--and nowhere near what drive M could do for large sequential writes. Granted, SQL Server might not be able to push the I/O subsystem hard enough with a single backup device, and striping the backup across multiple backup devices did improve the backup throughput. But even with striping, the backup throughput peaked at about 100MB/sec--much better than 70MB/sec, but still short of the 120MB/sec that drive M could sustain.
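Striping a backup simply means listing multiple TO DISK targets in one BACKUP DATABASE statement; SQL Server then writes to all of the files in parallel. The database name and paths below are placeholders, not the ones used in these tests:

```sql
-- Stripe the backup across four device files on the target drive.
-- Database name and file paths are illustrative placeholders.
BACKUP DATABASE MyDB
TO DISK = 'M:\backup\MyDB_1.bak',
   DISK = 'M:\backup\MyDB_2.bak',
   DISK = 'M:\backup\MyDB_3.bak',
   DISK = 'M:\backup\MyDB_4.bak'
WITH INIT;
```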

In the comment section of this blog post, Tony mentioned the impact of using the BLOCKSIZE option in the BACKUP DATABASE statement. I redid the tests with BLOCKSIZE=65536 as Tony suggested. And since I can't insert a chart in the comment section, I've modified the blog main text to include the chart and my observations here.

In this particular case, setting BLOCKSIZE to 65536 did seem to improve the backup throughput slightly overall, except when the number of backup devices was one, i.e. when the backup was not striped across multiple files. In repeated tests with a single backup file, setting BLOCKSIZE to 65536 not only failed to improve the backup throughput, but significantly degraded it (from ~70MB/sec to ~28MB/sec). I don't know what the cause was.
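Setting the option is straightforward; the statement below is a sketch with placeholder names:

```sql
-- Single-device backup with an explicit 64KB block size.
BACKUP DATABASE MyDB
TO DISK = 'M:\backup\MyDB.bak'
WITH INIT, BLOCKSIZE = 65536;
```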

To see whether the combo of BLOCKSIZE=65536 and a single-device backup was always bad, I tested--on a different server with different storage--backups with BLOCKSIZE=65536 and without explicitly setting the BLOCKSIZE option at all. The following table has the results:

# of Backup Devices    BLOCKSIZE=65536    Without setting BLOCKSIZE
1                      61                 57
8                      73                 90

(backup throughput in MB/sec)

The effect of BLOCKSIZE=65536 was clearly quite different in these two test cases. Also note that in all these tests the number of bytes per write was observed to be 1048576 (i.e. 1 MB). I'm not sure whether this option has any bearing on the actual write I/O block size of the BACKUP DATABASE statement.

There was no fragmentation at the file system level because the data file was practically the only file on drive E and there was no file on drive M.

So it appears that either (1) the overhead of SQL Server backup prevented it from reading/writing at the full I/O throughput, (2) SQL Server was not strictly doing sequential writes, or (3) SQL Server wasn't pushing the I/O subsystem hard enough (e.g. by not maintaining an adequate I/O queue depth).

Published Friday, March 30, 2007 11:13 PM by Linchi Shea

Attachment(s): Backup1.gif



Tony Rogerson said:

Hi Linchi,

It sounds like you are hitting the problem that I flag up with SAN usage: the I/O system isn't handling small I/O sizes well - the default BLOCKSIZE that BACKUP DATABASE writes is just 512 bytes.

I'd be interested in seeing the BACKUP DATABASE options you used - try setting BLOCKSIZE to 65536. One of my clients uses HP kit with directly attached SAS drives, and with the default options BACKUP only writes at 12MBytes per second; changing the BLOCKSIZE to 65536 increases performance to around 40MBytes per second, which is still a lot less than what the kit is capable of (190MBytes per second) (see my blog entry).

I've not had a chance to play with BUFFERCOUNT or MAXTRANSFERSIZE yet.


March 31, 2007 8:29 AM

Linchi Shea said:

Hi Tony;

It didn't occur to me to consider the BLOCKSIZE option. Now that you've mentioned it and I've read your blog on this, I've repeated the tests to include BLOCKSIZE=65536. The impact of explicitly setting BLOCKSIZE to 65536 seems to be mixed, varying not only with the number of backup devices on a single server, but also across different servers/storage systems.

I won't claim I really understand what this BACKUP option is for.

I don't know how to insert a chart in a comment, so see the revised blog post for more details.

March 31, 2007 10:11 PM

Tony Rogerson said:

Hi Linchi,

Cool - good update. I got a chance to play with BUFFERCOUNT and MAXTRANSFERSIZE and I got a lot more throughput - I've only tried a single device, but using these coupled with BLOCKSIZE I got up to 70MBytes per second throughput.

But I agree - where is the rest? This is nowhere near the write speed my box is capable of. Maybe it's writing too big a chunk to the disk, or perhaps something odd is happening with the Microsoft Tape Format. I wonder if it's some internal locking going on - did you notice any wait types?
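The three options combine in a single BACKUP DATABASE statement; the values below are illustrative assumptions, not the ones from these tests. Note that the memory the backup uses for buffering is BUFFERCOUNT * MAXTRANSFERSIZE:

```sql
-- Combining the three tuning knobs; specific values are illustrative.
-- Buffer memory used = BUFFERCOUNT * MAXTRANSFERSIZE (here 16 * 4MB = 64MB).
BACKUP DATABASE MyDB
TO DISK = 'M:\backup\MyDB.bak'
WITH INIT,
     BLOCKSIZE = 65536,
     BUFFERCOUNT = 16,
     MAXTRANSFERSIZE = 4194304;  -- 4MB, the maximum allowed
```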


April 7, 2007 4:07 AM

Andrew Calvett said:

I was recently exploring how to increase the backup read throughput on one of our SQL servers. Below

January 24, 2010 9:12 AM

Maurice De Vidts said:

To increase your read throughput you need to have multiple target devices or files. It is also related to the number and size of the buffers you set (buffercount and maxtransfersize). Each buffer is maxtransfersize big. Setting maxtransfersize to 4MB with a high buffer count will cause it to read and write with the fewest I/Os. Some SAN arrays don't like very large sequential writes, so you have to experiment.

When you test your backup, look at the perf counters for physical disks and watch the average bytes/write.
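One way to watch that counter from the command line is typeperf; the wildcard below samples every physical disk instance, and instance names vary by machine:

```shell
REM Sample Avg. Disk Bytes/Write for all physical disks once per second
typeperf "\PhysicalDisk(*)\Avg. Disk Bytes/Write" -si 1
```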

March 25, 2010 10:15 PM


