THE SQL Server Blog Spot on the Web


Joe Chang

Dedicated Network Adapter(s) for transferring SQL backups to the tape archival system

One thing that really surprises me is how few people configure dedicated network adapters/ports (preferably multiple adapters) for copying the SQL Server backup to the server with the tape archival system. The common reason cited by a naïve system admin is that the percent network utilization shown in Task Manager or perfmon never goes above 40-50%. This is a nearly worthless counter, highly subject to misinterpretation, with serious consequences in transaction processing environments. What it comes down to is that the network traffic generated by the backup file copy, running over the same network used by transaction processing, can be highly disruptive even at 20-30% network utilization.


Think about this: on a gigabit Ethernet link, which has been standard for the last few years, a single file copy (or SQL Server backup to a network location) can generate 40-50MB/sec of network traffic, assuming a good disk system on both the SQL Server and the destination, and a clean network. Almost everyone is on the default Ethernet frame size of 1500 bytes. This means approximately 30,000 packets per second are sent from the SQL Server, and 15,000 packets per second are received. Running perfmon on the SQL Server might show only 4,000-5,000 packets per second sent. This is because the Windows operating system hands the network adapter very large segments, which is what the performance counters record, and the adapter breaks these into 1500-byte frames.
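The packet rates quoted above follow from simple arithmetic. Here is a quick sketch, assuming a ~45MB/sec sustained copy rate and a ~9KB segment size for the offloaded sends (the exact offload segment size varies by adapter, so that number is an assumption):

```python
# Rough arithmetic behind the packet rates quoted above.
copy_rate_bytes = 45 * 10**6     # assumed ~45 MB/s sustained copy throughput
mtu = 1500                       # default Ethernet frame size in bytes

packets_sent_per_sec = copy_rate_bytes / mtu        # ~30,000 frames/s on the wire
acks_received_per_sec = packets_sent_per_sec / 2    # one ACK received per two data frames

# With send offload, Windows hands the NIC large segments (~9 KB assumed here),
# so perfmon counts far fewer "packets" than actually hit the wire.
offload_segment = 9000
perfmon_packets_per_sec = copy_rate_bytes / offload_segment   # ~5,000/s
```

This reconciles the wire-level rate (~30K/sec) with the ~4,000-5,000 packets/sec that perfmon reports.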


Even though this is not 100% network utilization on Gigabit Ethernet, it is highly disruptive to other activity on the network, especially transaction processing which requires a highly responsive SQL Server.


The Windows default is that for every two packets sent, the sender must wait for an acknowledgement before sending the next packets. If the network round trip is 67 microseconds, then 15K pairs can be transmitted and acknowledged per second. With a two-transmit, one-acknowledgement policy and 1500 bytes per packet, a 30-microsecond round trip would be required to achieve full 100% (or ~100MB/sec) network utilization. In the past I tried setting the TCP/IP registry settings for a 4-8 packet window size, but had trouble getting this to behave properly. Jumbo frames (9000 bytes per frame) do help achieve higher network utilization. It helps to have matching network adapters at both ends, and a gigabit switch that also supports jumbo frames. Enabling jumbo frames might cause issues with the web/app servers, so I strongly recommend enabling them only on the NICs dedicated for backups.
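The round-trip numbers above can be checked with a one-line throughput model, a sketch assuming the two-frames-per-ACK behavior described:

```python
# Throughput model for a fixed window of frames per acknowledged round trip.
mtu = 1500            # bytes per frame
window_packets = 2    # frames in flight before waiting for an ACK (Windows default per the text)

def throughput(rtt_seconds):
    """Bytes/sec achievable when window_packets frames are sent per round trip."""
    return window_packets * mtu / rtt_seconds

t67 = throughput(67e-6)   # ~45 MB/s: roughly the observed copy rate
t30 = throughput(30e-6)   # ~100 MB/s: full gigabit utilization
```

A 67µs round trip caps the copy at roughly 45MB/sec; cutting the round trip to 30µs (or doubling the effective window or frame size) is what it takes to fill the pipe.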


There is no reason today to buy a single-port gigabit NIC for a x4 PCI-E slot, so always use dual-port gigabit NICs. Technically a quad-port NIC would be a good match for a x4 PCI-E slot. Per my storage configuration discussions, every x8 PCI-E slot in a server system should be populated with a disk IO controller (dual x4 SAS or dual-port 4Gbit/s FC), and possibly one additional controller in a x4 slot. The remaining x4 PCI-E slots should then be populated with dual-port NICs. It is not necessary to have each gigabit port connected to a different dedicated switch. Most gigabit Ethernet switches should have sufficient backplane bandwidth to handle the full saturation traffic of several gigabit ports, but this should be tested.
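Why a quad-port NIC is a good match for a x4 slot comes down to bandwidth budgeting. A rough sketch, assuming PCI-E 1.x-era numbers (~250MB/s per lane per direction, which was typical for servers of this period):

```python
# Bandwidth budget: x4 PCI-E slot vs. quad-port gigabit NIC (per direction).
pcie_lane_bw = 250e6            # assumed PCIe 1.x: ~250 MB/s per lane per direction
slot_bw = 4 * pcie_lane_bw      # x4 slot: ~1 GB/s per direction

gige_port_bw = 125e6            # 1 Gbit/s = 125 MB/s per direction
nic_bw = 4 * gige_port_bw       # quad-port NIC saturated: ~500 MB/s per direction

headroom = slot_bw / nic_bw     # 2x headroom: the slot easily carries four saturated ports
```

Even with all four ports saturated in both directions, the slot has 2x headroom, so the NIC, not the slot, is the limit.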


It is necessary to assign each port an IP address on a different subnet. See Thomas Grohser's website on this. I will also add that important environments should really be handled by a system/network admin familiar with configuring multiple network ports on a server.
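The subnet-per-port rule can be sketched with Python's ipaddress module. All addresses and names below are made up for illustration; the point is only the relationship between them:

```python
# Hypothetical addressing: each NIC port on its own subnet, so traffic
# to the backup target is routed out the dedicated backup port.
import ipaddress

production_port = ipaddress.ip_interface("10.10.10.5/24")   # hypothetical transaction NIC
backup_port     = ipaddress.ip_interface("10.10.11.5/24")   # hypothetical dedicated backup NIC
tape_server2    = ipaddress.ip_address("10.10.11.20")       # hypothetical tape server backup NIC

# The two ports must not share a subnet, and the backup target must sit
# on the backup port's subnet for the OS to route to it directly.
assert production_port.network != backup_port.network
assert tape_server2 in backup_port.network
```

If both ports were on the same subnet, the OS would be free to send backup traffic out either port, defeating the isolation.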

Published Saturday, November 22, 2008 1:04 AM by jchang




jerryhung said:

I have seen setups of 4 NICs on the cluster server

1 Public

1 Private/Heartbeat (for Cluster)

1 Backup (private just for backup to file server)

1 Spare (failover)

November 24, 2008 11:26 AM

Linchi Shea said:

Where I used to work, there was a dedicated/separate backup network, and every server had a dedicated backup NIC.

Going forward, with 10GigE or even faster networks, I'm not sure using a dedicated backup network is a good idea. There is a general trend toward a single fat general-purpose pipe instead of multiple small special-purpose pipes.

November 27, 2008 11:39 PM

jchang said:

I am not a big fan of putting everything on a single network, even if it's 10GbE. It's not a matter of bandwidth, but rather whether the large file transfer disrupts the responsiveness of transactions.

There is the QoS capability, but I don't trust these features. With 1GbE, adapters and switches are so cheap that it's silly not to make use of multiple channels. Currently 10GbE is not cheap, especially on the switch. If it's important, I am inclined to still have a dedicated port for heavy non-transaction activity.

November 30, 2008 11:24 AM

Linchi Shea said:

The considerations you cited are tactical, but necessary at a given stage of technology development. Very generally speaking, that approach gives servers special and hardwired configurations, making them difficult to manage and scale to large numbers. But in a data center nirvana, servers should be extremely simple and homogeneous, consisting only of processors, memory, and a fat pipe for both network I/Os and storage I/Os. Servers may be classified into a very limited number of categories for their processing capacities. Otherwise, they should be completely indistinguishable from each other. They are truly just computing nodes with no personality whatsoever. They are connected throughout the data center with a single unified, very fat, low-latency fabric. So if a server goes down, there is no need for troubleshooting: just switch the app (whatever it is) to run on a different computing node, toss out the failed server, and put in another one. This can truly automate a data center and take human beings out of the data center picture, getting rid of all this manual server build nonsense.

December 3, 2008 9:46 PM

jchang said:

I am all for standardization. There should probably be one or two standard configurations for most uses (built around 1- and 2-socket systems/blades) and one for the critical database, which should have specialized IO capabilities. Most data centers exist for a purpose, be it commercial, government or other. The standard configurations should serve the needs of that driving purpose, not some data center nirvana dream.

To have IT mandate a certain standard configuration because even a nitwit can configure it amounts to saying that when we reach the capacity that can be handled by this nitwit configuration, we will stop accepting additional volume, because it would require someone who is not a nitwit to configure it. Hence, our company is staffed with nitwits.

December 7, 2008 8:05 AM

tankjones said:

This is a great idea.  It may sound stupid but how do you tell SQL Server to send the backup using a different NIC?  Is there an option on the backup command that I am missing?

December 8, 2008 4:18 PM

jchang said:

see Thomas Grohser's website on this.

suppose the current IP and DNS entries (all with the same subnet mask) are: TapeServer and DBserver

the alternate network ports could be TapeServer2 and DBserver2 (DNS entry optional)

technically, a second port on the TapeServer is not required; you could just add a second IP on the current NIC using the 10.10.11 net.

the backup command would then point to \\TapeServer2\path or \\\path
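The mechanism here is ordinary subnet-based routing: the OS sends to whichever local interface has the destination on its subnet. A sketch with hypothetical addresses (the actual addresses in the post were not preserved):

```python
# Sketch of the outbound interface choice. Names and addresses are hypothetical.
import ipaddress

local_interfaces = {
    "DBserver":  ipaddress.ip_interface("10.10.10.5/24"),   # production NIC
    "DBserver2": ipaddress.ip_interface("10.10.11.5/24"),   # dedicated backup NIC
}

def outbound_interface(dest):
    """Return the local interface whose subnet contains dest, else None (default route)."""
    dest = ipaddress.ip_address(dest)
    for name, iface in local_interfaces.items():
        if dest in iface.network:
            return name
    return None

# Backing up to TapeServer2 (hypothetically 10.10.11.20) goes out the backup NIC.
chosen = outbound_interface("10.10.11.20")
```

So pointing the backup at \\TapeServer2\path is enough: no option on the BACKUP command itself is needed, the routing does the work.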

December 9, 2008 1:03 PM



About jchang

Reverse engineering the SQL Server Cost Based Optimizer (Query Optimizer), NUMA System Architecture, performance tools developer - SQL ExecStats, mucking with the data distribution statistics histogram - decoding STATS_STREAM, Parallel Execution plans, microprocessors, SSD, HDD, SAN, storage performance, performance modeling and prediction, database architecture, SQL Server engine
