One thing that really surprises me is how few people configure dedicated network adapters (and preferably multiple adapters) for copying the SQL Server backup to the server hosting the tape archival system. The common reason cited, usually by a naïve system admin, is that the percent network utilization shown in Task Manager or Perfmon never goes above 40-50%. This is a nearly worthless counter, highly subject to misinterpretation, with serious consequences in transaction processing environments. What it comes down to is that the network traffic generated by the backup file copy, running over the same network used by transaction processing, can be highly disruptive even at 20-30% network utilization.
Think about this. On a gigabit Ethernet link, which has been standard for the last few years, a single file copy (or a SQL Server backup to a network location) can generate 40-50MB/sec of network traffic, assuming a good disk system on both the SQL Server and the destination and a clean network. Almost everyone runs the default Ethernet frame size of 1500 bytes. This means approximately 30,000 packets per second are sent from the SQL Server, and about 15,000 packets per second are received. Yet running Perfmon on the SQL Server might show only 4,000-5,000 packets per second sent. This is because the Windows operating system hands the network adapter much larger buffers, which is what the performance counters record; the adapter then segments these into 1500-byte packets (large send offload).
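The packet-rate figures above can be checked with simple arithmetic. This sketch assumes a sustained copy rate of 45 MB/sec (a hypothetical round number in the middle of the 40-50 MB/sec range cited) and one acknowledgement per two data packets:

```python
# Back-of-the-envelope check of the packet rates cited above.
# Assumption: a file copy sustaining 45 MB/sec over gigabit Ethernet
# with the default 1500-byte frame size.

throughput_bytes_per_sec = 45 * 1_000_000   # ~45 MB/sec of payload
frame_payload = 1500                        # default Ethernet frame size

packets_sent_per_sec = throughput_bytes_per_sec / frame_payload
# With one TCP acknowledgement received for every two data packets sent:
acks_received_per_sec = packets_sent_per_sec / 2

print(f"packets sent/sec:  {packets_sent_per_sec:,.0f}")   # 30,000
print(f"ACKs received/sec: {acks_received_per_sec:,.0f}")  # 15,000
```

which matches the ~30,000 sent / ~15,000 received figures in the text.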
Even though this is far from 100% utilization of gigabit Ethernet, it is highly disruptive to other activity on the network, especially transaction processing, which depends on a highly responsive SQL Server.
The Windows default behavior is that for every two packets sent, the sender must wait for an acknowledgement before sending the next pair. If the network round trip is 67 microseconds, then 15,000 such pairs can be transmitted and acknowledged per second. With this two-transmit, one-acknowledgement policy and 1500 bytes per packet, a 30-microsecond round trip would be required to achieve full 100% network utilization (about 100MB/sec). In the past I tried setting the TCP/IP registry settings for a 4-8 packet window, but had trouble getting this to behave properly. Jumbo frames (9000 bytes per frame) do help achieve higher network utilization. It helps to have matching network adapters at both ends, and a gigabit switch that also supports jumbo frames. Enabling jumbo frames might cause issues with the web/app servers, so I strongly recommend enabling them only on the NICs dedicated to backups.
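The windowing arithmetic above reduces to throughput = (window × frame size) / round-trip time. A minimal sketch, using the 67 and 30 microsecond round-trip figures from the text:

```python
# Throughput implied by a 2-packet send window: two 1500-byte frames
# go out, then the sender waits one round trip for the acknowledgement.

frame_payload = 1500  # bytes, default Ethernet frame size
window = 2            # packets in flight before an ACK is required

def throughput_mb_per_sec(rtt_seconds):
    """Bytes delivered per round trip, converted to MB/sec."""
    return window * frame_payload / rtt_seconds / 1_000_000

print(throughput_mb_per_sec(67e-6))  # ~44.8 MB/sec, the observed copy rate
print(throughput_mb_per_sec(30e-6))  # 100 MB/sec, full gigabit wire speed
```

This shows why the copy stalls near 45 MB/sec at a 67-microsecond round trip, and why only a 30-microsecond round trip would saturate the link under the default window.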
There is no reason today to put a single-port gigabit NIC in an x4 PCI-E slot, so always buy dual-port gigabit NICs; technically a quad-port NIC is an even better match for an x4 PCI-E slot. Per my storage configuration discussions, every x8 PCI-E slot in a server system should be populated with a disk IO controller (dual x4 SAS or dual-port 4Gbit/s FC), and possibly one additional controller in an x4 slot. The remaining x4 PCI-E slots should then be populated with dual-port NICs. It is not necessary to connect each gigabit port to a different dedicated switch: most gigabit Ethernet switches should have sufficient backplane bandwidth to handle full saturation traffic on several gigabit ports. But this should be tested.
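The slot-bandwidth reasoning can be made explicit. This sketch assumes first-generation PCI-E (roughly 250 MB/sec usable per lane per direction, the generation implied by the 4Gbit/s FC HBAs mentioned above):

```python
# Why a single-port gigabit NIC wastes an x4 PCI-E slot.
# Assumption: PCI-E 1.x, ~250 MB/sec usable per lane per direction.

pcie_lane_mb_per_sec = 250
x4_slot_bandwidth = 4 * pcie_lane_mb_per_sec   # ~1000 MB/sec per direction

gigabit_port_mb_per_sec = 125                  # 1 Gbit/s ~= 125 MB/sec
for ports in (1, 2, 4):
    nic_demand = ports * gigabit_port_mb_per_sec
    print(f"{ports}-port NIC uses {nic_demand} of the "
          f"{x4_slot_bandwidth} MB/sec an x4 slot provides")
```

A single port consumes only about an eighth of the slot's bandwidth, while even a quad-port NIC fits comfortably, which is the basis for preferring dual- or quad-port cards.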
It is necessary to assign each port an IP address on a different subnet. See Thomas Grohser's website on this: http://www.sqlserver-hwguide.com/ I will also add that important environments should really be handled by a system/network admin familiar with configuring multiple network ports on a server.
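As a quick illustration of the per-port subnet rule, this sketch uses Python's standard `ipaddress` module to verify that each backup NIC sits on its own subnet. The addresses are hypothetical examples, not a recommendation:

```python
# Hypothetical addressing for two dedicated backup NICs: each port gets
# an address on a distinct /24 subnet, per the recommendation above.
import ipaddress

nic_configs = {
    "backup-nic-1": ipaddress.ip_interface("192.168.10.5/24"),
    "backup-nic-2": ipaddress.ip_interface("192.168.11.5/24"),
}

networks = [iface.network for iface in nic_configs.values()]
# If two ports shared a subnet, Windows would route their traffic
# unpredictably over one of them; distinct subnets avoid this.
assert len(set(networks)) == len(networks), "each port needs its own subnet"

for name, iface in nic_configs.items():
    print(f"{name}: {iface.ip} on {iface.network}")
```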