Too many DBAs view a drive presented from a Storage Area Network (SAN) as a monolithic entity with intrinsic performance characteristics. This view obscures the factors that actually determine how such a drive performs.
A more constructive view is to see the drive as an I/O path made up of layers of hardware and software components, many of which can have a huge impact on its performance. One such component is the Fibre Channel Host Bus Adapter (HBA), which typically plugs into a PCI-X or PCI Express slot on the server. An HBA can be viewed simply as the host-side controller for the SAN. The performance of the drive can be affected by many HBA-related design choices, including the following:
- The model and make of the HBA,
- The theoretical throughput of the HBA,
- The number of HBAs and how/whether they are load balanced,
- The HBA driver, and
- The configuration of the HBA driver
In this blog post, I'll look at one specific HBA driver parameter, QueueDepth: the maximum number of outstanding I/O requests that can be queued at the host bus adapter. Rather than discuss this in the abstract, let me look at a specific HBA: the Emulex LightPulse LP10000 2 Gigabit PCI Fibre Channel adapter. This is not the fastest HBA on the market today, but it has been a rather popular one. It uses a Windows Storport miniport driver (version 5-1.11A0) whose QueueDepth setting can be configured.
The question is: How does this QueueDepth HBA driver parameter affect the performance of a drive presented through the HBA?
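Before looking at the test results, it helps to reason about why QueueDepth should matter at all. By Little's Law, sustained throughput equals the number of requests concurrently in service divided by the per-request latency, so a QueueDepth setting that admits fewer concurrent I/Os than the workload offers will cap IOPS. The sketch below is a back-of-the-envelope model, not a measurement from these tests; the 5 ms service time is an assumed figure purely for illustration.

```python
# A simple queueing-theory view (Little's Law) of how the HBA QueueDepth
# setting can cap throughput. The service time is an assumed constant;
# real drives show latency that rises with load.

def expected_iops(workload_queue_depth, hba_queue_depth, service_time_ms):
    """Estimate IOPS when the HBA admits at most hba_queue_depth
    concurrent requests; excess requests wait at the host and add
    no concurrency at the storage."""
    effective_outstanding = min(workload_queue_depth, hba_queue_depth)
    # Little's Law: throughput = concurrency / per-request latency
    return effective_outstanding / (service_time_ms / 1000.0)

# With an assumed 5 ms service time per 8K random read, a workload
# offering 64 outstanding I/Os plateaus at whatever the HBA admits:
for qd in (8, 16, 32, 64):
    print(qd, expected_iops(workload_queue_depth=64,
                            hba_queue_depth=qd,
                            service_time_ms=5.0))
```

In this simplified model, once the workload queue depth exceeds the HBA QueueDepth, adding more outstanding I/Os buys nothing; that is the plateau behavior the tests below probe empirically.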
To demonstrate the performance impact of the QueueDepth driver parameter, I ran a series of 8K random read I/O tests on the drive. For the tests, the QueueDepth parameter was set to the following values (at the host level, thus changing the QueueDepth for the two HBAs that were dynamically load balanced on the server):
- QueueDepth = 8,
- QueueDepth = 16,
- QueueDepth = 32,
- QueueDepth = 64, and
- QueueDepth = 128
In the tests, the I/O load level was controlled with a single thread that maintained a varying number of outstanding I/Os, i.e., the workload queue depth. The number of I/Os per second (IOPS) obtained on the drive was taken as the key performance measure. The following chart shows the performance behavior of the drive with the driver parameter QueueDepth set to each of the above values:
It is clear from the chart that the setting of the HBA QueueDepth parameter can have a huge performance impact on the drive, with everything else remaining the same. In particular, if you set this parameter too low, you can seriously reduce the throughput of the drive.
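For readers who want to reproduce the general shape of such a test, here is a rough load-generator sketch. The original tests would have used a dedicated benchmarking tool; this sketch merely approximates the "single thread maintaining N outstanding I/Os" pattern with a thread pool, requires a POSIX system for `os.pread`, and does not bypass the OS file cache the way a proper disk benchmark must. The file path and durations are placeholders.

```python
# Hedged sketch of an 8K random-read load generator: keep `outstanding`
# reads in flight against a target file and report the achieved IOPS.
# Note: reads may be served from the OS page cache here; a real test
# would use direct/unbuffered I/O against the raw drive.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

BLOCK = 8 * 1024  # 8K random reads, as in the tests


def run_test(path, outstanding, duration_s=5.0):
    """Maintain `outstanding` random 8K reads against `path` for
    `duration_s` seconds and return the measured IOPS."""
    blocks = max(1, os.path.getsize(path) // BLOCK)
    fd = os.open(path, os.O_RDONLY)
    completed = 0
    deadline = time.monotonic() + duration_s

    def one_read():
        # os.pread is POSIX-only: positioned read of one random block
        return os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)

    with ThreadPoolExecutor(max_workers=outstanding) as pool:
        pending = {pool.submit(one_read) for _ in range(outstanding)}
        while time.monotonic() < deadline:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            completed += len(done)
            # top the set back up so `outstanding` I/Os stay in flight
            pending |= {pool.submit(one_read) for _ in range(len(done))}
        for f in pending:
            f.cancel()
    os.close(fd)
    return completed / duration_s
```

Here `outstanding` plays the role of the workload queue depth that was swept in the tests, while the HBA QueueDepth itself is a driver setting changed between runs, not something this harness controls.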
In general, it is recommended that this parameter be left at its vendor-suggested default, which is 32 for this Emulex card. The test results support that recommendation.
The key point, however, is that a drive presented from a SAN is far from a monolithic entity and shouldn't be viewed as one. You need to be aware of all the key components on its I/O path to truly understand its behavior.