First, especially for all the people with SAN storage, drive letters are of no consequence. What matters is the actual physical disk layout behind each RAID group or LUN. Forget capacity; pay attention to the number of spindles supporting each RAID group. While there is nothing wrong with making 2 or more LUNs from the same set of disks, I prefer to make one big LUN from the RAID group, then create multiple partitions from Windows, so it is absolutely clear which storage shares spindles.
If the RAID group is shared with other applications, make sure that the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought his SAN was powerful enough that this would not matter. It did matter, and it was not.
The general theory is to aggregate random IO loads into a common pool of disks, and to provide dedicated physical drives for each significant sequential load. For some reason, many people incorrectly interpret this as placing the log files for all databases into a common drive.
The log of each database in a well designed application, in theory, generates purely sequential IO, and some serious technical heavyweights say that a single pair of physical disks in RAID 1 can handle almost any SQL Server log load (barring very limited exceptions). Well, this is not true if you do transaction log backups, have transaction rollbacks, replication or database mirroring, all of which also read the log. So one might do some additional calculations. And if one places more than one log file on a common physical disk set, then even though the IO to an individual file might be sequential, the combined IO to the disks is not.
So does each log file get its own set of physical disks? If there are 50 databases in the instance, does this server need 50 pairs of disks in RAID 1 for logs?
OK, let's go back to the original theory. A single 15K disk can do approximately 200 small-block random IOPS for data spread across the entire disk. It can do 4,000-5,000 pure sequential IOPS of SQL Server generated log writes. Each pair of physical disks you take away from the common pool is 200 random write IOPS lost from that pool (assuming RAID 10, where each write costs one IO on each disk of the mirror pair). If a database does not generate more than 200 log writes/sec and does not require sub-millisecond log write latency, it is better to leave its log in the common pool.
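If you would rather measure than guess, here is a rough sketch that samples sys.dm_io_virtual_file_stats twice and reports write IOs per second against each database's log files (the 60-second interval is an arbitrary choice; run it while the real workload is active):

-- Sketch: per-database log write rate, to decide which logs earn dedicated disks.
DECLARE @t1 TABLE (database_id int, num_of_writes bigint, num_of_bytes_written bigint);

INSERT @t1
SELECT vfs.database_id, SUM(vfs.num_of_writes), SUM(vfs.num_of_bytes_written)
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE mf.type_desc = 'LOG'
GROUP BY vfs.database_id;

WAITFOR DELAY '00:01:00';   -- sample interval, adjust as desired

SELECT DB_NAME(vfs.database_id) AS database_name,
       (SUM(vfs.num_of_writes) - t1.num_of_writes) / 60.0 AS log_writes_per_sec,
       (SUM(vfs.num_of_bytes_written) - t1.num_of_bytes_written) / 60.0 AS log_bytes_per_sec
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
JOIN @t1 AS t1
  ON t1.database_id = vfs.database_id
WHERE mf.type_desc = 'LOG'
GROUP BY vfs.database_id, t1.num_of_writes, t1.num_of_bytes_written
ORDER BY log_writes_per_sec DESC;

Databases that come in well under 200 log writes/sec are candidates to stay in the common pool.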
On the matter of tempdb, I don’t know why people keep suggesting separate disks. Where does the underlying theory come from? Vapor?
Both data and tempdb are more or less random loads, so the common pool is usually better. If the load on data and tempdb were steady over time, for example 10K IOPS to data and 5K to tempdb, then you could argue for allocating 70 disks to data and 35 to tempdb.
But this is not how SQL Server behaves. Try running a query that aggregates a very large number of rows, and involves a join. To be specific, the execution plan should show a hash operation, whether for the join or for the aggregate. Check the hash operation cost details. The IO cost component should be non-zero. Otherwise your test query is too small to actually use tempdb (check with the SQL engine team to be sure on this).
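One quick way to confirm this (a sketch, not the only way): compare the session's tempdb internal object allocations before and after the test query. Internal objects cover hash and sort work files, so if the counter does not increase, the query never really touched tempdb:

SELECT internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;

-- ... run the large hash join / hash aggregate query here ...

SELECT internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;
-- If the second number is not higher than the first, the test query was too small.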
Now watch data and tempdb IO over time (that is, not just the difference between before and after). The temporal pattern of disk IO is: read from data, write to tempdb, read from data, write to tempdb, and so on, then perhaps at the end, read from tempdb (but sometimes not; think about what this means). So if you had split your disks 50-50 between data and tempdb (excluding OS and log disks), then at any given time 50% of your disks are busy and the other 50% are idle. Is this the best use of your (gawd awful expensive) resources (if you bought a SAN)?
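A rough way to watch this (again a sketch; the 5-second interval and 12 samples are arbitrary) is to poll sys.dm_io_virtual_file_stats and report per-interval read and write deltas by database and file type. Each pass returns one result set, and the alternation between data file reads and tempdb writes shows up clearly:

SET NOCOUNT ON;

DECLARE @i int;
SET @i = 0;

DECLARE @prev TABLE (database_id int, file_id int, reads bigint, writes bigint);

INSERT @prev
SELECT database_id, file_id, num_of_reads, num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WHILE @i < 12
BEGIN
    WAITFOR DELAY '00:00:05';   -- sample interval

    -- Read and write IOs issued during this interval, by database and file type
    SELECT GETDATE() AS sample_time,
           DB_NAME(vfs.database_id) AS database_name,
           mf.type_desc,
           SUM(vfs.num_of_reads  - p.reads)  AS reads_in_interval,
           SUM(vfs.num_of_writes - p.writes) AS writes_in_interval
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    JOIN @prev AS p
      ON p.database_id = vfs.database_id AND p.file_id = vfs.file_id
    GROUP BY vfs.database_id, mf.type_desc;

    -- Reset the baseline for the next interval
    DELETE @prev;
    INSERT @prev
    SELECT database_id, file_id, num_of_reads, num_of_writes
    FROM sys.dm_io_virtual_file_stats(NULL, NULL);

    SET @i = @i + 1;
END;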
There are special considerations if certain data files depend on low latency response, and others benefit from high-queue depth operation, but hey, if life were simple, would your company need skilled people?
If the SAN vendor tells you it's OK to have one common pool for everything, then that person is a liar or incompetent, and frequently both.
More on storage at: