Regardless of the DBMS make or model, the transaction throughput curve of a database system is often shaped like a trapezoid. As the load level goes up initially, so does the transaction throughput. As the load level continues to go up, the transaction throughput will reach a plateau. After the load level is cranked up still further, the transaction throughput will inevitably start to drop and may eventually come to a halt.
With respect to the throughput drop under severe load, two questions are interesting: (1) what causes the drop? and (2) what can we do to push the drop out to an even higher load level?
The answers are of course dependent on the nature of the workload, the system configurations, and a host of other factors.
So rather than trying to address the questions in general, let's look at one specific type of test I often perform, which is to stress a server's processors, memory, and their interconnect with a SQL Server read-only workload that effectively caches all the data pages in memory. In other words, this workload takes the disk I/Os out of the equation.
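As an aside, that "all in memory" claim is something I verify rather than assume. The following query is only a sketch of one way to do the check (the database name is a placeholder, not from the actual test): snapshot the read I/O counters before and after a run and confirm they barely move.

-- Sketch only: per-file read I/O; run before and after a test window and
-- verify num_of_reads / num_of_bytes_read stay essentially flat.
-- 'tpcc' is a placeholder database name.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS file_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read
FROM sys.dm_io_virtual_file_stats(DB_ID('tpcc'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;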
As expected and without exception, I would see the transaction throughput start to drop--sometimes rather dramatically--as I increased the load level beyond saturation. The onset of the drop often correlates nicely with a dramatic increase in the Thread-safe memory objects waits counter under SQLServer:Wait Statistics. Unfortunately, this wait counter is documented with a rather useless tautology: the explanation more or less repeats what the name already suggests without adding any useful information. (BTW, the explanation is "Statistics for processes waiting on thread-safe memory allocators".) So, from looking at the wait stats, I can't really tell which wait or waits are killing the transaction throughput.
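If you'd rather watch this counter from T-SQL than from Performance Monitor, a query along the following lines works. This is just my own convenience query, not anything prescribed by the documentation:

-- Thread-safe memory objects waits, as exposed through the DMV behind
-- Performance Monitor; instance_name typically breaks the counter out into
-- waits started, waits in progress, and cumulative/average wait times.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Wait Statistics%'
  AND counter_name LIKE 'Thread-safe memory objects waits%';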
The point I want to illustrate, however, is the dramatic impact of a change to the workload--a change that merely gives the stored procedures different names.
Original workload: The database calls are evenly distributed between two stored procedures, spOrderStatus and spStockLevel.
Modified workload: In the same database, I created 100 stored procedures, spOrderStatus_1 through spOrderStatus_100, that are identical to spOrderStatus except for their names. Similarly, I created another 100 stored procedures, spStockLevel_1 through spStockLevel_100, that are identical to spStockLevel except for their names. I then modified the original workload to distribute the database calls evenly among spOrderStatus_1 through spOrderStatus_100 and spStockLevel_1 through spStockLevel_100.
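In case you want to reproduce this, here is a minimal sketch of how the copies can be generated. It assumes the original definition spells the name as dbo.spOrderStatus so that the REPLACE finds it; adjust the pattern to match your own CREATE PROCEDURE text.

-- Sketch: clone dbo.spOrderStatus into dbo.spOrderStatus_1 .. _100 by
-- rewriting the procedure name in the retrieved definition.
-- Assumes the definition contains the literal text 'PROCEDURE dbo.spOrderStatus'.
DECLARE @i int = 1, @src nvarchar(max), @sql nvarchar(max);
SET @src = OBJECT_DEFINITION(OBJECT_ID('dbo.spOrderStatus'));

WHILE @i <= 100
BEGIN
    SET @sql = REPLACE(@src,
                       'PROCEDURE dbo.spOrderStatus',
                       'PROCEDURE dbo.spOrderStatus_' + CAST(@i AS varchar(10)));
    EXEC sys.sp_executesql @sql;
    SET @i += 1;
END;

The same loop works for spStockLevel. On the driver side, each call simply picks one of the 100 suffixes; round-robin or random selection both spread the calls evenly across the copies.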
With the modified workload, the same database was able to sustain the peak throughput at a significantly higher load level (i.e., more users). The following chart compares the throughput behavior of the original workload and the modified workload on the test server:
I didn't specifically test how many identical but differently named copies of each stored procedure I had to create to see the impact on the transaction throughput; 100 copies of each was an arbitrary pick.
Note that this is a known technique and is used in some of the published SQL Server TPC-C tests. For instance, in the best SQL Server TPC-C result obtained on HP Superdome, 32 New_Order stored procedures are created, each with a different name but identical code. Stored procedures with different names but identical code are also created for several other transactions.
So, a rose by a different name may just smell better.