Ok, so the title of this thread is very tongue-in-cheek, or in layman's terms, a bunch of crap. However, it is the most common answer I seem to get from application vendors these days when dealing with performance issues, despite the fact that I can point out a dozen reasons why their application design/code is the problem. As I tell my end users all the time, I am sure that bigger hardware will hide the problem, but shouldn't we expect an application vendor to fix their junk design/code rather than throwing more hardware at the problem?
In my own experience with database and application design, I have run up against more than a few spots where performance problems could have easily been fixed by adding bigger hardware. However, when I started with SQL Server, hardware was still quite expensive, and for that reason every attempt was made to resolve the problem by refactoring the database design, indexing strategies, and/or application code well ahead of considering the purchase of new hardware. Lately, with declining hardware costs, the lower cost of 64-bit computing, and the scalability of hardware, I've seen this dynamic change, both with vendors and with home-grown applications.
It seems as if the talent pool for writing highly performant code is being drained by the ready availability of high-powered hardware. In my own personal experience, I recently purchased a home test server for under $1K that had four 3GHz Xeon processors and 16GB of RAM. At that kind of cost, I have to personally admit that adding hardware can be a much cheaper alternative to spending the time to learn how to solve the code problems. However, personal experience has taught me that hardware can only get you so far, and eventually the cost to scale up hardware exceeds the cost to fix the actual problem.
I experienced this recently while working on the Extended Events Addin for SSMS. One of my pain points in the original application, which really came to light when I ported it over to an SSMS Addin, was the time it could take to query data from the asynchronous_file_target. My first design of the Target Data Viewer was shortsighted in that it wasn't designed to handle the amount of data that could be stored in the file target; it was built for the memory-resident targets instead. The problem I ran into was that the UI would hang while reading the data. This was easily fixed by refactoring the code to use a BackgroundWorker, but a new problem popped up: a nasty OutOfMemoryException when reading data that was under 100MB in size on disk.
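The general shape of that BackgroundWorker refactoring is sketched below. This is only a minimal illustration, not the addin's actual code; the TargetDataViewer form, its grid, and the ReadTargetData placeholder are all hypothetical names standing in for the real pieces. The point is simply that the slow read happens on a worker thread, and the results are bound to the UI only in the completion handler, which runs back on the UI thread.

using System;
using System.ComponentModel;
using System.Data;
using System.Windows.Forms;

public class TargetDataViewer : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();
    private readonly DataGridView grid = new DataGridView { Dock = DockStyle.Fill };

    public TargetDataViewer()
    {
        Controls.Add(grid);

        // Do the slow read on a worker thread so the UI message pump keeps running.
        worker.DoWork += (sender, e) => e.Result = ReadTargetData();

        // RunWorkerCompleted fires on the UI thread, so it is safe to touch controls here.
        worker.RunWorkerCompleted += (sender, e) =>
        {
            if (e.Error != null)
            {
                MessageBox.Show(this, e.Error.Message, "Read failed");
                return;
            }
            grid.DataSource = (DataTable)e.Result;
        };

        Shown += (sender, e) => worker.RunWorkerAsync();
    }

    // Hypothetical placeholder for the code that queries the target and shapes a DataTable.
    private static DataTable ReadTargetData()
    {
        return new DataTable();
    }
}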
To put things into perspective, I have a dual-core laptop with 4GB of RAM that was running Visual Studio 2008 and SSMS in debug mode for development purposes only. The problem was that as I read the data, SSMS would eventually consume over 2GB of memory, exceed the user-mode VAS limit for a 32-bit process, and throw the exception, so a bit of debugging was required to figure out what exactly was happening. Admittedly, I am not a .NET expert, and this took a bit of time to figure out, but after a few days of work I was able to refactor the code so that the same file data that was causing an OOM exception now requires less than 50MB of memory. Even better, it led me to a better understanding of how to refactor other code, so that the impact of the SSMS Addin is almost negligible when performing most operations.
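I won't reproduce the actual refactoring here, but the general pattern behind that kind of memory reduction is to process each event as it is read instead of materializing every event's XML document in memory first. The sketch below shows the idea under a couple of assumptions: that the events come back through sys.fn_xe_file_target_read_file (the exact query shape, including the NULL metadata-file argument, is illustrative and varies by SQL Server build), and that only a couple of values per event are actually needed for display. FileTargetReader, ReadEvents, and the chosen columns are hypothetical names for illustration only.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Xml.Linq;

public static class FileTargetReader
{
    // Streams the file target one event at a time instead of buffering every
    // event_data document; only the small values needed for display are kept.
    public static DataTable ReadEvents(string connectionString, string filePath)
    {
        var results = new DataTable();
        results.Columns.Add("event_name", typeof(string));
        results.Columns.Add("timestamp", typeof(string));

        const string query =
            "SELECT event_data FROM sys.fn_xe_file_target_read_file(@path, NULL, NULL, NULL);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@path", filePath);
            connection.Open();

            // SqlDataReader streams rows; each event's XML is parsed and then
            // discarded as soon as the interesting values have been copied out.
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    var eventXml = XElement.Parse(reader.GetString(0));
                    results.Rows.Add(
                        (string)eventXml.Attribute("name"),
                        (string)eventXml.Attribute("timestamp"));
                }
            }
        }

        return results;
    }
}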
How does this relate to SQL Server? Well, in a number of ways, actually. I am not necessarily the best at performance tuning, but I can certainly hold my own, and I've helped a number of people and companies solve problems with badly performing code. Finding ways to improve performance happens to be a personal love of mine, though I rarely get to do it these days. Understanding design patterns in SQL Server is just as important as understanding design patterns in object-oriented code. In general, when I get to look at badly performing code, I often find the same problems, and I have a short list of things I look for up front before I dive into more complex troubleshooting. Correlated subqueries, user-defined functions that perform data access, table-valued parameters holding thousands of rows of data, and functions on the column side of a predicate seem to be the most common killers of T-SQL performance that I've seen, though there are other problems I see as well.
So what exactly did I expect to accomplish with this blog post? Well, not all that much really, except to make you think before you scale up hardware to solve a performance problem. If you are an ISV, bigger hardware might be a quick fix, but eventually you are going to meet the customer who will stress your application beyond what is reasonable for hardware and associated software costs.