THE SQL Server Blog Spot on the Web


The Rambling DBA: Jonathan Kehayias

The random ramblings and rantings of a frazzled SQL Server DBA

Today’s Performance Tip: Views are for Convenience, Not Performance!

I tweeted this last week and got a lot of retweets, so I thought I'd blog the story behind the tweet.

Most vendor databases have views in them, and when people want to retrieve data from a database, the vendor-supplied views seem to be the most common first stop.  This post is in no way a bash against the usage or creation of views in a SQL Server database; I have created them before to simplify code and compartmentalize commonly required queries so that there weren't 10 different stored procedures with the exact same SELECT statement in them.  However, problems can arise, often in the form of poor performance, when views are reused for purposes other than their original intent, especially when views become nested inside of each other, or when views begin being joined together.
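To make the nesting problem concrete, here is a minimal sketch (all table and view names are invented for illustration) of how joining two views that share underlying tables silently re-introduces those tables after view expansion:

```sql
-- Hypothetical schema: nesting views hides repeated joins.
CREATE VIEW dbo.V_OrderDetails AS
SELECT o.OrderID, o.CustomerID, od.ProductID, od.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS od ON od.OrderID = o.OrderID;
GO
CREATE VIEW dbo.V_CustomerOrders AS
SELECT c.CustomerName, v.OrderID, v.ProductID, v.Quantity
FROM dbo.Customers AS c
JOIN dbo.V_OrderDetails AS v ON v.CustomerID = c.CustomerID;
GO
-- Joining the two views pulls dbo.Orders and dbo.OrderLines
-- into the expanded query a second time:
SELECT co.CustomerName, od.ProductID
FROM dbo.V_CustomerOrders AS co
JOIN dbo.V_OrderDetails AS od ON od.OrderID = co.OrderID;
```

With three or four layers of nesting, the same base tables can show up a dozen times in the expanded query, which is exactly the scenario described below.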

At my last job I took the time to expand the actual T-SQL query that resulted from view expansion of one of the most commonly used views in a vendor's system, to show them how ridiculously inefficient that particular view was at retrieving information.  In its expanded form, two tables had 12 references apiece, since they were included in multiple views that were joined together across 4 nested layers of views.  This was the same vendor that commented, in writing, "You should only do reporting off our built-in views, because views are faster than querying the tables directly."  Kimberly Tripp got quite a laugh out of that comment when I posted it on twitter, as did the Oracle DBA.  So how does this relate to today?

Well, today I was working on planning my next week of database consolidations and where things will be moving when I got a page from OpsManager that one of my servers was running out of disk space.  A quick look at the alert email and I could immediately tell that the disk array in trouble belonged to tempdb, and that the server was a development server, so at least it's not production, but something was still wrong here.  The tempdb disk array on this particular server has roughly 142GB of space available on it at any given time.  OpsManager is set to page out if the space available on this particular disk falls below 5% free, so something had chewed through the initial 8GB allocated to tempdb and then another 135GB of disk space.
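As an aside, when an alert like this fires, a quick check of how tempdb's allocated space is actually being used can tell you whether the growth came from user objects (temp tables, table variables) or internal objects (worktables, sorts, hashes). A minimal sketch using `sys.dm_db_file_space_usage` (pages are 8 KB):

```sql
-- How is tempdb's allocated space being consumed right now?
SELECT SUM(unallocated_extent_page_count) * 8 / 1024        AS free_mb,
       SUM(user_object_reserved_page_count) * 8 / 1024     AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count) * 8 / 1024   AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

A large `internal_objects_mb` number generally points at worktables from sorts, hashes, and spools, which is what a runaway query tends to produce.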

I popped into SSMS, connected to the server, and did a quick query of sys.dm_exec_requests, which found a session that had been running for just over an hour.  A quick query of sys.dm_db_task_space_usage proved that this was in fact the offending session.  I tracked the session back to a user and host, and then pulled the statement text using sys.dm_exec_sql_text before killing the session, freeing up the space internally in tempdb but leaving it very large on disk, a problem I fixed later that night with an instance restart, allowing tempdb to be resized to its startup configuration.  Could I have used shrink?  Probably, but I don't advocate using shrink in general, and since this was a test system, a later restart was a feasible and the best solution to the problem.

Now that the initial problem had been solved, I decided to look at what kind of query could use 143GB of tempdb when the database being queried is only 75GB in size.  The vendor for this database is at least consistent in their naming schema, and all views start with V_, so it was easy to see that the query in question was joining multiple views together.  I referred to this particular query as a "frankenquery" on twitter because it really was a conglomeration of code that resulted in a really bad outcome.

There were some other significant problems associated with the code being executed, like numerous column-side conversions on datetime columns to get the date only and time only for comparisons, but that is another story that has already been told by fellow blogger Aaron Bertrand (Blog | Twitter) in his Bad Habits to Kick series.  In addition, the same code was reused multiple times in derived tables that were joined to an outer instance of the same query, resulting in some significantly sized worktables caused by multiple full table scans driven by the column-side conversion of the datetime columns.  Interestingly enough, Aaron also has a post, which I only saw while writing this, that discusses Creating the Ubber View, in which Rob Farley states that the Query Optimizer will be able to eliminate unnecessary joins during execution, which unfortunately isn't what I am seeing in the execution of this badly performing SQL statement on my server.  I am getting full view expansion not once, but three times for the execution of this query.
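The column-side conversion problem deserves a quick sketch, since it is what forced the full table scans here. The table, column, and variable names below are invented; the pattern is the point:

```sql
DECLARE @day datetime = '2010-05-17';

-- Non-sargable: converting the column on every row defeats any
-- index on OrderDate and forces a scan.
SELECT OrderID
FROM dbo.Orders
WHERE CONVERT(varchar(10), OrderDate, 120) = CONVERT(varchar(10), @day, 120);

-- Sargable rewrite: leave the column alone and compare against an
-- open-ended range, so an index seek is possible.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= @day
  AND OrderDate < DATEADD(DAY, 1, @day);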

When I dug into things a bit further, only 2 tables were actually being used by the query, one in one view and the other in the other view.  The views had a combined total of 16 tables joined in, all seen three times in the estimated plan of the frankenquery initially run on the test server.  The code is actually much simpler, cleaner, and more efficient when the two tables are used directly instead of querying against the views that were joined together.  After correcting the column-side datetime conversions and eliminating the redundant executions of the same query by using a CASE'd aggregation from a single execution, the resulting query executed in under 2 seconds with the expected result set.  This has been my general finding when using views: they exist for convenience, but they aren't generally the best thing for performance when used for requests outside of their original design intent, as in this case.  There were certainly other inefficiencies in the code being executed here that made for a very bad scenario.
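A minimal sketch of the "CASE'd aggregation" rewrite (again with invented names): instead of several derived tables, each scanning the same table for a different condition and then being joined back together, a single pass can produce all of the conditional totals at once:

```sql
-- One scan of dbo.Orders produces all three conditional counts,
-- replacing three derived tables that each scanned the table separately.
SELECT CustomerID,
       SUM(CASE WHEN Status = 'Open'    THEN 1 ELSE 0 END) AS open_orders,
       SUM(CASE WHEN Status = 'Shipped' THEN 1 ELSE 0 END) AS shipped_orders,
       SUM(CASE WHEN Status = 'Closed'  THEN 1 ELSE 0 END) AS closed_orders
FROM dbo.Orders
GROUP BY CustomerID;
```

Collapsing the redundant scans this way is what eliminated the giant tempdb worktables in this case.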

Published Monday, May 17, 2010 7:38 PM by Jonathan Kehayias

Comments

 

Ian Yates said:

I agree with what you've said.  I have seen both situations - where the uber-view is quick and where it runs like treacle!

If there are foreign key constraints, and they are used to join the tables, AND SQL Server trusts that the constraints are still accurate, then it can sometimes avoid the need to evaluate unnecessary inner joins.  See http://forum.cornerstone.se/blogs/sql/archive/2008/01/12/non-trusted-constraints-and-performance.aspx for a proper explanation.
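A short sketch of how a constraint loses trusted status and how to restore it (the constraint and table names are invented): a foreign key disabled for a bulk load and re-enabled without `WITH CHECK` stays non-trusted, which blocks the join elimination described above.

```sql
-- Disabling an FK for a bulk load leaves it non-trusted afterwards:
ALTER TABLE dbo.OrderLines NOCHECK CONSTRAINT FK_OrderLines_Orders;
-- ... bulk load ...
-- Re-enable AND re-validate so SQL Server trusts it again
-- (plain CHECK CONSTRAINT alone would leave is_not_trusted = 1):
ALTER TABLE dbo.OrderLines WITH CHECK CHECK CONSTRAINT FK_OrderLines_Orders;

-- Verify:
SELECT name, is_not_trusted
FROM sys.foreign_keys
WHERE name = 'FK_OrderLines_Orders';
```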

Something to keep in mind anyway :)

May 25, 2010 6:18 AM
 

Kendra Little said:

This is a great illustration, both of the troubleshooting steps and of the problem with over-reliance on views.

Curious if you always use autogrowth on your tempdb files?

At my place of work, on the boxes we have which serve internal ad hoc user queries, we tend to grow out tempdb to what we've baselined as "normal/reasonable" usage, and then disable autogrowth on the data files. This is specifically so we avoid the shrink/restart scenario. We do have a process running from another server that checks active spids and captures plans and tempdb usage information (which we would create with MDW if we were all on 2008, but we aren't yet), so that helps track down the "what happened?" question when we get an alert that tempdb filled.

OTOH, for our back end data processing systems which don't serve user reports, we leave autogrowth enabled on tempdb files.
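For reference, the pre-size-and-disable-autogrowth approach Kendra describes can be sketched like this (logical file names and sizes are invented examples; `FILEGROWTH = 0` is what disables autogrowth on a file):

```sql
-- Pre-size tempdb to the baselined "normal" usage and cap the data file:
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 0);

-- Leave growth enabled where it is still wanted, e.g. the log:
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 2048MB, FILEGROWTH = 256MB);
```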

May 31, 2010 11:21 PM
