THE SQL Server Blog Spot on the Web


The Rambling DBA: Jonathan Kehayias

The random ramblings and rantings of frazzled SQL Server DBA

5 Things SQL Server should get rid of

Paul Randal (blog | twitter) started a new meme last night with his blog post "What 5 things should SQL Server get rid of?" A few bloggers have posted their top 5 lists, so here is mine.

Creating Foreign Keys without Supporting Indexes

This is probably a performance tuning consultant's favorite.  I know that Greg Low has blogged about this in the past (see Indexing Foreign Keys - should SQL Server do that automatically?), and I thought then, as I still do now, that this should be an option that can be overridden, but the default behavior should be to create, or at least offer to create, an index on foreign key columns.  In most cases where foreign keys exist, the columns are used in JOIN operations while querying the database, and a missing index often results in scan operations that increase IO and CPU utilization.
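To illustrate the two-step process a DBA has to remember today, here is a minimal T-SQL sketch using hypothetical Orders/Customers tables; the second statement is the easily forgotten one:

```sql
-- The foreign key alone does NOT create an index on Orders.CustomerID
ALTER TABLE dbo.Orders
    ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

-- The supporting index is a separate, manual step
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```

Without that second statement, every JOIN from Orders to Customers on CustomerID is a candidate for a scan.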

Defaults During Installation that lead to Performance Problems

Let's face it, hardware has changed, and what used to be acceptable for default options no longer applies on today's servers, leading people to installations and configurations that give the product, and its newer versions, a bad name.  Some of the defaults that lead to problems are:

Max Server Memory

How hard would it be to have the installer detect that SQL Server is being installed onto a 64-bit server with 16 GB RAM and have it set a max server memory value of 12 GB, or even 14 GB, for the instance being installed?  The best case would be an intelligent setup that gives the person installing SQL Server information about the option, how it affects SQL Server performance and scalability, and some guidelines on how to properly configure it.
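For reference, this is all the installer would effectively have to run; today a DBA has to do it by hand after setup (the 12 GB value here assumes the 16 GB server from the example, leaving roughly 4 GB for the OS):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server at 12 GB on a dedicated 16 GB server
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```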

Max Degree of Parallelism / Cost Threshold for Parallelism

I am sure that I am going to get feedback on this, but the defaults for these let the system run wide open, and they often lead to significant performance issues on modern servers, where people get 8, 12, and now 24 cores for next to nothing and haven't taken into account the type of IO demand that a DOP 8 parallel process might generate.  I wouldn't let a teenager jump into a 1969 Mustang with a 428 Super Cobra Jet engine without some kind of warning about the type of power and the potential dangers of stomping the gas to the floor.  The question is not whether it's possible for a system to run a 24 core parallel workload without bottlenecks; I am sure there are systems appropriately configured with the right hardware to do it.  It's a matter of what's realistic for the majority of the customer base out there.  I'd prefer that the default be a lower threshold that has to be tuned up by someone with experience to gain performance, rather than being wide open and leading to problems.
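Both options are plain sp_configure settings; the values below are illustrative examples only, not recommendations, shown next to the defaults they would replace:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Default is 0 (use all available cores); example: cap parallelism
EXEC sp_configure 'max degree of parallelism', 8;

-- Default is 5 (very low for modern hardware); example: raise the bar
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;
```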

Local Security Policies

Another item that should be handled by default in setup is adding the service account/service SID to the Perform Volume Maintenance Tasks local security policy, so that Instant File Initialization is used for data file allocations.  I wouldn't extend this to adding the account to the Lock Pages in Memory security policy by default, unless it was provided as an option in setup, possibly in the same place where you set the max server memory value.
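One common way to check whether Instant File Initialization is actually in effect is to turn on the trace flags that log file-zeroing activity and then grow a data file; this is a sketch of that technique, not a setup step:

```sql
-- Temporarily write file-zeroing messages to the error log
DBCC TRACEON (3004, 3605, -1);

-- Grow a data file or create a test database, then read the log;
-- "Zeroing" entries for the .mdf file mean IFI is NOT enabled
EXEC sp_readerrorlog;

DBCC TRACEOFF (3004, 3605, -1);
```

Log files are always zeroed regardless, so only the data file entries matter here.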

Free BI Marketing

While you might get Reporting Services, Analysis Services, and Integration Services as a part of SQL Server, there is no such thing as a free lunch.  If you are making heavy use of these components, you probably won't have them running on the same server as the Database Engine, which means that you have to pay for additional licensing.  If you want high availability for Reporting Services, you have to configure a scale-out farm; it can't run on your SQL cluster.  So much for being included/free.

However, beyond that, this kind of marketing creates misconceptions in management about what a SQL Server DBA should know; after all, it is still SQL Server, right?  Nothing could be further from the truth.  While I know some guys who are really good with all aspects of the SQL Server product stack, that's just not the case for the majority of people out there, especially with the rapid pace of changes to the product.  I am a DBA: I am good at managing, administering, designing, and troubleshooting relational databases, and it's taken me six years to get to where I am.  While some of these skills overlap with the BI space, there is still a substantial difference between a production support DBA and a BI Architect/DBA.  We think differently and design differently, and I haven't got the slightest clue how to make a pretty report with the whiz-bang layout that makes executives happy.  I do functional; if you want pretty, hire a designer, because you'll never get it from me.

Lack of SSMS Add-in Support

Let's face it, SSMS is built on top of Visual Studio, and Visual Studio has add-in support, so why doesn't SSMS?  Don't get me wrong, you can certainly write an add-in for SSMS, but as someone who has actually done this, I can tell you that without an example to look at, or someone to help point out mistakes along the way (a quick shoutout to Mladen Prajdic (Blog | Twitter) for the email assistance getting the Extended Events SSMS Addin coded), it's stupid how hard it can be to do.  Mladen's SSMS Tools Pack shows just how useful add-ins can be in the product.

The inability to turn knobs for Advanced Configruations

One of the things about SQL Server that I'd like to see dropped is the so-called "self-tuning" nature of how the system works at times.  I know this makes SQL Server more approachable for the masses and allows smaller shops to run without a full-time DBA on staff, but it also limits what is possible and sometimes leads to its own problems, like plan cache bloat in the early releases of SQL Server 2005 on 64-bit hardware.  There should be a way for a knowledgeable DBA to tune things like the procedure cache, system cache, and data cache sizes to optimize the server for the database it's running on.  If I have a crappy third-party vendor application that issues nothing but ad hoc SQL against the database, forced parameterization and optimize for ad hoc workloads only get you so far.  If I know that plan reuse is unlikely in most cases, I should be able to limit the plan cache size so it doesn't waste memory that could otherwise be used for caching data, reducing disk IO.  Another place I'd like to see a knob added is block size, for both storage and caching, similar to how Oracle provides it.
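For context, these are the two mitigations mentioned above, which are as far as the existing knobs go (VendorDB is a hypothetical database name):

```sql
-- Database-level: parameterize ad hoc statements where possible
ALTER DATABASE VendorDB SET PARAMETERIZATION FORCED;

-- Instance-level: cache only a small plan stub for first-time
-- ad hoc batches, compiling a full plan only on reuse
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

Neither option lets you actually cap the size of the plan cache, which is the knob being asked for.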


I know that is five, but I have another one for good measure:

Misspelled columns in System Tables

Granted, the sys.sysprocesses view is marked as deprecated, but the loginame column is one of those nagging little misspellings that has annoyed me for years now.
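The misspelling in question, for anyone who hasn't bumped into it:

```sql
-- Note the single 'n': the column is loginame, not loginname
SELECT spid, loginame
FROM sys.sysprocesses;
```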

Published Tuesday, May 11, 2010 11:12 PM by Jonathan Kehayias



Scott R. said:


Great post and continuing the theme!

Under the section on “Max Degree of Parallelism / Cost Threshold for Parallelism”, you mention:

“I’d prefer that the default be a lower threshold that has to be tuned up by someone with experience to gain performance rather than being wide open leading to problems.”

I’m guessing that you mean [different default thresholds] (plural) – I will use brackets for emphasis – no bolding allowed in replies for highlighting:

-  A [lower] default threshold for “Max Degree of Parallelism” that has to be tuned [up] … – versus the default value of 0 for maximum parallelism across all available processors

-  A [higher] default threshold for “Cost Threshold for Parallelism” that has to be tuned [down] or [up] … – versus the default value of 5


Scott R.

May 12, 2010 4:05 AM

Glenn Berry said:

Good list Jonathan

May 12, 2010 7:44 AM

Bart Czernicki said:

Couldn't disagree with you more on the "FREE BI Marketing".

For multi-terabyte implementations, and where you need to scale out to 100 concurrent users, you are 100% correct.

However, we have implemented BI solutions using the full SQL Server BI stack on a single server for over 100 clients, and it works great for data warehouses in the sub-150 GB range, with MOLAP cubes in the 10 GB range.  Multi-million-dollar organizations can easily run the full BI suite on a single server and take advantage of it being a "FREE value-add".

May 12, 2010 1:26 PM

danh said:

Nice article. Have had serious arguments about loginame before, some people like it !

Anyway, it's always a dangerous game pointing out spelling mistakes...especially with a heading "Advanced Configruations"  = )

May 14, 2010 11:59 AM
