SQLblog.com - The SQL Server blog spot on the web
Linchi Shea

Checking out SQL Server via empirical data points

Performance impact: too many try/catch blocks may be harmful in SQLCLR

If you have many try/catch blocks in your .NET code and execution actually lands in the catch blocks (i.e. exceptions are actually thrown), you should expect a performance hit. That’s intuitive and no surprise.

 

What is surprising is the extent of the performance degradation you may experience when multiple users are concurrently executing a piece of SQLCLR code (e.g. calling a SQLCLR table-valued function) that contains many try/catch blocks. In other words, there may be significant and adverse concurrency implications if you use too many try/catch blocks in your SQLCLR code.

 

Here is an example to prove it!

 

I have implemented two SQLCLR table-valued functions: tvf_GetDataWithTryCatch() and tvf_GetDataWithoutTryCatch(). These two functions return exactly the same result set: 2000 rows with 100 columns of integers. The functions simulate the case where the integer values are out of bounds (i.e. greater than the maximum value allowed for the integer datatype, which is SqlInt32.MaxValue in the attached C# code), and when that happens, the functions return NULL instead.

 

There are many ways to check whether a value exceeds SqlInt32.MaxValue. As the name implies, tvf_GetDataWithTryCatch() implements the check in a try block and returns NULL in the corresponding catch block. The tvf_GetDataWithoutTryCatch() function, on the other hand, implements the check with the C# conditional operator (?:), so no exception is ever thrown.
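As a rough illustration, the two styles of check might look like the following. This is a minimal sketch of my own, not the attached code; the class and method names are made up, and I check both bounds even though the post only exercises the upper one.

```csharp
using System;
using System.Data.SqlTypes;

static class BoundsCheck
{
    // Style of tvf_GetDataWithTryCatch: attempt the conversion, let it
    // throw OverflowException when the value is out of range, and map
    // the exception to a SQL NULL in the catch block.
    public static SqlInt32 WithTryCatch(long value)
    {
        try
        {
            return new SqlInt32(checked((int)value));
        }
        catch (OverflowException)
        {
            return SqlInt32.Null;
        }
    }

    // Style of tvf_GetDataWithoutTryCatch: test first with the conditional
    // operator (?:), so no exception is ever thrown.
    public static SqlInt32 WithoutTryCatch(long value)
    {
        return (value > int.MaxValue || value < int.MinValue)
            ? SqlInt32.Null
            : new SqlInt32((int)value);
    }
}
```

Both styles return the same SqlInt32 for the same input; they differ only in whether an out-of-range value raises a CLR exception on the way to NULL.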

 

When there is no background concurrent load

 

The tvf_GetDataWithTryCatch() function is expected to be slower than the tvf_GetDataWithoutTryCatch() function because of the overhead of the exceptions thrown and caught in its try/catch blocks, and tests show that is indeed the case. On my rather antiquated PC with 2GB of physical memory, the following query

 

select * from [dbo].[tvf_GetDataWithTryCatch]()

 

returns all the 2000 rows in about 8 seconds, whereas this query

 

select * from [dbo].[tvf_GetDataWithoutTryCatch]()

 

returns in about 1 second. Consistently, the function doing 200,000 try/catch’es is about eight times slower than the function doing 200,000 conditional operations. So far so good! No surprises!
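The single-user gap is easy to reproduce outside SQLCLR. Here is a standalone micro-benchmark of my own (the 200,000 iteration count mirrors the 2000 rows x 100 columns of the TVFs; absolute times will vary by machine and runtime):

```csharp
using System;
using System.Diagnostics;

static class ExceptionCost
{
    const int Iterations = 200000; // 2000 rows x 100 columns, as in the TVFs

    // Take an out-of-range value to NULL by throwing and catching.
    public static TimeSpan TimeThrowCatch()
    {
        long value = (long)int.MaxValue + 1;   // always out of range
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            try
            {
                int x = checked((int)value);   // throws OverflowException every time
            }
            catch (OverflowException)
            {
                // map to NULL
            }
        }
        sw.Stop();
        return sw.Elapsed;
    }

    // The same decision made with a plain comparison; no exception is thrown.
    public static TimeSpan TimeConditional()
    {
        long value = (long)int.MaxValue + 1;
        int nulls = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            if (value > int.MaxValue) nulls++;
        }
        sw.Stop();
        Console.WriteLine(nulls);              // keep the loop from being optimized away
        return sw.Elapsed;
    }
}
```

On any machine the throw/catch loop should come out dramatically slower than the comparison loop; the exact ratio depends on the runtime version and whether a debugger is attached.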

 

When there is background concurrent load

 

When I have 10 users running tvf_GetDataWithoutTryCatch() nonstop in the background against the same SQL Server 2008 instance, the foreground call to tvf_GetDataWithoutTryCatch() returns in about 2 seconds consistently. So the additional load of tvf_GetDataWithoutTryCatch() does slow it down a bit, but not a lot.

 

Now, when I have 10 users running tvf_GetDataWithTryCatch() nonstop in the background against the same SQL Server 2008 instance, the performance of tvf_GetDataWithTryCatch() degrades dramatically. This time it takes about 76 seconds to return all the 2000 rows. That is about 10 times slower than when there is no concurrent load, and about 38 times slower than tvf_GetDataWithoutTryCatch() under the same 10-user background load. Note that tvf_GetDataWithoutTryCatch() returns in about 2 seconds regardless of whether the 10-user background load is calling tvf_GetDataWithTryCatch() or tvf_GetDataWithoutTryCatch().
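If you want to reproduce the background load without the attached harness, a sketch like the following would do. The connection string and the harness itself are my own assumptions, not part of the attachment; each thread plays one of the 10 background users.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class LoadDriver
{
    // Build one background "user" that calls the TVF in a loop.
    public static Thread CreateWorker(string connStr, string query)
    {
        var t = new Thread(() =>
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (var cmd = new SqlCommand(query, conn))
                {
                    while (true)                       // run until the process exits
                    {
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read()) { }  // drain all 2000 rows
                        }
                    }
                }
            }
        });
        t.IsBackground = true;                         // don't keep the process alive
        return t;
    }

    public static void Main()
    {
        // Assumed connection string; point it at the database holding the TVFs.
        string connStr = "Server=.;Database=TestDb;Integrated Security=SSPI";
        for (int i = 0; i < 10; i++)                   // the 10 background users
            CreateWorker(connStr, "select * from [dbo].[tvf_GetDataWithTryCatch]()").Start();

        Console.ReadLine();                            // leave the load running
    }
}
```

Start this driver, then time the foreground query from a separate session to observe the degradation.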

 

If you think 38 times slower is bad, I have seen far worse concurrency impact in real production environments as a result of too many try/catch blocks.

 

With the attached C# code, you can run the tests yourself to see the behavior. Your results may differ in some details, but you should see a similar pattern. If not, post back a comment and let us know.

 

Published Wednesday, July 06, 2011 1:40 AM by Linchi Shea

Attachment(s): SQLCLR.zip


Comments

 

Piotr Rodak said:

Hi Linchi, good post, thanks.

I would like to point out, though, that it is not the presence of the try...catch block, but rather the fact that an exception is thrown, that causes the performance hit. The CLR has to do a lot of work when an exception is thrown and caught: building the stack trace, constructing the exception object, unwinding the call stack, and so on. This is why relying purely on exceptions to control application flow is not a good idea.

Having a try...catch block is, however, a *good* idea, because it gives you the opportunity to recover from a failure, or at least log the error message and the context of the execution.

Regards

Piotr

July 6, 2011 6:06 PM
 

Linchi Shea said:

Hi Piotr;

I might not have been clear. But you are correct that it's not the mere presence of the try/catch blocks. If the code execution doesn't walk into the catch blocks, there is no issue.

Also note that the post is not about the fact that performance may degrade, but more about the adverse concurrency impact when many exceptions are indeed thrown.

July 6, 2011 6:48 PM
 

Aaron Bertrand said:

Great post, Linchi. I'm trying to remember if it was you or someone else who wrote about this very thing a year or more ago, but it's always worth repeating... especially with data to back it up.

Cheers,

Aaron

July 6, 2011 9:29 PM
 

Adam Machanic said:

I think a good way to think about this is that exception handling is an absolute requirement, *but* it should not dictate the flow of the program--it should be used for exceptions, and only kick in when there is an exceptional situation. If there is an "error" of some sort that is expected to happen on a very regular basis--such as NULL handling--there should be explicit code in place to handle it rather than letting an exception get thrown.

I also don't think this post applies just to SQLCLR--it applies across the board in .NET, and most if not all other languages that support structured exception handling.

July 6, 2011 9:30 PM
 

Linchi Shea said:

Adam;

I'm running some tests to see if I see the same concurrency behavior outside of SQLCLR, and will post the results. You are absolutely right that using exceptions to handle normal data conditions such as NULLs is bad. Unfortunately, this example was not even made up; it came literally from troubleshooting some production code. But it's always good to see how things behave when pushed to an 'extreme', even when people should not go to that extreme.

July 7, 2011 10:01 AM
 

Linchi Shea said:

My previous post showed a simple test that appears to suggest that you may experience significant performance

July 7, 2011 11:54 PM

