
Merrill Aldrich

Flow control in T-SQL Scripts

When many people, including me, began their early T-SQL efforts, the magic two-letter GO statement was a bit mysterious. What was that for? Was it just decoration sprinkled through your scripts? What’s the difference between two SQL statements separated by GO and two that aren’t?

Lately, though, I have noticed quite a lot of production code that still misses key elements of flow control in T-SQL scripts, which can cause unexpected results – especially in the realm of mishandled run-time errors. I was just perusing a script from a very well-known vendor, and I was struck by the fact that if it had failed for some reason, the error handling was entirely wrong: the script would not have done what its author seemed to expect, based on the code. Luckily it ran without issue.

Articles and blog entries do exist on error handling in T-SQL; especially good are the series by Erland Sommarskog and Itzik Ben-Gan’s book Inside SQL Server 2005: T-SQL Programming. There’s great stuff on handling errors and transactions in deployed stored procs, for example. So as not to tread over the same material, I want to point out some techniques in this post specific to script flow control – that is, when you run a T-SQL or SQLCMD script in Management Studio, how do you correctly trap errors and stop or continue execution at the right place in your script?

Did you know that if you raise an error in a script, the script will not typically stop, but instead will continue on its merry way, at either the next statement or the next batch? More importantly, did you know that that is likely to happen even after you roll back an explicit transaction? If not, please read on! In this first installment I will talk about “old school” flow control, which works against both new and older versions of SQL Server. A future post will describe how the Try/Catch structure added in 2005 fits in. (Hint: if you can use Try/Catch, go for it; it's a lot better.)

GO is Vital

In order to make this clear I have to lay out a couple of basic terms, so that the later explanations make sense. First, a batch is a series of one or more SQL statements that follow one another, with the GO batch terminator marking the end of the sequence. (B.O.L. describes how GO is not really a T-SQL statement at all, but is in fact just a separator, defined at the client, that causes batches of statements to be sent to SQL Server; each batch is treated as a single unit at the server, and is compiled and executed as one “thing.”) What is most important for our purposes is that GO is vital in flow control when there’s an error to be handled, because it separates a series of SQL statements into distinct blocks, and a batch of statements acts as a unit at the server in important ways.

Second, I’ll use the term script to mean a collection of one or many batches that are typically in the query editor together, get sent to the SQL Server in sequence when you execute, and perhaps are stored in a text file. Each batch in a script is separated by the batch separator (GO).

For example:

-- Batch 1
PRINT 'This is the first statement in the first batch';
PRINT 'This is the second statement in the first batch';
GO -- End of Batch 1
-- Batch 2
PRINT 'This is the second batch';
GO -- End of Batch 2

A Profiler trace while running this example will reveal that it executes as two discrete batches, even though a batch can contain multiple individual statements. Worth noting: at the server, not only are the batches separate events, but there is very little relationship between them at all, other than that they came from the same client connection. They are executed almost as if they were two separate little programs. This accounts for why you can’t persist a variable “across” a GO statement; if a variable is defined in one batch, it is eliminated at the end of that batch’s execution at the server. In most respects, the next batch bears no relationship to the previous one. There are exceptions, including connection-level settings and transactions.
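
To see that scoping rule in action, here is a quick illustration (mine, not from the original post); the second batch fails with a “Must declare the scalar variable” error, because @i ceases to exist the moment the first batch finishes:

-- Batch 1
DECLARE @i INT;
SET @i = 42;
PRINT @i; -- Works: @i is in scope within this batch
GO -- End of Batch 1

-- Batch 2
PRINT @i; -- Fails: Must declare the scalar variable "@i"
GO -- End of Batch 2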

From a flow-control point of view, then, I would like to point out two issues:

  • How does one correctly stop processing inside a batch, if a run-time error occurs?
  • How does one stop processing a whole script, if an error occurs in one batch, perhaps near the beginning of a long and complex script?

These two problems imply that two things need to happen: what Erland Sommarskog calls Batch-Aborting events, and what could be called Script-Aborting events. Batch-Aborting events are those that prevent a batch from executing the statements following one that caused an error. Script-Aborting events prevent the execution of following batches from the script.

Here is where the batch-script distinction becomes important: batches execute at the server, while a collection of batches, chained together, can only be controlled by the client. Put another way, a batch-aborting event has to be a server event: you have to direct the server to stop executing the batch in the middle. A script-aborting event has to be a client event; otherwise the client will simply submit the next batch, and execution will resume at the beginning of it.

Perhaps RETURN needs to Become Popular

I don’t often see the RETURN statement in T-SQL scripts, which is a shame. The simplest way I know of to stop the execution of a batch, at the server, is with RETURN.

RETURN “Exits unconditionally from a query or procedure. RETURN is immediate and complete and can be used at any point to exit from a procedure, batch, or statement block. Statements that follow RETURN are not executed.” – B.O.L

The RETURN statement, in a script, will direct the server to stop executing the current batch, without doing any further work. It can be combined with error checking to cause a batch to stop for a run-time error:

DECLARE  @pretendError INT;
SET @pretendError = 1;

PRINT 'This is the first statement in the first batch';

IF @pretendError != 0
  RETURN;

PRINT 'This second statement in the first batch will not execute';
GO -- End of Batch 1

-- Batch 2
PRINT 'This is the second batch';
GO -- End of Batch 2

When executed, the above sample will execute the first Print statement, then exit Batch 1, but will resume with the next batch the client submitted, so it will execute the last Print statement. The resulting output is:

This is the first statement in the first batch
This is the second batch

RAISERROR() is great, but unfortunately it does not have this behavior:

-- Batch 1
DECLARE  @pretendError INT;
SET @pretendError = 1;

PRINT 'This is the first statement in the first batch';

IF @pretendError != 0
  RAISERROR ('Raise Error is great but does not stop the batch', 11, 1);

PRINT 'This second statement in the first batch will STILL execute';
GO -- End of Batch 1

-- Batch 2
PRINT 'This is the second batch';
GO -- End of Batch 2

The output for this example is:

This is the first statement in the first batch
Msg 50000, Level 11, State 1, Line 6
Raise Error is great but does not stop the batch
This second statement in the first batch will STILL execute
This is the second batch

So, we can RAISERROR(s), but the remainder of the batch and script will still execute, unless we’re very careful to include some other structure like a RETURN, or a GOTO statement paired to a labeled error handler:

-- Batch 1
DECLARE  @pretendError INT;
SET @pretendError = 1;

PRINT 'This is the first statement in the first batch';

IF @pretendError != 0
  BEGIN
    RAISERROR ('Raise Error is great but does not stop the batch', 11, 1);
    GOTO handle_badness -- This is the thing that stops batch execution
  END

PRINT 'This second statement in the first batch will now be skipped';

GOTO batch_end

handle_badness:
      -- Perhaps clean up after the error
      PRINT 'This is where the error handling code could go';

batch_end:
      -- Do nothing

GO -- End of Batch 1

-- Batch 2
PRINT 'This is the second batch';
GO -- End of Batch 2
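
The transaction examples that follow insert into a simple table called test1; any table with a single character column will do, for example:

-- A minimal test1 table for the following examples (the exact shape is not important):
CREATE TABLE dbo.test1 (testcolumn VARCHAR(50));
GO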

In any case, note that the following example does NOT do what one might imagine, though I have seen this in production code. It doesn’t work correctly because it has transaction handling but not flow control:
-- Incorrect code - transactions implemented, but broken, because
-- flow control is missing:

DECLARE  @pretendError INT;
SET @pretendError = 1;

BEGIN TRANSACTION

      INSERT INTO test1 (testcolumn) VALUES ('First Row');

      IF @pretendError != 0 ROLLBACK;

      INSERT INTO test1 (testcolumn) VALUES ('Second Row');

      IF @@ERROR != 0 ROLLBACK;

      INSERT INTO test1 (testcolumn) VALUES ('Third Row');

      IF @@ERROR != 0 ROLLBACK;

COMMIT
GO

With the fake error I set near the top, a novice might think that none of the inserts succeed; in fact the second two inserts do succeed and commit, because nothing is done to stop the batch from continuing after an error, even with the ROLLBACK statements. The following example also doesn’t do what one might expect, because of the same problem:

-- Incorrect code - transaction handling does not work
-- because RAISERROR doesn't stop the batch:
DECLARE  @pretendError INT;
SET @pretendError = 1;

BEGIN TRANSACTION

      INSERT INTO test1 (testcolumn) VALUES ('First Row');

      IF @pretendError != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
        END

      INSERT INTO test1 (testcolumn) VALUES ('Second Row');

      IF @@ERROR != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
        END

      INSERT INTO test1 (testcolumn) VALUES ('Third Row');

      IF @@ERROR != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
        END

COMMIT
GO

RAISERROR() is great, but the batch will keep executing at the next statement. Erland’s article referenced above covers this quite well, so I won’t dwell on it. Suffice it to say, RETURN can be useful even in scripts:

-- One more correct method - transaction handling
-- AND flow control implemented:
DECLARE  @pretendError INT;
SET @pretendError = 1;

BEGIN TRANSACTION

      INSERT INTO test1 (testcolumn) VALUES ('First Row');

      IF @pretendError != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
            RETURN;
        END

      INSERT INTO test1 (testcolumn) VALUES ('Second Row');

      IF @@ERROR != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
            RETURN;
        END

      INSERT INTO test1 (testcolumn) VALUES ('Third Row');

      IF @@ERROR != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
            RETURN;
        END

COMMIT
GO

If RETURN and/or GOTO are too picky or involved for you, you can also investigate the connection setting XACT_ABORT. XACT_ABORT will roll back a transaction and abort the batch in the event of a runtime error. The problem is, it's OFF by default in Management Studio query connections. Consider this test table:

CREATE TABLE [dbo].[TransTest](
   [id] [int] IDENTITY(1,1) NOT NULL,
 CONSTRAINT [PK_TransTest] PRIMARY KEY CLUSTERED 
(
   [id] ASC
)
);
GO

I can simulate runtime errors by attempting to insert into the identity column. If XACT_ABORT is off, then a failed insert will be skipped, but the code following will still execute:

-- XACT_ABORT is off by default:
BEGIN TRAN
   INSERT INTO TransTest (id) VALUES ('wrong-o')
   INSERT INTO TransTest DEFAULT VALUES
   INSERT INTO TransTest DEFAULT VALUES
COMMIT
-- At this point we have one error AND two successful inserts AND a committed transaction

-- To halt and roll back, use xact_abort:
SET XACT_ABORT ON
BEGIN TRAN
   INSERT INTO TransTest (id) VALUES ('wrong-o')
   INSERT INTO TransTest DEFAULT VALUES
   INSERT INTO TransTest DEFAULT VALUES
COMMIT
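-- With XACT_ABORT ON, the failed insert aborts the entire batch and rolls back the
-- open transaction: neither DEFAULT VALUES insert runs, and the COMMIT never executes.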

There is some long-standing debate about which method, GOTO or RETURN in the midst of code, is uglier and/or harder to maintain. Both are hopefully rendered obsolete by Try/Catch – a topic for a future post. But style aside, any solution is better than missing this issue altogether!

So, we can exit a batch in the event of an error by issuing a RETURN, by using GOTO to hop to the end of the batch, or by setting XACT_ABORT. Importantly, all of those are server-side constructs and work only in the context of one batch. That is, it’s not possible to direct the server to GOTO a line in another batch in the same script, because at the server each batch is completely independent; one is not accessible from the other. Further, we typically can’t abort a batch by calling RAISERROR() – though SQL Server itself can and does sometimes issue batch-aborting errors.
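
To demonstrate that independence (this snippet is mine, not from the original post, and the label name is made up), a GOTO cannot jump to a label that lives in another batch; the first batch below fails to compile with an error along the lines of “A GOTO statement references the label 'second_batch' but the label has not been declared”:

-- Batch 1
PRINT 'Trying to jump into the next batch...';
GOTO second_batch; -- Error: this label is not visible here, because it lives in another batch
GO -- End of Batch 1

-- Batch 2
second_batch:
PRINT 'This label exists only within the second batch';
GO -- End of Batch 2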

On Second Thought, Maybe RAISERROR() isn’t so Bad

That brings me to control of multiple batches. RETURN will stop the server executing a batch, but since the server does not control multiple batches issued from the client, a script will then typically resume execution at the first statement in the next batch – that is, directly after the next GO. Often I don’t want any of the remaining script to run after an error, batches or no batches. Here’s a simple example of the problem:

-- Batch 1
DECLARE  @pretendError INT;
SET @pretendError = 1;

PRINT 'This is the first statement in the first batch';

IF @pretendError != 0
  RETURN; -- This will end batch 1, but batch 2 will still run!

PRINT 'This second statement in the first batch will not execute';
GO -- End of Batch 1

-- Batch 2
PRINT 'I don''t want this to run, but it will';
GO -- End of Batch 2

The trick is that control of more than one batch – that is, making an error in one batch prevent the following batches from running – is a client-side problem, not a server-side problem. For that, the best solution I know of is SQLCMD. In SQLCMD mode, it’s possible to have the client note that an error was raised in a batch and then stop running the script instead of continuing with the next batch. Here’s the same example, tweaked a little to stop the entire script on error:

:ON Error EXIT

-- Batch 1
DECLARE  @pretendError INT;
SET @pretendError = 1;

PRINT 'This is the first statement in the first batch';

IF @pretendError != 0
  BEGIN
      -- This RAISERROR() now signals SQLCMD
      -- to stop the whole script:
    RAISERROR ('Something Bad Happened', 11, 1)
    RETURN; -- This ends Batch 1
  END

PRINT 'This second statement in the first batch will not execute';

GO -- End of Batch 1

-- Batch 2
PRINT 'This is the second batch, and we want to prevent it from running';
GO -- End of Batch 2

Output is:

This is the first statement in the first batch
Msg 50000, Level 11, State 1, Line 9
Something Bad Happened
** An error was encountered during execution of batch. Exiting.

We need to do a total of four things to get this behavior:

  • Execute in SQLCMD mode in Management Studio (or use SQLCMD scripting in some other way)
  • Add the statement :On Error exit to the top of the script, so that raised errors will direct the client to stop the whole script, not just move on to the next batch
  • Raise errors with severity 11 through 19, so that the error is not treated as purely informational. I generally just use 11. B.O.L. seems to be wrong about this, based on my testing. Note that there is also special treatment for Error State 127 in some environments.
  • Still implement RETURN or GOTO (or Try/Catch) flow control inside each batch to stop the batch itself. If this is not present, the server is NOT directed to stop the current batch, even though the client will not run the next batch: the server will run all the remaining statements after the error in the current batch, and only then will the client stop the script.

All together, this will both direct the server to stop mid-batch in case of an error, and will direct the client to “notice” that an error was raised with RAISERROR() – severity above 10. SQLCMD will then cause the remainder of the script to halt. All this needs to happen irrespective of BEGIN TRANSACTION, ROLLBACK, and COMMIT, which are not flow-control structures.

All Together

Here’s a complete, if contrived, example:

:ON Error EXIT

-- Batch 1
DECLARE  @pretendError INT;

SET @pretendError = 1;

BEGIN TRANSACTION

      INSERT INTO test1 (testcolumn) VALUES ('First Row');

      IF @pretendError != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
            RETURN;
        END

      INSERT INTO test1 (testcolumn) VALUES ('Second Row');

      IF @@ERROR != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
            RETURN;
        END

      INSERT INTO test1 (testcolumn) VALUES ('Third Row');

      IF @@ERROR != 0
        BEGIN
            RAISERROR ('Something Bad Happened', 11, 1);
            ROLLBACK;
            RETURN;
        END

COMMIT

GO -- End of Batch 1

-- Batch 2 (Should NOT execute in case of failure in Batch 1)
PRINT 'This second batch should not run'
GO -- End of Batch 2

Breaking this down: when the example runs, in SQLCMD mode, the first batch will not insert any rows – because of the combination of BEGIN TRANSACTION, ROLLBACK, and, importantly, RETURN. Without the RETURN statements, the first insert statement would be rolled back, but the next two would succeed as the batch continues to execute even after the error was detected. With the RETURN statements, the server is directed to stop immediately, with the effect that control seems to “skip down” to the next line after the GO statement. That’s an imperfect analogy, but that is roughly the effect from the client side.

Next, the combination of setting :On Error exit AND Raiserror() statements with severity 11 will cause SQLCMD to also stop execution of the script, which would otherwise continue at the next batch. This means that Batch 2 will be prevented from executing after there is an error in Batch 1.

This example is simplified to illustrate those main points, and obviously the error condition is faked, so real production code might not look exactly like this. You may want to add still more logic such as capturing the value of @@error in a local variable, or checking @@trancount, two additional techniques that are covered well elsewhere. The main take-aways are that transaction handling and flow control are separate problems, and both need to be managed explicitly. Flow control inside batches and flow control between batches are also separate problems, each of which has a slightly different solution.
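
For instance, here is a minimal sketch of the @@ERROR-capture pattern (my illustration, not part of any script above): because @@ERROR is reset by every statement, copy it into a local variable immediately after the statement you want to check, and test @@TRANCOUNT before rolling back:

DECLARE @err INT;

BEGIN TRANSACTION;

INSERT INTO test1 (testcolumn) VALUES ('First Row');
SET @err = @@ERROR; -- Capture immediately; the next statement would reset @@ERROR

IF @err != 0
  BEGIN
    RAISERROR ('Something Bad Happened', 11, 1);
    IF @@TRANCOUNT > 0 ROLLBACK; -- Only roll back if a transaction is actually open
    RETURN;
  END

COMMIT;
GO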

[Shout out to Simple Talk and to Aaron Alton for a code beautifier that works for a blog noob like me.]

Published Friday, July 24, 2009 9:17 AM by merrillaldrich


Comments

 

Ben Thul said:

Sometimes, you don't have the ability to alter the script you're running to put the ":ON Error EXIT" in it.  I think that the -b flag passed to sqlcmd accomplishes the same thing.  Great article!

July 24, 2009 1:22 PM
 

noeldr said:

What about :

BEGIN TRY

....

END TRY

BEGIN CATCH

...

END CATCH

July 24, 2009 3:53 PM
 

merrillaldrich said:

@Ben - thanks!

@noeldr - You're completely right; that'll be in the next installment

July 24, 2009 11:46 PM
 

Simon said:

How do you stop a script that has already been running for too long?

October 15, 2009 10:02 AM
 

Jack said:

Nice in-depth article! But it doesn't solve my problems completely.

One opens a script in sql server management studio, forgets to enable the sqlcmd mode and runs the script.

The ":ON Error EXIT" gives a "Incorrect syntax near ':'.". The first batch isn't run. But the following batches are still executed.

February 2, 2010 9:29 AM
 

Jack said:

Oh duh...

When each batch starts with ":ON Error EXIT" no batch will ever execute when management studio isn't in sqlcmd mode. Our scripts are generated automatically, so adding the statement is no problem at all.

Thank you!

February 2, 2010 9:42 AM
 

Doug said:

Excellent post! What I've been looking for to help me institute more rigor in my queries. Thanks!

April 2, 2010 1:58 PM
 

Jan Hoogendoorn said:

Maybe the solution below can help.

SET NOCOUNT ON
CREATE TABLE ##BatchTbl (Status INT PRIMARY KEY)
INSERT INTO ##BatchTbl VALUES (0)
GO

/* Two types of errors can occur in the statements below:
(1) Errors that cause the batch to stop, e.g. the UserDetail synonym connects to a nonexistent object
(2) Errors that don't cause the batch to stop, e.g. the UserDetail table is empty
Both errors cause the execution to stop completely
*/

Print 'When following statements fail inspect the databasename in the Synonym Statements'
Update ##BatchTbl Set Status = 3
IF EXISTS (Select * from UserDetail) Update ##BatchTbl Set Status = Status-1
IF EXISTS (Select * from StudentFile) Update ##BatchTbl Set Status = Status-1
Print 'Synonym statements are OK'

Print 'When following statement fails inspect the databasename in the Use Statement'
IF EXISTS (Select * from HSA_Synchronisations) Update ##BatchTbl Set Status = Status-1
Print 'Use statement is OK'
GO

-- completely stop the batch processing when errors occur
If (Select Status from ##BatchTbl) > 0
  RAISERROR ('Errors occured.  Processing is stopped', 20, 1) WITH LOG
GO

Print ' When there are errors I should not see this'

Regards, jhoogendoorn@educator.eu

September 22, 2010 8:32 AM
 

Tobias Brandt said:

Fantastic article!

This is the first place where I've seen anyone make any sense on this topic. And what a comprehensive summary with very clear and concise examples.

Until I read this I never understood what GO was for and how this related to transactions.

I really can't praise this article enough. Well done sir!

September 30, 2010 2:43 PM
 

merrillaldrich said:

@Tobias - thank you, that's great to hear

October 2, 2010 4:49 PM
 

 

Jerry said:

Are nested loops possible. I mean a switch in an if statement

June 14, 2012 2:38 AM
 

shravan said:

hi,

  I have GO statements in my SQL query. I need to incorporate the query in VB.NET code, but GO statements are not recognized in VB.NET code. How should I go about converting the query so that I could include it in VB.NET code and execute it successfully?

October 12, 2012 3:43 AM
 

shravan said:

please someone answer my question;its very urgent  

correction its vb code not vb.net

thanks

October 15, 2012 5:02 AM
 

merrillaldrich said:

Just submit each batch (between the 'GO' statements) separately, in sequence.

October 15, 2012 10:27 AM
 

Brustmopf said:

Wow, this blog post might be a bit old, but it certainly is very clear !

I'm working in a non-IT industrial environment and got sort of parachuted in the design of a production database with an sql server back end and I've been browsing the web to understand how in hell does it work, and this is the first piece of information which actually explained it in a way that makes it as if a 10yo could do it (which in SQL skill is roughly my age).

Thanks a million for this.

If you do have a post on "temp tables vs table variables: pros and cons" I'm buying...

June 20, 2013 10:57 PM
 

 

