THE SQL Server Blog Spot on the Web


Andy Leonard

Andy Leonard is an author and engineer who enjoys building and automating data integration solutions. Andy is co-host of the Data Driven podcast. Andy is no longer updating this blog. His current blog is

Database Deployment

This blog has moved! You can find this content at the following new location:

Published Thursday, February 18, 2010 8:00 AM by andyleonard




Kevin Griffin said:

Our approach has been to design/build the database on a test server, and then use SSMS to build scripts to deploy to production servers.

We've also been known to make a backup of a clean test server and deploy that to production.

February 18, 2010 8:19 AM

Chris Howarth said:

We use "SQL Compare" to generate a single release script. To generate the script, the source side of the comparison is pointed at a folder containing script files (one file per object, held in TFS), and the target side is pointed at an empty folder. SQL Compare then generates a synchronisation script.

Reference tables are populated using SQL statements (created during development) that are manually added to the synchronisation script.

At deployment time, a new database is created on the target server and the single script executed.
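Those hand-written reference-data statements are often easiest to maintain as upserts, so the combined release script stays safe to re-run. A minimal sketch in T-SQL (the table and column names are hypothetical):

```sql
-- Idempotent population of a hypothetical reference table.
-- Re-running the release script inserts missing rows and corrects changed ones.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Closed')
) AS source (StatusId, StatusName)
ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName)
    VALUES (source.StatusId, source.StatusName);
```

On pre-2008 servers, guarded INSERT ... WHERE NOT EXISTS statements achieve the same rerunnable effect.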

February 18, 2010 8:32 AM

@foreachdev said:

Keep change scripts organized by release, with a naming convention that encodes execution order (ddl_0_scriptdescription.sql files are run first; dml_0_scriptdescription.sql files are run after all DDL). Programmability objects like stored procs and UDFs are stored separately, in folders named after the object types, and are written in a way that makes them rerunnable.

Deployment/staging strategy:

1) Back up the database.

2) Run change scripts: DDL scripts, then DML.

3) Run programmability scripts.

Of course you have a deployment script that automates the whole process.
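A rerunnable programmability script in that style might look like the following sketch (the object and column names are made up):

```sql
-- Safe to run on any environment, any number of times.
IF OBJECT_ID(N'dbo.uspGetActiveOrders', N'P') IS NOT NULL
    DROP PROCEDURE dbo.uspGetActiveOrders;
GO
CREATE PROCEDURE dbo.uspGetActiveOrders
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders
    WHERE IsActive = 1;
END
GO
```

One caveat of DROP/CREATE: explicit GRANTs on the procedure are lost, so permission statements belong in the script as well.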

February 18, 2010 8:39 AM

Jim Pendarvis said:

This is easy: I find someone who knows what they are doing and hire them to do it.

February 18, 2010 9:10 AM

Preethi said:

We generally write scripts for schema changes and keep them in a source control system.

Stored procedures are kept there too, in separate files.

We write rollback scripts for schema changes, and we have written a tool that backs up stored procedures.

We have written another tool that gathers all the necessary stored procedures and patches and bundles them together.

We release first to QA environments and finally to production.

February 18, 2010 9:10 AM

Jake Morrison said:

I use a folder structure that has a top-level folder for each version of the database (1.0, 1.1, 2.0).  Then, under each version, there are subfolders for the different types of objects (SPs, Views, etc).

I create one big script that handles all of the table creation and/or manipulation.  I have found that using separate scripts for each table or table operation has always resulted in problems maintaining the proper order of execution.  All other objects (Functions, Views, SPs, etc) are scripted separately and placed in their respective folders.

I then use an install script that uses some of the SQLCMD syntax to import the other script files and execute them in the proper order.  This install script also has SQLCMD variables at the top of the script that can be easily modified to point the script to a different database.

Also, the table modification script is in a transaction and each statement first checks to make sure the transaction is still active.  This way, if statement 14 fails, the transaction rolls back and statements 15 - 1000 do not get executed.

Although this is a bit of work to initially set up, it provides for easy maintenance and very easy, repeatable deployments.  I just need to go into the folder for whatever version I want to deploy, and run install.sql using the SQLCMD utility.  If I need to jump multiple versions, I just need to run the install script for each intermediate version.
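The guarded-transaction pattern described for the table script might be sketched like this (all object names are hypothetical):

```sql
-- Fragment of the single table script. With XACT_ABORT ON, any run-time
-- error rolls back the transaction and aborts the current batch; the
-- IF @@TRANCOUNT guards then skip every later batch, so if statement 14
-- fails, statements 15 - 1000 never execute.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
GO
IF @@TRANCOUNT > 0
    CREATE TABLE dbo.OrderAudit
    (
        AuditId int IDENTITY(1, 1) PRIMARY KEY,
        OrderId int NOT NULL
    );
GO
IF @@TRANCOUNT > 0
    ALTER TABLE dbo.Orders ADD ShippedDate datetime NULL;
GO
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;
ELSE
    PRINT 'Deployment failed; all changes were rolled back.';
GO
```

In the install script, SQLCMD's :setvar defines the target-database variable and :r pulls in this file and the per-object scripts in the proper order.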

February 18, 2010 10:14 AM

Jonathan Gardner said:

As any good DBA will tell you, my answer is going to be "it depends."

When I deploy a 'new' database from a custom project I will design and build it in Visual Studio.  When I build the package VS writes a deployment .sql script that I will run from SSMS.

February 18, 2010 10:28 AM

Peter said:

We're working on getting VSTS DB Pro (aka Datadude) up to speed. That will let us branch our DBs appropriately and just push those branches up the chain. So far it's worked reasonably well, but we haven't really worked on the branching yet. Once we get that done, life should be a little easier.

February 18, 2010 11:59 AM

Dan said:

1. Create a snapshot (we have ongoing backups/tlogs).

2. Load the SQL Compare script into SSMS and run it.

3. Push any scripts that modify data (lookup tables, etc.).

4. Save the scripts into SVN.

That's the process for small patches, for the most part. Depending on the project, I may also make a full backup of the db, or detach/attach a db. Of course, we push to our STAGING server first and test.

February 18, 2010 12:19 PM

Creighton said:

Visual Studio for database professionals + TFS for source control has been an absolute pleasure to work with for handling database build and deployment.  I have worked with Redgate and hand-rolled solutions before, but this set of tools has enabled us to spend much less time producing builds and more time developing.


February 18, 2010 12:48 PM

Dave Wentzel said:

We have a Scripts folder in our source control system that follows agile and iterative database development practices.  Subfolders are organized like 1Tables, 2Indexes, 3ForeignKeys, 4Functions, 5Views, 6Procedures, 7ModelData, etc.

Within each subfolder are .sql files that must be "rerunnable", meaning I can run them on any db/machine and when the script is complete all dbs (new or upgraded) look identical.

We then run a vbscript file that cycles through each folder, merging all of the .sql files alphabetically into one large file, then executes it using sqlcmd.  Output is logged and errors will always be visible by searching for "Msg ".  If there is a dependency issue (FunctionA.sql depends on FunctionB.sql) we simply rename the dependency so that it executes first (00FunctionB.sql). Note that the folders are "by module" and are executed in order (hence 4Functions before 6Procedures, since a missing function will cause a procedure not to compile).

The trick to "rerunnable" files is obviously to ensure you run code like "IF EXISTS/NOT EXISTS ...DROP/CREATE/ALTER".  But this can be confusing for developers, especially when modifying a non-clustered primary key to be clustered.  To simplify we create "property procedures" that handle making a database "object" look exactly like its parameters/properties being passed, without needing to know DDL nuances.  

For instance, let's say column Bar in table Foo is declared varchar(20) NOT NULL in release A, varchar(200) NOT NULL in release B, and nvarchar(256) NULL in release C.

My property procedure call looks like this:

EXEC AlterCol @tbl = 'Foo', @col = 'Bar', @datatype = 'NVARCHAR', @len = 256, @AllowNulls = 'Yes'

Now it doesn't matter whether the db this runs against is new or being upgraded from release A to C or B to C; the stored proc handles the necessary ALTER TABLE ... ALTER COLUMN command.  It will also handle dependent FKs or indexes that would block certain ALTERs.
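A heavily simplified sketch of such a property procedure, to show the shape of the idea (the real one must also handle dependent FKs and indexes, as noted; the catalog checks here cover only type, length, and nullability):

```sql
CREATE PROCEDURE dbo.AlterCol
    @tbl        sysname,
    @col        sysname,
    @datatype   sysname,
    @len        int,
    @AllowNulls varchar(3)
AS
BEGIN
    DECLARE @sql nvarchar(max);

    -- Issue DDL only when the column's current definition differs
    -- from the requested one, so the call is rerunnable.
    IF EXISTS (SELECT 1
               FROM sys.columns AS c
               WHERE c.[object_id] = OBJECT_ID(@tbl)
                 AND c.[name] = @col
                 AND NOT (TYPE_NAME(c.user_type_id) = @datatype
                          AND c.max_length = CASE WHEN @datatype IN (N'nchar', N'nvarchar')
                                                  THEN @len * 2 ELSE @len END
                          AND c.is_nullable = CASE WHEN @AllowNulls = 'Yes' THEN 1 ELSE 0 END))
    BEGIN
        SET @sql = N'ALTER TABLE ' + QUOTENAME(@tbl)
                 + N' ALTER COLUMN ' + QUOTENAME(@col)
                 + N' ' + @datatype + N'(' + CAST(@len AS nvarchar(10)) + N')'
                 + CASE WHEN @AllowNulls = 'Yes' THEN N' NULL' ELSE N' NOT NULL' END;
        EXEC (@sql);
    END
END
```

Note that sys.columns reports max_length in bytes, hence the doubling for Unicode types.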

February 18, 2010 2:03 PM

Rafael Salas said:

I use VS 2008 DB projects (Data Dude) and keep the projects under source control in TFS. DB projects have some rough areas, especially when handling set-up/static data and permissions, but versioning and deployments are powerful. I will be showing some of that in one of my sessions at SQL Saturday #33, in case you are interested.

February 18, 2010 5:24 PM

Rob Farley said:

A series of scripts that I run using PowerShell and sqlcmd.

February 19, 2010 5:38 AM

Zack Jones said:

We back up the database from server A, copy the backup to server B, and restore it.
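In T-SQL terms, that backup/copy/restore flow is roughly the following (paths, database name, and logical file names are hypothetical):

```sql
-- On server A:
BACKUP DATABASE MyAppDb
    TO DISK = N'\\fileshare\backups\MyAppDb.bak'
    WITH INIT;

-- On server B, after copying the .bak file locally:
RESTORE DATABASE MyAppDb
    FROM DISK = N'D:\Backups\MyAppDb.bak'
    WITH MOVE N'MyAppDb'     TO N'D:\Data\MyAppDb.mdf',
         MOVE N'MyAppDb_log' TO N'D:\Data\MyAppDb_log.ldf',
         REPLACE;
```

The MOVE clauses matter when server B's drive layout differs from server A's.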

February 19, 2010 8:09 AM

rjbook said:

Currently I have a "semi-automated" database build process, which is a combination of SQL scripts and a SQL job running the OSQL command to call the scripts (yes, I know I am outdated for not using PowerShell). We are starting to look at the build capabilities of Visual Studio 2010 now that it has gone to Release Candidate, and hope to have a "fully automated" database build someday soon.

February 19, 2010 11:54 AM

AaronBertrand said:

I am in a relatively unique environment using a multi-tenant model (many databases identical in schema).  Schema and module changes are made in the QA environment, then deployed to a staging database using a comparison script generated by SQL Compare.  When the changes pass all tests, the same script is run against the production databases.  There are several approaches to this, but the one I'm used to is SQLFarms Combine... it can run the same script against all 500+ databases and does many of them concurrently (my setting is 10 threads).
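SQLFarms Combine is the tool described above; as a rough illustration of the underlying idea, a plain single-threaded T-SQL loop over the tenant databases might look like this (the name filter and script body are hypothetical stand-ins):

```sql
-- Illustrative only: run one statement against every tenant database,
-- one at a time, standing in for what a multi-database tool does
-- concurrently with worker threads.
DECLARE @db  sysname,
        @sql nvarchar(max);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT [name] FROM sys.databases WHERE [name] LIKE N'Tenant%';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; ' +
               N'PRINT DB_NAME();';   -- the deployment script body goes here
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
```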

I assume source control etc. is a separate question.  We recently switched from Visual SourceSafe to SVN and I must admit I have not yet completed the transition.

February 19, 2010 1:40 PM

Neeraj said:

We use Subversion (SVN), which has helped a lot in deploying DB code to every environment (in our case Dev\Test\Stage\Live). We developed an automated process that runs the svn diff command to get the difference between two SVN branches and updates the local SVN workspace. Finally, we run OSQL to deploy the DB code to all the servers. The deployment process even sends mail to the developer with error information if a script fails to deploy.

February 20, 2010 3:20 PM

granadaCoder said:

Every db object (table, view, procedure, trigger, function, etc.) is kept in a separate file.

All files are kept in source control (svn).

Each file is coded to use SQLCMD variables.

Database names and user names (for db security) are NEVER hard-coded.

Then I keep a "manifest" file.

I use the manifest file to define Ordinality of the scripts.

Then I have an xsl file which creates SQLCMD calls (using the xml below).

Each SQLCMD call logs results.  Errors are also persisted to file.

The SQLCMD calls are arranged in order of the "RunOrdinal" below.

I can create different manifest files based on the package I am releasing.

If all I am doing is updating all stored procedures, my manifest file contains references to each stored procedure file.

If I need a "hot fix" for one procedure, the manifest file will have a reference to the single procedure .sql file I need for the hotfix.

My CruiseControl.NET actually performs the xml/xsl transformation for me.  And zips the results.

Basically, I use the pragmatic practice of "create code that creates code".

QA tests the results of the xml/xsl transformation.

Here is a sample manifest file (at the end of this post).

The values of the RunOrdinal below don't actually matter.  Except that they form some kind of order.

I create the schema, the table, the stored procedure in a certain order.  The table needs the schema.  The stored procedures need the table. Etc, etc.

I write all procedures in an idempotent manner.

Yes, I *already* know that the word idempotent looks a lot like the word impotent.  You're hilarious. I get it, it's really funny.    But they are not the same word.

And I use the practices from Andy's presentation (and online articles) on Test-Driven+Development.  (Shout out to Trinug User Group! and Trinug Code Camp!).

So when sqlcmd executes the files, there are nice logs of what happened and did not happen.

Since using this approach, the drama in my life has decreased 1000 percent.  I go home "on time".  The scripts are QA tested, so scripts that get executed in production have already been tested.

Let me say it again, the db deployment drama in my life has essentially ceased to exist in my work life.

Unfortunately, I cannot share the xsl code.

It took about 3.5 days to write it.  I have saved myself 100 times that many hours.  Best thing I ever did.

Getting it working with CC.NET took another 2 days.

Best thing I ever did.

Did I mention I go home on time?

I long abandoned "right click/script out" methods of deployment.

Again, this is my opinion.

I can only judge the results (in my limited experience).

And since I went to SqlCmd, I've never had to stay late at work.

I hope the xml will show up.

<?xml version="1.0" encoding="utf-8"?>

<DeploymentFile enabled="true" RunOrdinal="10">
       <FileToExecute>.\Create Scripts\schema\ReportingSchema.sql</FileToExecute>
</DeploymentFile>

<DeploymentFile enabled="true" RunOrdinal="20">
       <FileToExecute>.\Create Scripts\ddl\dbo\Department.sql</FileToExecute>
</DeploymentFile>

<DeploymentFile enabled="true" RunOrdinal="30">
       <FileToExecute>.\Create Scripts\usp\dbo\uspDepartmentGetAll.sql</FileToExecute>
</DeploymentFile>

<DeploymentFile enabled="true" RunOrdinal="40">
       <FileToExecute>.\Create Scripts\DataPopulate\BaselineData\DepartmentPopulate.sql</FileToExecute>
</DeploymentFile>


August 5, 2011 5:55 PM
