
Jamie Thomson

This is the blog of Jamie Thomson, a data mangler in London working for Dunnhumby

Response to “SSDT limitations”

A commenter on my blog post Considerations when starting a new SSDT database project asked me to comment on his blog post SSDT Limitations. I wrote a response, but when I tried to post it I was told it was too long:

[screenshot of the error message saying the comment was too long]

so I’m posting my response here instead. I’d suggest taking a read of the commenter’s post before reading on.


"SSDT does not handle multiple file groups"
I haven't experienced this. I'm pretty sure I've used SSDT to deploy multiple filegroups. Can you clarify what is meant by "does not handle"?

"SSDT wants to control everything"
I'd restate that as "SSDT gives you the option to control everything". If you don't want to put all your database objects into your SSDT project, then don't.

"Production will not look like UAT. For SSDT, everything looks the same."
Yes, this is a problem. SSDT espouses a mantra of "build once, deploy anywhere" but this breaks down if you want something to be different on each environment - security is the obvious thing here. My approach is to specify roles in the database project that have permissions assigned to them and then later (perhaps in a post-deployment script) add users into those roles. I describe this more fully here: A strategy for managing security for different environments using the Database Development Tools in Visual Studio 2010
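
For illustration, here's a minimal sketch of that approach; the role, schema and user names are invented, and $(Environment) is an assumed SQLCMD variable supplied at deploy time (ALTER ROLE ... ADD MEMBER is SQL Server 2012+; use sp_addrolemember on older versions):

-- In the database project, the role and its permissions are ordinary objects:
CREATE ROLE [ReportReaders];
GO
GRANT SELECT ON SCHEMA::[dbo] TO [ReportReaders];
GO

-- In Script.PostDeployment.sql, membership varies per environment:
IF '$(Environment)' = 'PROD'
    ALTER ROLE [ReportReaders] ADD MEMBER [ProdReportingUser];
ELSE
    ALTER ROLE [ReportReaders] ADD MEMBER [DevReportingUser];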

"This means storing in version control production:
-Logins & passwords
-app role passwords
"
If you're storing passwords in source control then that must mean you're using SQL authentication instead of Windows authentication - that's your decision and I don't see how that is a shortcoming of SSDT. If you don't want to store them in source control - don't. SSDT has mechanisms that allow you to specify values at deployment time (search for SSDT SQLCMD variables). Bottom line, if you're storing passwords in source control then more fool you - nothing in SSDT forces you to do this.
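
To sketch that mechanism (the variable names here are invented, not anything SSDT mandates), a post-deployment script can reference values that are only supplied at deploy time, so nothing sensitive ever lives in source control:

-- Script.PostDeployment.sql
-- $(AppLoginName) and $(AppLoginPassword) are substituted textually at
-- deploy time, e.g. SqlPackage.exe /v:AppLoginName=... /v:AppLoginPassword=...
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'$(AppLoginName)')
BEGIN
    CREATE LOGIN [$(AppLoginName)] WITH PASSWORD = N'$(AppLoginPassword)';
END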

"It is generally preferred to at least have the option to inspect what will be deployed"
Correct. And that is why (as you pointed out) SSDT has "the option of either publishing directly to the database or generating a script to run for the deployment."
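
For anyone who hasn't used it, generating the reviewable script rather than publishing directly looks something like this (server, database and file names are placeholders):

SqlPackage.exe /Action:Script ^
    /SourceFile:MyDatabase.dacpac ^
    /TargetServerName:PRODSERVER ^
    /TargetDatabaseName:MyDatabase ^
    /OutputPath:Upgrade.sql

The generated Upgrade.sql can then be inspected (and run manually if you prefer) before anything touches the target database.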

"Unfortunately for SSDT the upgrade generates a single file. In practice this means is that if the database upgrade fails for any reason recovery is rather complex. You'll need to work out where the upgrade script failed and then what action you will take from there."
True. There are things you can do to mitigate this:
-Test your deployments on a copy of your production database first
-Do small, frequent releases rather than the big bang approach of infrequently deploying large, monolithic sets of changes

"The product feels rather like a beta release, bringing even a simple project into production will expose some pretty serious limitation."
A beta release (to me) implies that it doesn't behave the way it's been designed to. I put it to you that it behaves exactly the way it's been designed to; it just so happens that you don't like that behaviour. Fair enough if that's the case - you're not alone in that regard. There are other options for doing deployments if you don't like the approach that SSDT espouses. Perhaps look at migrations (https://msdn.microsoft.com/en-gb/data/jj591621.aspx?f=255&MSPPError=-2147217396) or the old-fashioned method of manually writing change scripts.

"However the largest issue is more fundamental: the approach where you are expected to trust the magic box to generate a valid upgrade script."
Yes, you do have to put your trust in SSDT. If you're not happy doing so, don't do it, or just use the option to generate the deployment script and not run it. I personally am happier putting my trust in a repeatable algorithm rather than human beings who tend to be error prone. That's just me, your mileage may vary.

"It is very easy to generate scripts that will fail to upgrade the database successfully."
There are certain changes that SSDT is not easily able to cater for (e.g. adding a non-nullable column to a table that contains data), however I'm not sure that constitutes "very easy to generate failing scripts". Personally I think SSDT is better at generating working deployment scripts than a human being is, because it's quicker and less error-prone. As I've already alluded to, it's imperative that you test your deployments before pushing to a production database.
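
For the non-nullable column case specifically, the usual workaround is to supply a default so that existing rows can be populated; the table and column here are invented, but the change SSDT needs is along these lines:

-- Without the DEFAULT this would fail against a table that contains data.
ALTER TABLE [dbo].[Customer]
    ADD [CountryCode] CHAR(2) NOT NULL
        CONSTRAINT [DF_Customer_CountryCode] DEFAULT ('GB');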

You asked me to comment and I have done. Clearly we have a difference of opinion about SSDT and that's OK. I hope you can find a way around your complaints because I honestly believe SSDT is the best tool out there for managing SQL Server deployments - I hope you come to the same conclusion one day.


Regards
Jamie

Published Friday, February 20, 2015 1:22 PM by jamiet

Comments

 

James Snape said:

Actually, if it fails halfway through then the next time you run it the diff processing will detect the changes already applied and skip them - try that with hand-coded scripts.

What I would say though is:

1. Never change any managed schema objects in a pre-deploy script (copying data to a temp [not #temp] table is OK), because the schema scan happens before pre-deploy runs.

2. Push everything else custom into your post-deploy script and make it idempotent.
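
A minimal sketch of point 2, using an invented reference table - the EXISTS guard is what makes the script safe to run repeatedly:

-- Script.PostDeployment.sql runs on every deployment,
-- so every statement must tolerate having already run.
IF NOT EXISTS (SELECT 1 FROM [dbo].[Currency] WHERE [Code] = 'GBP')
BEGIN
    INSERT INTO [dbo].[Currency] ([Code], [Name])
    VALUES ('GBP', 'Pound Sterling');
END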

February 20, 2015 7:55 AM
 

jamiet said:

Amen to that James

February 20, 2015 8:02 AM
 

George Walkey said:

I'm getting used to the V12 SSDT version and like the validation rules the tool brings to SQL versioning and project control. Checking in a database is now possible.

We have found many broken references in both in-house and commercial software. I *believe* you can control which objects SSDT brings into the project and you can skip them, but then you will have one template (with live logins, for example) for Prod and another for dev with broken references to those missing logins etc.

Being able to load a 2008 DB and re-target it for 2014 or Azure, showing any errors, is a great pre-flight migration check.

I haven't even gotten to the Code Analysis bits, but the Errors and Warnings, once cleaned up, help to create a "buildable" SQL database.

Granted, SSDT can't do query plan evaluations or many of the other things SSMS is still better at, but it's another great tool. SQL DBAs should learn this one.

My only gripe is that schema changes blow away your data at Publish, but don't we already handle that manually?

February 20, 2015 8:26 AM
 

Armando Prato said:

Here would be my response:  THEN DON'T USE IT.  Very short and to the point.  

I personally love SSDT as its declarative nature frees me up to do more interesting things than creating install and update scripts.

George:  schema changes shouldn't blow away any data unless you're talking about dropping existing columns?

February 20, 2015 8:03 PM
 

David Cameron said:

Hi Jamie,

It's good to hear a response to this.

Replying to some of your comments:

- Filegroups - this was ~2 years ago. My recollection was that no matter what filegroup was used, it would be changed to PRIMARY. This might have been resolved.

- Version everything - SSDT would pick up things like master keys and users. Clearly this is an issue as they should not match across environments. See my comment on environments.

- Logins / passwords - I'm glad you have the luxury of controlling exactly how the production environment operates with respect to authentication. This is not a luxury that everyone shares. The passwords are part of the problem, but the user accounts are just as much of an issue: you would not want test accounts having access to production. There might be better ways to implement this using SQLCMD variables (which we did use as part of the deployment), however this didn't resolve the underlying problem of SSDT trying to deploy test database auth into production.

- Single file - in the environment at the time, releases were infrequent. This was not our preference, but it was the reality. However we also found that the scripts kept growing due to the need to write custom post-deploy scripts (SQLCMD-variable based).

- Beta release - the filegroup and account management in SSDT gave me that strong impression. While there were issues that were generated by the environment, there were some fundamental issues with SSDT.

To be clear, this was 2 years ago. The product may well have moved on. At the time it definitely did not suit our environment.

Another blog post I wrote might be of interest, comparing the SSDT approach with another option, dbUp:

http://www.uberconcept.com/2012/10/database-version-control.html

Lastly, I'd trust a repeatable process that gets run multiple times most of all. As an alternative approach I've used something like dbUp, run against a backup of production database to verify migrations (with the whole process automated).

There are other issues that come into play with SSDT:

- How do you run the deployment from a build server or a deployment server? Would this require Visual Studio on the build server?

- This tends to require a higher level of access than might be permitted. You'll end up either giving developers access to production databases or giving the build/deploy server access across all environments, including production. This could well be an issue and creates problems with separation of concerns.

I think SSDT might work well in some environments but not all.

February 22, 2015 5:57 AM
 

Bill Anton said:

Fwiw, I shared similar sentiments the first time we used SSDT on a project back in late 2012. In retrospect, a lot of the issues (at least for me) had to do with a poor understanding of the "end-game" and a true "enterprise-style" database release process.

For example...

- importance of having a "true" test/integration environment where a PROD backup is restored before executing the release via SQLPackage.exe + dacpac + pub.xml, giving everyone confidence in the release before it actually happens.

- understanding the concept and importance of "migration"/"data movement" scripts and under which scenarios these may be important (e.g. to avoid costly data movement if/when making certain schema changes to large tables).

- exposing all environment-dependent code paths/branching logic via SQLCMD variables in the pub.xml (build once, deploy many) - see the sketch after this list

- finally wrapping my head around the meaning of "idempotent" (though with a proper test/integration environment, and familiarity with the quirks in release-script generation, this becomes less of an issue)
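
On that SQLCMD-variables point, here is a rough sketch from memory of the publish-profile format - the variable name and value are invented, so verify against a pub.xml generated by Visual Studio:

<!-- MyDatabase.PROD.publish.xml (hypothetical file name) -->
<ItemGroup>
  <SqlCmdVariable Include="Environment">
    <Value>PROD</Value>
  </SqlCmdVariable>
</ItemGroup>

Post-deployment code can then branch on $(Environment), so the same dacpac deploys everywhere.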

At this point, I could never imagine going back and I dread any project where the team doesn't want to use SSDT.

February 23, 2015 9:49 AM
 

Graham Smith said:

I have a blog post series on how to implement continuous delivery with TFS and Release Management which incorporates deploying a database managed under SSDT. It doesn't address all SSDT circumstances of course but might be useful for whetting the appetites of anyone wondering where to start with all this:

http://pleasereleaseme.net/continuous-delivery-with-tfs

One core principle I try to stick to is to separate out environment-specific considerations and apply them on an environment-by-environment basis after deploying the core database changes.

February 24, 2015 8:03 AM
 

Enders said:

"Unfortunately for SSDT the upgrade generates a single file. In practice this means is that if the database upgrade fails for any reason recovery is rather complex. You'll need to work out where the upgrade script failed and then what action you will take from there."

True. There are things you can do to mitigate this:

-Test your deployments on a copy of your production database first

-Do small, frequent releases rather than the big bang approach of infrequently deploying large, monolithic sets of changes

Not true.

On the publish settings, check INCLUDE TRANSACTIONAL SCRIPTS and you get one big transaction; if it fails, the transaction is rolled back. I am not sure whether the pre-deployment scripts are included. Post-deployment scripts are not executed, since the deployment has failed.
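
For reference, the same option can be baked into a publish profile rather than ticked in the dialog each time. As far as I know the underlying property is called IncludeTransactionalScripts (quoted from memory - verify against a profile saved by Visual Studio):

<!-- Inside the .publish.xml -->
<PropertyGroup>
  <IncludeTransactionalScripts>True</IncludeTransactionalScripts>
</PropertyGroup>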

March 12, 2015 8:25 AM
 

jamiet said:

Very good point Enders, thanks for sharing.

March 12, 2015 8:34 AM
 

AdrianD said:

I could not agree more with your point, Jamie, regarding testing the deployment before moving into UAT/Production.

In my opinion SSDT projects should be treated in exactly the same way as managed code or an Integration Services project, for example.

Maintaining a separate build profile for each environment does help eliminate difficulties surrounding maintaining logins, users, role memberships etc. However, if the intention is not to rebuild a database from scratch, then maintaining users outside of SSDT could be an option.

March 15, 2015 3:21 AM
 

Mike said:

I realize this is an old post, but I'm curious as to what the "migrations" are that you're referencing.

The link doesn't seem to work.

August 18, 2015 4:06 PM
 

David Atkinson said:

I'm curious about what that broken migrations link was as well. Did it describe and discuss the migrations-driven methodology taken by Flyway, Liquibase and ReadyRoll?

May 24, 2016 10:45 AM
 

Bharat said:

@George Walkey

"My only gripe is that schema changes blow away your data at Publish, but don't we already handle that manually?"

I tested this with VS2015 and it works fine. In my test I changed the schema, created a table, inserted some data into it, then went back to the previous schema and continued. Although this was done through a post-deployment script, I added a transaction around the whole script (putting everything from pre-deployment to post-deployment in a single transaction) and committed at the end, making it an all-or-nothing commit.

June 19, 2017 10:23 PM
