This is the blog of Jamie Thomson, a freelance data mangler in London
There’s a very nifty feature of T-SQL that, in my experience, not everyone knows about but is handy for generating surrogate key values as part of an ETL routine. I myself had to ask on Twitter what the syntax was because I couldn’t remember it (thanks to Matija Lah for reminding me) so I think it’s worth blogging in case I forget it again.
To summarize the feature, you can use a variable in the expression of the SET clause in an UPDATE statement while updating it at the same time. The syntax is:
@variable = column = expression
where expression can include @variable.
It will probably crystallise in your brain if you see it in action, so here is a little demo:
/*Create a table and populate with data*/
DROP TABLE [#DimProduct]; --will fail first time you run the script, so just run it again
CREATE TABLE [#DimProduct] ([ProductId] INT, [ProductCode] VARCHAR(10));
DECLARE @i INT = 0;
INSERT [#DimProduct]([ProductId],[ProductCode]) VALUES (1,'qwerty'); /*Simulating an existing record*/
WHILE @i < 10
BEGIN
   /*Simulating some new rows for which we want to generate a surrogate key*/
   INSERT [#DimProduct]([ProductCode]) VALUES ('Product' + CAST(@i AS VARCHAR(2)));
   SET @i += 1;
END
/*Take a look at the data*/
SELECT * FROM [#DimProduct];
/*Generate surrogate keys using an UPDATE*/
DECLARE @MaxSurrogateKey INT = (SELECT MAX([ProductId]) FROM [#DimProduct]);
UPDATE [#DimProduct]
SET @MaxSurrogateKey = [ProductId] = @MaxSurrogateKey + 1 /****The clever bit****/
WHERE [ProductId] IS NULL;
/*Take another look at the data*/
SELECT * FROM [#DimProduct];
Here’s the output:
Notice that our UPDATE statement has generated contiguous surrogate key values for all rows that previously had a NULL surrogate key. Pretty neat huh? Well, I think so anyway!
In March 2012 I launched an initiative called AdventureWorks on Azure in which I hosted the AdventureWorks2012 sample database on Azure so that the SQL Server community at large could use it to get familiar with what
Windows Azure SQL Database (formerly known as SQL Azure) had to offer. I asked the SQL community to support the initiative by donating money to keep it afloat and that community responded in spades, to the point where last month I was able to make a donation to the War Child charity on behalf of that community. In the interim period I added more sample data to the pot, making Northwind available, and also data related to Olympics medal tables down the years.
Today AdventureWorks on Azure takes on a new guise: Steve Jones of SQL Server Central fame announced via a press release that hosting of the database is now to be taken on by his employer, Red Gate Software. Here’s what Steve had to say in today’s announcement:
One of the goals of Red Gate Software is that we not only sell products to the SQL Server community, but that we also participate in the community. In line with our goals of participating in the community, we have agreed to host the AdventureWorks 2012 sample database on the Azure platform.
Red Gate has worked to transition this set of databases to our own subscription and will assume the financial responsibility for making this available for the community to use.
This is great news. I no longer have to worry about finding the funds to keep the initiative afloat and the community can feel comfortable that this will be around for the foreseeable future, so thank you to Steve and Red Gate for this undertaking. Special thanks must also go to David Atkinson at Red Gate who first suggested that they take this on.
I have high hopes that Steve & Red Gate will build upon this with more offerings for the community and equally I also hope I can stay involved somehow. Watch this space!
It’s been a couple of weeks since SQLBits XI happened in Nottingham and I thought I’d jot down a few thoughts for posterity.
First, the venue. I think it’s fair to say that the overall consensus was that the East Midlands Conference Centre was the best SQLBits venue there has been so far – I’ve been to more than half of them and based on my experience I would agree with that sentiment. The hotel especially was top quality – I was pleased with my room and the breakfasts were way better than the Travelodge standards I’ve become accustomed to at such events. Perhaps the great weather over the weekend helped lift the spirits but I’d say the bar has been set high; I hope future SQLBits conferences are at venues of a similar standard.
You may have noticed that many of my blog posts over the past year have been related to SSDT and my SQLBits sessions this time around followed that trend. I delivered a session on the Friday, jointly with Craig Ottley-Thistlethwaite, entitled “Real World SSDT” and also delivered my first ever day-long pre-conference seminar on the Thursday entitled “SSDT Database Projects from the ground up”. Delivering a pre-con was slightly petrifying and I spent a large part of the previous three months preparing for it; I hope it was worth it. I had 30 attendees, which I was delighted with (especially given this was my first pre-con), and we established that the furthest anyone travelled was from Romania, though I’ll assume it wasn’t just so he could come to my pre-con. I had some pleasant feedback via Twitter afterwards from some of the attendees and I’m crossing my fingers that the official feedback is in a similar vein.
The session I did with Craig went fairly well, I thought. I did the first 30 minutes, where I covered the basics of SSDT deployment from a high level, before Craig ratcheted the complexity up a few notches by demoing some interaction between SSDT, MSBuild & Git – really great stuff for those who like to get into the nuts and bolts of this stuff. I was delighted that Craig was willing to do the session with me (in fact it was his idea) as this was his debut public speaking gig and I’m hoping it’s given him the desire to do more in the future.
On the Saturday my wife’s uncle, John Milne, came to the conference. John has been working in customer service for years but of late has decided that he wants a change of career and to that end has been studying an Open University course in IT. He told me he had particularly enjoyed the database-focused modules of his course and hence I suggested he come to the free Saturday of SQLBits to try and get a flavour of what the industry is all about and perhaps learn about some real-world experiences to add to his academic travails. By the end of the day John told me he’d had a fantastic time, learned a lot, and was hooked. Mission accomplished, I’d say. John lives in Leeds so I introduced him to Chris Testa-O’Neill who helps to run the Leeds user group and John should be going along to some user group events in the near future – if you happen to meet him there please welcome him into the fold!
All that remains for me to say is a massive thank you to the SQLBits committee who do such an amazing job, all voluntarily, in putting this all together. Thank you Simon Sabin, James Rowland-Jones, Chris Webb, Darren Green, Allan Mitchell, Tim Kent, Chris Testa-O’Neill & Martin Bell. I also want to thank all of the volunteer SQLBits helpers that worked tirelessly on the weekend to make sure the whole thing ran smoothly.
Bring on SQLBits XII!
Regular readers of my blog might have realised that I am a huge advocate of subscribable calendars and the data format that underpins them – iCalendar. On 8th Feb 2012 I wrote a blog post entitled SQLBits now publishing all SQLBits agendas as an iCalendar in which I told how the SQLBits committee had published the agenda of the forthcoming SQLBits conference in iCalendar format, along with instructions on how one could view the agenda on their phone. Back then I said:
…any changes to the SQLBits agenda (e.g. a room change) will automatically flow to your own calendar service and if you have that calendar service (e.g. Hotmail Calendar, Google Calendar) synced to your phone then the changes will automatically show up there too … That new SQLBits subscribable calendar lives at http://sqlbits.com/information/SQLBitsCalendar.ashx; note how it is not specific to a particular conference - subscribe to (don't import) that calendar and the agenda for future SQLBits conferences will automatically flow to you too.
Sure enough I took a look at the calendar on my phone today and saw this:
(That “Real world SSDT” session at 14:40 is being presented by Craig Ottley-Thistlethwaite and myself by the way. Hope to see you there!)
This is the value of subscribing as opposed to importing. The agenda for next week’s conference has already flowed to my phone without my having to do anything. This isn’t even the same phone that I had a year ago; by subscribing to the calendar in my Outlook.com (née Hotmail) Calendar those subscriptions are stored and flow onto any new phone as soon as I type in my credentials.
I have stated before that I believe subscribable calendars to be a transformative technology and this is why: I only had to subscribe to the calendar once and data that didn’t even exist back then simply flows into my calendar and thus onto my phone. If this interests you then maybe read how I think the same technology could be used to deliver BI data too at Thinking differently about BI delivery. And if you want to subscribe to the calendar yourself go and read the aforementioned blog post; that link again: SQLBits now publishing all SQLBits agendas as an iCalendar.
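For the uninitiated, an iCalendar feed is just a plain-text file that your calendar service re-fetches periodically. A single session in such a feed might look something like the sketch below (the UID, times and room are invented for illustration, not taken from the real SQLBits feed):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//SQLBits//Agenda//EN
BEGIN:VEVENT
UID:session-0001@sqlbits.com
DTSTAMP:20130426T090000Z
DTSTART:20130504T134000Z
DTEND:20130504T154000Z
SUMMARY:Real world SSDT
LOCATION:Room 2
END:VEVENT
END:VCALENDAR
END
```

Because you subscribe to the URL rather than import the file, any change the publisher makes to an event (a room change, say) simply appears in your calendar on the next refresh.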
SQL Server Data Tools (SSDT) has a neat feature where you can add snippets into your scripts via the right-click context menu:
I’m finding it very useful indeed. The same feature exists in SQL Server Management Studio (SSMS) as well by the way:
One thing I really wanted to be able to do was create my own snippets for SSDT and I ventured to the SSDT forum to ask if it was possible. Turns out the answer is “yes” and Gert Drapers replied to my thread with an excellent run-through of how to do it. What Gert’s post didn’t quite clarify is that if you follow his instructions you then have to manually edit the .snippet file that you created so that the language is set to SQL_SSDT:
(well, I had to do that anyway)
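For illustration, here’s a skeleton .snippet file with the Language attribute set to SQL_SSDT. The title, shortcut and snippet body are placeholders of my own invention, not Gert’s actual example – only the Language="SQL_SSDT" bit is the point:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Select top 100</Title>
      <Shortcut>sel100</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>tablename</ID>
          <Default>dbo.MyTable</Default>
        </Literal>
      </Declarations>
      <!-- Language must be SQL_SSDT for the snippet to show up in SSDT -->
      <Code Language="SQL_SSDT">
        <![CDATA[SELECT TOP 100 * FROM $tablename$;]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```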
Once you do that you’ll be able to import the snippet into Visual Studio (Gert’s post shows you how) and thereafter your snippets will show up in the snippets menu:
Very handy indeed.
Earlier today I posted the following question on Twitter:
Foreign keys in a data warehouse. Yes or no? This discussion is looming at work, i know my position but am interested in what others think.
Specifically, I wanted to know if people were in favour of creating foreign keys from their fact tables to their dimension tables or not.
To say it prompted a good response is a bit of an understatement – 38 responses so far and counting. Here are some of those responses:
It certainly seems to be an emotive subject and it’s clear (to me) that there’s no correct answer, just lots of opinions. That’s a good thing. The majority of responders appeared to be of the opinion that a data warehouse should contain foreign keys, and that is my position too. In this blog post I want to outline why I believe that one should create foreign keys from a fact table to its dimension tables:
Protect data integrity
Of course, this is the main reason why foreign keys exist – to protect the integrity of your data. I see no reason not to use them for this purpose in a data warehouse. The main argument that I see going against them is that with a sufficiently robust ETL solution they shouldn’t be necessary. That is true, but I would counter with “how do you know that your ETL solution is suitably robust?” I don’t think it’s possible to anticipate every eventuality that may arise and for that reason I like the safety net that foreign keys provide. I liked Devin Knight’s response here: foreign keys breed confidence.
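As a sketch, a typical fact-to-dimension foreign key looks like this (the table and column names here are invented for illustration):

```sql
-- Any ProductKey loaded into the fact table must exist in the dimension;
-- the INSERT below would fail rather than silently orphan the fact row.
ALTER TABLE [dbo].[FactSales]
    ADD CONSTRAINT [FK_FactSales_DimProduct]
    FOREIGN KEY ([ProductKey])
    REFERENCES [dbo].[DimProduct] ([ProductKey]);
```

That constraint is exactly the safety net referred to above: if the ETL ever tries to load a fact row whose ProductKey is missing from the dimension, the load fails loudly instead of corrupting the warehouse.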
Communicate business logic
When I join a project that has a database in place the first thing I do is try and understand the data model – to do that I go and look at the foreign keys in that database. Understanding the dependencies between entities is crucial in any data model and the best means of communicating those is via foreign keys. If I encounter a database that is bereft of foreign keys then my heart sinks a little.
Improve query performance
Foreign keys can, in some circumstances, be beneficial in improving query performance. Take a read of A Check and Foreign Key Constraint Improves Query Performance by Sarvesh Singh or Do Foreign Key Constraints Help Performance? by Grant Fritchey.
Aid code generation
I am a big fan of generating ETL code where possible and foreign keys can be invaluable when doing so.
As I said there is no correct answer here so if you have any opinions, either agreeing or disagreeing, I look forward to reading your thoughts in the comments below.
Mark Stacey’s comment prompted an interesting digression into talking about surrogate keys for denoting unknown members and this is something I have strong opinions on too:
In short, I don’t like the practice of using “–1” as the surrogate key for an unknown member. My reasoning is simple: I don’t like giving meaning to something that is supposed to be meaningless. How then should we indicate which is the unknown member? I propose a single-row table that includes a column for each dimension table, each with a foreign key to the unknown member in the respective dimension table.
Moreover, I don’t like the practice of starting surrogate key counters from 1; the first value available for the INT datatype in SQL Server is –2147483648, so why not start from that?
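To sketch both ideas together (all names invented for illustration): seed the surrogate key from the bottom of the INT range, and record which row is the unknown member in a dedicated single-row table instead of hard-coding a magic value like –1:

```sql
-- Surrogate key seeded from the bottom of the INT range
CREATE TABLE [dbo].[DimProduct] (
    [ProductKey]  INT IDENTITY(-2147483648, 1) PRIMARY KEY,
    [ProductCode] VARCHAR(10) NOT NULL
);

-- Single-row table: one column per dimension, each pointing at that
-- dimension's unknown member. No surrogate key value carries meaning.
CREATE TABLE [dbo].[UnknownMembers] (
    [ProductKey] INT NOT NULL
        REFERENCES [dbo].[DimProduct] ([ProductKey])
);
```

The ETL then looks up the unknown member via [UnknownMembers] rather than assuming any particular key value.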
I discuss both of these issues in much more depth at Considering surrogate keys for Unknown members.
Again if you have any thoughts on these subjects please put them in the comments. If nothing else I find it both fun and educational to debate this stuff.
Update: Chris Adkin posted a comment below containing a link to Microsoft's own guidance on building data warehouses, in which it is stated:
"Many physical designs for data warehouses follow the star schema but do not completely specify the relationships between the fact and dimension tables, as mentioned earlier for foreign key constraints, for instance. Without the foreign key constraints explicitly specified, SQL Server must depend on heuristics to detect star schema query patterns."
Chris' take on this:
"So, assuming we are talking about a Kimball DW, there is most definitely value in using foreign key contstraints as this provides a fail safe for the heuristics getting it wrong."
I didn’t travel to the SQL Pass Business Analytics conference this week but I keenly followed what was going on via the #passbac hashtag on Twitter. Seemingly the big announcement was Geoflow Preview for Excel 2013, an add-in for Excel that visualises data over space and time (read more at Public preview of project codename “GeoFlow” for Excel delivers 3D data visualization and storytelling). Geoflow certainly looks compelling at first glance though I must say I found it rather strange that it got top billing given that Microsoft were talking about it at some SharePoint conference five months ago but nonetheless the keynote demo was apparently very impressive indeed. Here’s a screenshot of Geoflow:
I think Geoflow looks great, I really do; the questions I immediately had about it were:
- Can I share my Geoflow’d Excel workbooks and have people view them on SharePoint? Answer: No, there’s no SharePoint collaboration story.
- Is Geoflow part of Power View? Answer: No, it’s a separate installation.
- OK so I have to install it. I presume then that it’s available in the Office App Store, given that’s the new model for distributing Excel add-ins? Answer: No, you have to download it from Microsoft’s download site.
So the big reveal from Microsoft at this conference was an Excel add-in that does something very very cool but had already been announced, is only a preview, doesn’t fit with Microsoft’s BI collaboration strategy, doesn’t use their modern distribution platform and isn’t part of their Excel-based data visualisation tool. Well thank god I didn’t stump up the cost of travel, accommodation, loss of income and time away from the family for that! Doubtless there were a lot of other good reasons to go to the conference but I would have been going with high expectations of news from Microsoft that is going to be compelling and help me sell Microsoft’s BI offering to my clients – Geoflow doesn’t do that, not by a long chalk.
What I was hoping for, nay expecting, was a concrete announcement regarding Microsoft’s mobile BI strategy. We first saw Power View demonstrated on an iPad at the PASS 2011 conference and I assumed that in the intervening eighteen months they might have built something we could actually install and play around with. Apparently not.
Microsoft are getting killed in this area. At my current client all the management folk walk around with iPads glued to their hands – this is the tool on which they consume information and Microsoft doesn’t have anything for them. I was working for a client two years ago that had just invested in a product called RoamBI because it enabled them to view Reporting Services reports on an iPad. Two years ago for pity’s sake, and Microsoft haven’t released anything mobile-BI-related since!
I recently stumbled across a nuance of the SSIS expression language which, when you think about it, kinda makes sense – but it does help to be aware of it. It’s concerned with casting of datetime values using the SSIS expression language.
Take an expression that casts a datetime value to a string, for example:
(DT_WSTR, 30) @[System::ContainerStartTime]
If I evaluate that with my OS Regional Settings set to English (United Kingdom) I see this:
If I set my OS Regional Settings to English (United States) I see this:
Note how that simple change to the regional settings has caused the result of my expression to change. This could have dangerous consequences; for example, if you are using the result of this expression in a dynamically built SQL statement (as I was) then one of two things will happen: either you will get the wrong result or you’ll get an error. Observe how, in my dynamically built SQL statement, I’m CONVERTing a string literal (which is constructed using the above expression) to a datetime value:
however with a simple change of my regional settings to English (United States) I see this:
and when you run that particular SQL statement in SSMS:
it blows up!
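You can reproduce the failure mode in SSMS without involving SSIS at all. CONVERT without an explicit style honours the session language’s date format, so the same string literal parses under a d/m/y language and blows up under us_english (the date here is my own made-up example):

```sql
SET LANGUAGE British;
SELECT CONVERT(DATETIME, '25/04/2013');  -- read as d/m/y: succeeds

SET LANGUAGE us_english;
SELECT CONVERT(DATETIME, '25/04/2013');  -- read as m/d/y: month 25 is
                                         -- out of range, so this errors
```

This is exactly why a locale-dependent string embedded in dynamic SQL is a time bomb.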
Definitely one to be aware of! Watch those Regional Settings and their effect on the casting of dates in the SSIS expression language!
What should you do instead?
If you need a failsafe way of constructing a date that doesn’t rely on Regional settings then consider something like the following:
(DT_WSTR,4)YEAR( @[System::ContainerStartTime] ) + "-" +
RIGHT("0" + (DT_WSTR,2)MONTH( @[System::ContainerStartTime] ), 2) + "-" +
RIGHT("0" + (DT_WSTR,2)DAY(@[System::ContainerStartTime] ), 2)
That expression will build a date string in the format YYYY-MM-DD (the unambiguous, ISO 8601 way of representing a date) regardless of Regional Settings:
Just over 12 months ago I published a blog post entitled AdventureWorks2012 now available for all on SQL Azure. In it I explained that I had set up a Windows Azure SQL Database (then known as SQL Azure database) for the SQL Server community to use and hence familiarise themselves with the SQL Azure offering – I called this initiative “AdventureWorks on Azure”. Judging by the comments that were left in the table that I set up for that very purpose it seems that a number of people were in favour and made use of it:
The credentials to enable you to connect up and leave your own comment are in that same blog post!
Running a SQL Azure database is of course not free, hence I asked the community to support the initiative by contributing to its upkeep via PayPal. I hoped that there might be enough left over to make a charitable donation and, as you have probably gathered from the title of this blog post, I’m delighted to be able to say that that did indeed happen. A few minutes ago I made a contribution of £351.49 to War Child (the same charity that the SQL Server MVP Deep Dives book donated to).
What do War Child do? In their own words:
We look forward to a world in which children's lives aren't torn apart by war.
There's 27 of us in an old false-teeth factory in north London.
We're trying to change the world.
You could say we're a small charity with big ambitions.
We're directly transforming the lives of tens of thousands of children. And we're campaigning to improve the lives of millions more.
Thank you to everyone that made a donation especially to the extremely generous person (who shall remain nameless – you know who you are) that contributed $105.
I would like to keep AdventureWorks on Azure going for another year at least but to do that I need more donations. If you would like to support this initiative for another year take a read of the instructions below that I have copied from last year’s blog post:
AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to email@example.com:
Any amount, no matter how small, will help.
Thank you once again to everyone that donated!
In the same way that a map is a natural choice for displaying data with a geography element to it, I’ve long opined that a calendar is a natural choice for displaying data with a time element to it; my main output on this topic is at Thinking differently about BI delivery.
With that in mind I recently watched an interesting talk by James Whittaker entitled A New Era of Computing in which he opined that the era of searching and browsing for information is dying and we are now moving into an era of doing. It’s an interesting talk and if you have an hour to spare it might well be worth watching; you can find a video of the talk here: http://channel9.msdn.com/Events/ALM-Summit/ALM-Summit-3/A-New-Era-of-Computing
James gave an example of using a calendar as a canvas for booking a holiday and it really struck a chord with me. In this hypothetical example the steps of finding an appropriate time for a holiday, clearing the decks of all other appointments, finding flights and activities….they were all done within the context of a calendar. There were no 3rd party apps involved, no web pages – the calendar was the canvas upon which all of these tasks were done. To someone like me who strongly believes that calendars are massively underutilized as a means for displaying information the notion that a calendar could also be used to get stuff done was both liberating and illuminating.
If, like Scott Adams and me, you agree that calendars are criminally undervalued in an era of information discovery then you could do a lot worse than spend an hour watching James’ presentation. Really thought-provoking stuff.
One of the nice things about doing user group presentations is that when you’re putting the presentation together you invariably learn about features that were previously unbeknown to you; so it proved as I stumbled upon SSDT’s database drift detection features while researching material for my forthcoming pre-conference seminar SSDT from the ground up.
What is database drift?
You have probably experienced database drift, you just didn’t happen to refer to it as that. More likely you might have spluttered the following, perhaps sprinkled with a few expletives:
- “Who put these tables in my database?”
- “Who changed this view definition?”
- “Why is this guy in db_owner?”
- “Where has my stored procedure gone?”
In other words database drift can loosely be described as
stuff that appears, gets removed, or gets modified in your production databases that perhaps shouldn’t be
Detecting database drift using SSDT
If you’re using SSDT to manage your database schema then you probably consider the source code in your SSDT projects to be “the truth” and hence anything that appears in your databases that is not in your source code would be considered database drift.
In order to detect database drift using SSDT you must ensure that your database is registered as a Data-Tier Application. This can be done when you publish your database project (i.e. dacpac) by selecting “Register as a Data-tier Application”:
Thereafter you can check for database drift on subsequent publishes by selecting “Block publish when database has drifted from registered version”:
If you check that box and database drift has occurred then the publish operation will fail and you see an appropriate message in the Data Tools Operations pane, “Publish stopped. The target database has drifted from the registered version.”:
Clicking the View Report hyperlink displays the Drift Report which is represented in an XML file:
<?xml version="1.0" encoding="utf-8"?>
<Object Name="[View_1]" Parent="[dbo]" Type="SqlView" />
<ExtendedProperty HostName="[View_1]" HostParent="[dbo]" HostType="SqlView" Count="2" />
In this case a view called [dbo].[View_1] has been added to the target database. That view did not exist in the dacpac that was most recently deployed against the database thus the publish operation fails. Keeping one’s deployed databases as “clean” as possible is something that I am all in favour of so personally I think this is a pretty cool feature.
Generating a drift report from the command-line
The drift report can be generated by the command-line tool sqlpackage.exe. To do so you need to define:
- the action to be DriftReport
- a target server & database
- an output file
>SqlPackage.exe /A:DriftReport /tsn:"(localdb)\Projects" /tdn:"Database1" /op:DriftReport.xml
As far as I know there is no support for generating a drift report from SQL Server Management Studio (SSMS). I’m hoping that changes so that this feature gets more visibility.
If you have any comments stick them in the comments section below!
In March 2012 I published SSDT - What's in a name? where I lamented the mistakes that Microsoft made in the naming of SQL Server Data Tools.
…official documentation stating that SSDT includes all the stuff for building SSIS/SSAS/SSRS solutions (this is confirmed in the installer, remember) yet someone from Microsoft tells him "SSDT doesn't include any BIDs components".
I have been close to this for a long time (all the way through the CTPs) so I can kind of understand where the confusion stems from. To my understanding SSDT was originally the name of the database dev stuff but eventually that got expanded to include all of the dev tools - I guess not everyone in Microsoft got the memo.
Since then I’ve seen lots of questions pertaining to SSIS/SSAS/SSRS being posted on the SSDT forum on MSDN which, frankly, is the wrong place for them.
With the release of the SSIS/SSAS/SSRS project templates for Visual Studio 2012 Microsoft have attempted to clear up the confusion. Matt Masson from the SSIS product team attempts to explain in his usual jovial way:
note that the component was renamed – we added “Business Intelligence” to the end to distinguish it from the SQL Server Data Tools (Juneau). We now refer to it as SSDTBI, rather than “SSDT, no, not that one, the other one – you know, the one that comes with the SQL installation media, not the one you download”.
So to clarify, it’s now:
- SSDT – for building databases
- SSDTBI – for building SSIS/SSAS/SSRS solutions
Got it? Good!
That may all seem slightly confusing but it’s a darn sight clearer than it was when SQL Server 2012 was released over a year ago. And if nothing else you have to be amused by Microsoft’s penchant for ever-lengthening acronyms; the last six-letter acronym I can remember seeing was BODMAS!
This is something I’ve been meaning to blog about for ages but it kept slipping my mind, sorry about that!
Wee Hyong Tok from the SSIS product team has built a slide deck that covers some of the deep secrets of SSIS 2012, including:
- Catalog deep dive
- Low level monitoring and troubleshooting
The slide deck is available to view online at https://skydrive.live.com/redir?resid=BB8E1FF2CE0CD545!252&authkey=!AMFavRXK0aVq314.
As it’s a slide deck it’s not quite as good as hearing from the presenter himself; however, there’s still some really useful information in here. If the internals of SSIS float your boat then click through and take a look (there are only 22 slides).
As you may have realised from much of my blogging over the past year or so I’m an advocate of using SSDT database projects for building database solutions on SQL Server. I have been using SSDT database projects a lot in that time and have come up with a checklist of things to consider when starting a new SSDT database project and I’ll be detailing that checklist below. I strongly advise you to consider making decisions about these items before you even write a scrap of code as invariably it will be more difficult to change later, especially if you have already deployed your database.
In no particular order here is my checklist:
Folder structure
By default SSDT will not provide a folder structure for new projects in which to store all your script files so you might want to consider setting one up yourself. I recommend not trying to create a large hierarchy of folders to start with as this will evolve as you go about building your database. Here’s a simple starter for ten:
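Something along these lines works well as a starting point (the folder names here are just a suggestion of mine, not a prescription):

```
MyDatabase.sqlproj
    Scripts        (pre- and post-deployment scripts)
    Security       (schemas, roles, users)
    Storage        (filegroups)
    Tables
    Views
```

Keep it shallow to begin with; you can always split folders out as the object count grows.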
Filegroups
If you don’t understand the importance of filegroups then read Thomas Larock’s recent blog post DATABASE FILEGROUPS: JUST LIKE SEATBELTS BUT WITH LESS CHAFING. Quite often your organisation will stipulate policies for which filegroups you should be using and I highly recommend that you set up filegroups in your database projects to match those policies sooner rather than later. If your organisation does not have any such policies then perhaps think about defining them yourself; Thomas lists a few considerations that may influence your decisions:
- separation of system data from user data
- larger indexes may benefit from their own filegroup
- archival of data can benefit from using dedicated filegroups as this will reduce backup maintenance tasks
If you follow my suggested folder structure above then put your filegroups into the “Storage” folder:
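In an SSDT project a filegroup is just another scripted object; a file in the “Storage” folder containing something like the following declares one (the filegroup name is invented for illustration – note the $(DatabaseName) SQLCMD variable that SSDT substitutes at publish time):

```sql
-- Storage\FG_UserData.sql
ALTER DATABASE [$(DatabaseName)]
    ADD FILEGROUP [UserData];
```

Tables and indexes elsewhere in the project can then specify ON [UserData] and the publish operation takes care of the rest.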
Above all, specify your default filegroup, especially as many DBAs won’t be happy about you putting objects into the PRIMARY filegroup (which is the default). This is done by right-clicking on the project and selecting “Properties…”. Once there hit “Database Settings…” and set your default filegroup on the “Operational” tab:
If you intend to use the filestream feature of SQL Server then you can set the default filestream filegroup here too.
Collation
If you take only one piece of advice from this blog post make it this: set your collation before you write a single line of code (just trust me on this, OK). You’ll find this in Project Properties->Project Settings->Database Settings->Common.
Again, find out if your organisation has any guidance on database collation.
Specify Target platform
SSDT allows you to deploy a project to SQL Server 2005, 2008, 2012 or Azure so it stands to reason that you should specify which you are intending to deploy to. You’ll find this in Project Properties->Project Settings:
Change the default schema
If you follow the premise that “anything that SQL Server picks as the default is probably wrong” then you should probably think about changing the default schema which, by default, will be [dbo]. SSDT will create all new objects in the default schema (unless otherwise specified).
Generally I’m of the opinion that not accepting SQL Server’s defaults is a good idea if only because it forces you to think about these things and be aware of them; in the case of schemas it forces you to think about security and who should have GRANT or DENY permissions and on what.
Specify the default schema in Project properties->Project Settings:
Review other database settings
I’ve mentioned the most important database settings that you should look to change, however you should also glance over the rest of the defaults that SSDT chooses for you to verify that they are valid for your project. Again, these are in Project Properties->Project Settings->Database Settings. A few I’d call out as being especially worthy of your attention are:
- Auto update stats (on by default)
- Recovery model (FULL by default)
- Transaction Isolation (RCSI off by default)
- Service broker (disabled by default)
SSDT always checks the syntax of your database code; that’s one of the main justifications for using SSDT database projects. However what it does not do (not by default anyway) is check whether the code you write might be considered good code. That’s what Code Analysis is for: your code gets checked to see if it adheres to well-understood good practices for SQL Server database development. For example, Code Analysis will check for use of “SELECT *” which is generally regarded as a bad thing.
Code analysis is turned off by default. I recommend that you head into Project Properties->Code Analysis and turn it on. Moreover I recommend that you check all the boxes under “Treat Warning as Error” – this forces developers to address issues that get raised by Code Analysis.
Add a banner to your Pre-Deployment script as I describe in Big label generator. This may seem rather unnecessary but it takes about 30 seconds and believe me, at some point you’ll be glad you did!
The demo project that I took some of the above screenshots from is downloadable from http://sdrv.ms/ZW0gNt and might provide a useful template for your own SSDT database projects.
If you have any more suggestions for my checklist please add them to the comments below!
When you install the database projects template of SSDT you get SQL Server Object Explorer (SSOX) installed as well. SSOX is a pane within Visual Studio and is the main enabler of the Connected Development experience that the SSDT team have attempted to provide.
SSOX provides some really cool capabilities that are not in SQL Server Management Studio (I hope to blog about them in the near future). In theory these capabilities make it possible for a database developer to spend all their time in SSDT (i.e. Visual Studio), thus making SSMS a pure-play DBA tool (this does of course depend on your definition of both a database developer and a DBA, but I’m not getting into that debate here).
With that in mind I have spent a few days trying to work without SSMS, preferring to live wholly inside Visual Studio instead. By and large I was able to do everything I needed to do from within Visual Studio; however, there were a few nuances about the experience that kept pushing me back to SSMS, and I detail those nuances below.
SSOX combines the functions of SSMS’s Object Explorer and Registered Servers pane. I don’t mind either way of working, but it does mean that there is no ability to group servers in SSOX like you can in the Registered Servers pane.
In SSMS I regularly use the F6 keyboard shortcut to jump between the query, results & messages panes of a query window. No such keyboard shortcut exists in SSDT and they’ve already canned my request on Connect to get this fixed (even though it laughably has status “closed as fixed”).
UPDATE: See the comments below where Brett Gerhardi informed me of a different keyboard shortcut that does the same thing as F6. Actually it’s not quite the same: if you have multiple resultsets in your results pane then the behaviour is slightly different to F6 in SSMS, but that’s not an issue you’ll hit frequently.
The context menu in SSMS provides the ability to change a connection as well as connect and disconnect:
SSDT doesn’t have change connection and believe me, you don’t know how much you use a feature until it’s not there:
There’s also no hotkey to jump to “Connection” on the context menu like there is in SSMS (“C”) and I find that annoying too.
Those were the main annoyances that forced me back to SSMS. The lack of F6 was a major bugbear for me as I am a big keyboard shortcut junkie. If such things don’t bother you then you may be able to live in Visual Studio quite happily. If you have any similar experiences to share I’d be keen to read them.