SQLblog.com – The SQL Server blog spot on the web

Jamie Thomson

This is the blog of Jamie Thomson, a freelance data mangler in London

  • Want to prove something with data? Aggregate it!

    Recently Microsoft announced that they’re releasing a new XBox, which was apparently big news and was reported on at length around the globe. One article in particular on the BBC News website caught my attention because it contained what I thought to be some really bogus reporting:

    “from first appearances tech writers were more impressed by what they saw than the public posting on social media.”

    Twitter:
    Every time you say Xbox One, you feel like to have to add the word "new" in front of it to avoid confusion.
    @MurraySwe
    The one thing I came away with was that Microsoft has trouble with numbering. @FinleyNathan
    Am I the only one who thinks the new Xbox One looks like a VCR? @dandahlberg

    Now perhaps it’s true that reaction on social media was negative but the BBC’s attempt to justify that claim by cherry-picking some random tweets from a sea of millions is, in my opinion, misleading, lazy and pointless. And I said so:

    image
    https://twitter.com/jamiet/status/337151547953778688

    So here’s an idea. Instead of picking random tweets what if the BBC attempted to gauge opinion by analysing the overall sentiment of those tweets instead? It’s not even that difficult; there are free services such as What does the internet think that do this for you. Here’s a screenshot I took from that service at the same time as I was reading the BBC report:

    image
    http://www.whatdoestheinternetthink.net/xbox

    What does the internet think is doing something smart but ostensibly very simple – it is aggregating tweets in order to give a better reflection of the sentiment toward “xbox”. Now imagine if the BBC reporter that wrote the above article had chosen to measure the sentiment before and after the announcement; would that not have given a better reflection of the reaction from social media than a few random tweets? It’s certainly more interesting and newsworthy to me; I couldn’t care less about the opinions of individuals that I have never met but measuring the overall sentiment – well, that means something. Moreover, I’d like to know if the announcement has affected that sentiment positively or negatively. As any data analyst (or BI guy) will tell you it’s not the numbers themselves that are important, it’s whether they are trending up or down that matters more. Now what I would love is a service that showed you the trend of sentiment on social media over a given period of time so that I could look back historically; actually, collecting that data seems like a great use case for a cloud-based ETL tool. I wonder if I could build it with Datasift?
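
    Aggregation like this is bread and butter for us data folk. Here’s a minimal T-SQL sketch of the sort of thing I mean, assuming a hypothetical table of tweets that some service has already scored for sentiment (the table, columns and date are mine, purely for illustration):

    /*Hypothetical table of tweets that have already been scored for sentiment*/
    CREATE TABLE [#ScoredTweets] (
      [TweetedAt]      DATETIME     NOT NULL,
      [SentimentScore] DECIMAL(4,3) NOT NULL /*-1.0 = negative, +1.0 = positive*/
    );
    /*Average sentiment per day, flagged as before or after the announcement*/
    DECLARE @AnnouncementDate DATE = '2013-05-21';
    SELECT  CAST([TweetedAt] AS DATE)  AS [Day],
            AVG([SentimentScore])      AS [AvgSentiment],
            CASE WHEN CAST([TweetedAt] AS DATE) < @AnnouncementDate
                 THEN 'Before' ELSE 'After' END AS [Period]
    FROM    [#ScoredTweets]
    GROUP BY CAST([TweetedAt] AS DATE)
    ORDER BY [Day];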


    Interestingly, in between writing the bulk of this blog post and hitting publish on it I stumbled across How The Xbox Reveal Affected Microsoft's Flagging Reputation on Forbes.com. In it the author, Haydn Shaughnessy, opines:

    At the time of the reveal I … said Microsoft had a store of public trust that could help develop the Xbox One market, and should be nurtured across its other brands. Microsoft has managed the Xbox reveal very well, despite criticism the following day and more recently. Optimism and the reputation for innovation have been enhanced

    How did Mr Shaughnessy arrive at this conclusion? He did exactly what I suggest above, he aggregated and analysed sentiment from social media:

    I arrived at that conclusion by examining big data around Microsoft sentiment. The data is drawn from  tens of thousands of news and social media sources and is filtered by “emotion”.

    He also used a graph to show the trend over time. As they say, a picture is worth a thousand words:

    image

    The chart depicts an uptick of optimism toward Microsoft after the XBox One announcement.

    Take a read of the article for more insights; it really is very interesting. I’m very impressed with what Mr Shaughnessy and Forbes have done here. They sought out evidence based on sentiment then analysed it to draw a conclusion rather than deferring to perceived popular opinion. The credibility of Forbes has risen in my estimation (and perhaps with anyone reading this blog post too) and I’ll probably seek out Forbes articles in the future. Certainly in preference to the BBC anyway.

    @Jamiet

  • Using SnagIt to take screenshots when using a Surface

    Update: since I wrote this post an update to the Surface means that simply pressing Fn + <Spacebar> fires up SnagIt. Job done.

    I recently bought a Surface Pro which so far I’m really enjoying, however one thing I’ve been having hassles with is taking screenshots for my blog posts. I’m a massive fan of TechSmith’s SnagIt for taking screenshots, however I’d always used it simply by hitting the Print Screen button on the keyboard – the Surface keyboard has no Print Screen button so this presented somewhat of a problem.

    I put a shout out on Twitter for keyboard shortcuts that would enable me to accomplish the same thing as pressing the Print Screen button. The suggestions that came back can be summarised as:

    • Run the on-screen keyboard and hit the print screen button there.
    • Use Windows’ built-in snipping tool (which has existed since Windows Vista)
    • OneNote has a screenshot facility that can be accessed by pressing Win+S
    • Hardware buttons. Hold the Windows button on the front of the Surface and press the volume down button (thanks to SQLGardner in the comments for reminding me of this one)

    The on-screen keyboard works fine and does indeed launch the SnagIt screen clipper exactly the same as pressing Print Screen on a real keyboard would do, however I found it a bit fiddly as you have to go and open the keyboard, then close it again afterwards. Moreover the on-screen keyboard is, well, on the screen – and you don’t really want that if you’re taking screenshots. Windows’ snipping tool looked like a great option at first until I realised that it can’t be used to take screen clippings of modern (aka Metro) apps – it simply doesn’t work for those and that was a showstopper for me. The OneNote option is a good one, especially as it has a very handy “copy to clipboard” button that appears once the screenshot is taken, however it does of course require you to have OneNote installed and that ain’t cheap! The hardware buttons option is quick and easy and great if all you want is a screenshot of the entire screen, but it doesn’t let you pick a portion of the screen and it doesn’t launch SnagIt.

    That summarises the options for taking screenshots, however only one of them allows me to carry on using SnagIt (which was my ultimate aim here) and that was the rather fiddly on-screen keyboard option. That’s when I remembered that SnagIt allows you to customise the hotkey that is used to take a screenshot. The default hotkey is (as explained in the first paragraph above) the Print Screen button so all I did was go and change that default – I changed it to CTRL+ALT+P which, to my knowledge, isn’t used by anything else in the OS. This seems to be working fine, so far. Here’s a screenshot taken just now on my Surface, using SnagIt, with some appropriate blur effects applied using SnagIt’s editor:

    image

    So there you have it, some options for taking screenshots on a Surface. As I said I’m a big fan of SnagIt but if you do not use SnagIt then I have outlined some other options in the bullet points above.

    One final thing, if you *do* use SnagIt then TechSmith have made available a useful video that explains how to use it on Windows 8 at http://www.techsmith.com/snagit.html although this doesn’t cover what to do if your keyboard doesn’t have a Print Screen button, hence this blog post :)

    Hope this helps.

    @Jamiet

    UPDATE, Whaddya know. Just 4 days after I wrote this, Microsoft announced you'll soon be able to take a screenshot using just the keyboard. Fn + <spacebar> will do the job apparently! I doubt this will trigger SnagIt for me, but I live in hope!

    Another UPDATE, Fn + <spacebar> does trigger SnagIt. Happy days! :)

  • Stored procedure debugging in SSDT and LocalDB

    Stored procedure debugging is a feature that has existed in SQL Server Management Studio (SSMS) for some time (since before SQL Server 2005 I believe) yet in my experience not many folks make use of it, indeed many folks aren’t even aware it exists. In this post I shall try and shine a light on what I believe is an underused feature in the SQL Server toolset.

    When I have attempted to use debugging within SSMS in the past I have invariably run into problems. I can remember as recently as December 2010 trying to debug a stored procedure on a remote server and it required ports to be opened up on the firewall and other such hoopla – that’s not a pleasant experience when all you want to do is hit F10 and start stepping through your code. Happily in SQL Server 2012 the feature showed up in SQL Server Data Tools (SSDT) which, in short, is a Visual Studio-based tool aimed squarely at developers rather than DBAs, and the advantage this brings is something called LocalDB. LocalDB is quite simply a free, lightweight edition of SQL Server that gets installed with SSDT, lives on your local box, and whose intended use is to help you write code (I do need to blog about LocalDB in more depth at some point). Because LocalDB is on your local machine, debugging T-SQL code there is as close as it’s going to get to “it just works”.

    Let’s take a look at how this works. I’m going to demonstrate by debugging stored procedures in an open source project I’m working on called SSIS RestartFramework; the source code (currently) lives at http://ssisrestartframework.codeplex.com/SourceControl/latest in case you want to download it and follow along at home.

    As with many features in SSDT the debugging feature is accessed via SQL Server Object Explorer (SSOX). From there one can right click on a stored procedure and select “Debug Procedure…”:

    image

    Selecting that option displays a dialog where one specifies values for the parameters of the stored procedure:

    image

    Clicking OK launches a query window with some scaffolding code that calls the stored procedure with the specified values:

    image
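
    In case you haven’t seen it, the scaffolding is just a short T-SQL batch along these lines (a sketch from memory rather than SSDT’s verbatim output; the parameter name and value here are hypothetical):

    DECLARE @ETLJobName NVARCHAR(256) = N'MyETLJob'; /*hypothetical parameter and value*/
    DECLARE @RC INT;
    /*F10 steps over this call; F11 steps into the stored procedure itself*/
    EXECUTE @RC = [RestartFramework].[PutETLJob] @ETLJobName = @ETLJobName;
    SELECT @RC AS [ReturnCode];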

    From there one can step through the code, executing each line in turn, using F10 & F11 just as if this were .Net code. Notice how, in the screenshot below, we are stepping through the code of the [RestartFramework].[PutETLJob] stored procedure which we stepped into (using F11) from the scaffolding code above.

    image

    Furthermore this screenshot depicts the Locals pane displaying the value of all in-scope variables and also the very cool ability to pin a variable (in this case @vRowCount) so that it “floats”, making it easy to see the value and also to see it changing.

    I should point out that you do not need to have your stored procedures in LocalDB in order to debug them using SSDT, you can connect to any instance of SQL Server from SSOX and then launch the debugging experience.

    This has been an overview of the T-SQL debugging features in SSDT. Let me know if it proves useful to you.

    @Jamiet

  • Increment a variable in the SET clause of an UPDATE statement [T-SQL]

    There’s a very nifty feature of T-SQL that, in my experience, not everyone knows about but which is handy for generating surrogate key values as part of an ETL routine. I myself had to ask on Twitter what the syntax was because I couldn’t remember it (thanks to Matija Lah for reminding me) so I think it’s worth blogging in case I forget it again.

    To summarize the feature, you can use a variable in the expression of the SET clause in an UPDATE statement while updating it at the same time. The syntax is:

    @variable = column = expression

    where expression can include @variable.

    It will probably crystallise in your brain if you see it in action, so here is a little demo:

    /*Create a table and populate with data*/
    IF OBJECT_ID('tempdb..#DimProduct') IS NOT NULL
        DROP TABLE [#DimProduct];
    CREATE TABLE [#DimProduct] ([ProductId] INT, [ProductCode] VARCHAR(10));
    INSERT [#DimProduct]([ProductId],[ProductCode])VALUES(1,'qwerty'); /*Simulating an existing record*/
    DECLARE @i INT = 0;
    WHILE @i < 10
    BEGIN
        /*Simulating some new rows for which we want to generate a surrogate key*/
        INSERT [#DimProduct]([ProductCode])VALUES(REPLACE('qwerty@i','@i',CAST(@i AS VARCHAR(2))));
        SET @i += 1;
    END
    /*Take a look at the data*/
    SELECT * FROM [#DimProduct];

    /*Generate surrogate keys using an UPDATE*/
    DECLARE @MaxSurrogateKey INT = (
        SELECT ISNULL(MAX([ProductId]),0)
        FROM   [#DimProduct]
    );
    UPDATE [#DimProduct]
    SET    @MaxSurrogateKey = [ProductId] = @MaxSurrogateKey + 1 /****The clever bit****/
    WHERE  [ProductId] IS NULL;

    /*Take another look at the data*/
    SELECT * FROM [#DimProduct];

    Here’s the output:

    image

    Notice that our UPDATE statement has generated contiguous surrogate key values for all rows that previously had a NULL surrogate key. Pretty neat huh? Well, I think so anyway!

    @jamiet

  • AdventureWorks on Azure now hosted by Red Gate software

    In March 2012 I launched an initiative called AdventureWorks on Azure in which I hosted the AdventureWorks2012 sample database on Azure so that the SQL Server community at large could use it to get familiar with what Windows Azure SQL Database (formerly known as SQL Azure) had to offer. I asked the SQL community to support the initiative by donating money to keep it afloat and that community responded in spades, to the point where last month I was able to make a donation to the War Child charity on behalf of that community. In the interim period I added more sample data to the pot: I made Northwind available, along with data for Olympics medal tables down the years.

    Today AdventureWorks on Azure takes on a new guise. Steve Jones of SQL Server Central fame announced via a press release that hosting of the database is now to be taken on by his employer, Red Gate Software. Here’s what Steve had to say in today’s announcement:

    One of the goals of Red Gate Software is that we not only sell products to the SQL Server community, but that we also participate in the community. In line with our goals of participating in the community, we have agreed to host the AdventureWorks 2012 sample database on the Azure platform.
    Red Gate has worked to transition this set of databases to our own subscription and will assume the financial responsibility for making this available for the community to use.
    http://www.sqlservercentral.com/articles/Adventureworks/99290/

    This is great news. I no longer have to worry about finding the funds to keep the initiative afloat and the community can feel comfortable that this will be around for the foreseeable future, so thank you to Steve and Red Gate for this undertaking. Special thanks must also go to David Atkinson at Red Gate who first suggested that they take this on.

    I have high hopes that Steve & Red Gate will build upon this with more offerings for the community and equally I also hope I can stay involved somehow. Watch this space!

    @Jamiet

  • Reflections on SQLBits XI

    It’s been a couple of weeks since SQLBits XI happened in Nottingham and I thought I’d jot down a few thoughts for posterity.

    First, the venue. I think it’s fair to say that the overall consensus was that the East Midlands Conference Centre was the best SQLBits venue there has been so far – I’ve been to more than half of them and based on my experience I would agree with that sentiment. The hotel especially was top quality – I was pleased with my room and the breakfasts were way better than the Travelodge standards I’ve become accustomed to at such events. Perhaps the great weather over the weekend helped lift the spirits but I’d say the bar has been set high; I hope future SQLBits conferences are at venues of a similar standard.

    You may have noticed that many of my blog posts over the past year have been related to SSDT and my SQLBits sessions this time around followed that trend. I delivered a session on the Friday, jointly with Craig Ottley-Thistlethwaite, entitled “Real World SSDT” and also delivered my first ever day-long pre-conference seminar on the Thursday entitled “SSDT Database Projects from the ground up”. Delivering a pre-con was slightly petrifying and I spent a large part of the previous three months preparing for it; I hope it was worth it. I had 30 attendees, which I was delighted with (especially given this was my first pre-con), and we established that the furthest anyone travelled was from Romania, though I’ll assume it wasn’t just so he could come to my pre-con :). I had some pleasant feedback via Twitter afterwards from some of the attendees and I’m crossing my fingers that the official feedback is in a similar vein.

    The session I did with Craig went fairly well I thought. I did the first 30 minutes, where I covered the basics of SSDT deployment from a high level, before Craig ratcheted the complexity up a few notches by demoing some interaction between SSDT, MSBuild & Git – really great stuff for those who like to get into the nuts and bolts of this stuff. I was delighted that Craig was willing to do the session with me (in fact it was his idea) as this was his debut public speaking gig and I’m hoping it’s given him the desire to do more in the future.

    On the Saturday my wife’s uncle, John Milne, came to the conference. John has been working in customer service for years but of late has decided that he wants a change of career and to that end has been studying an Open University course in IT. He told me he had particularly enjoyed the database-focused modules of his course and hence I suggested he come to the free Saturday of SQLBits to try and get a flavour of what the industry is all about and perhaps learn about some real-world experiences to add to his academic travails. By the end of the day John told me he’d had a fantastic time, learned a lot, and was hooked. Mission accomplished I’d say. John lives in Leeds so I introduced him to Chris Testa-O’Neill who helps to run the Leeds user group and John should be going along to some user group events in the near future – if you happen to meet him there please welcome him into the fold!

    All that remains for me to say is a massive thank you to the SQLBits committee who do such an amazing job, all voluntarily, in putting this all together. Thank you Simon Sabin, James Rowland-Jones, Chris Webb, Darren Green, Allan Mitchell, Tim Kent, Chris Testa-O’Neill & Martin Bell. I also want to thank all of the volunteer SQLBits helpers that worked tirelessly on the weekend to make sure the whole thing ran smoothly.

    Bring on SQLBits XII!

    @Jamiet

  • Get the SQLBits agenda on your phone, now and forever

    Regular readers of my blog might have realised that I am a huge advocate of subscribable calendars and the data format that underpins them – iCalendar. On 8th Feb 2012 I wrote a blog post entitled SQLBits now publishing all SQLBits agendas as an iCalendar in which I told how the SQLBits committee had published the agenda of the forthcoming SQLBits conference in iCalendar format, along with instructions on how one could view the agenda on their phone. Back then I said:

    …any changes to the SQLBits agenda (e.g. a room change) will automatically flow to your own calendar service and if you have that calendar service (e.g. Hotmail Calendar, Google Calendar) synced to your phone then the changes will automatically show up there too … That new SQLBits subscribable calendar lives at http://sqlbits.com/information/SQLBitsCalendar.ashx; note how it is not specific to a particular conference - subscribe to (don't import) that calendar and the agenda for future SQLBits conferences will automatically flow to you too.
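
    For the curious, iCalendar is just a plain-text format. Here’s a minimal sketch of what a single session might look like inside such a feed (the event details are invented for illustration; only the field names come from the iCalendar specification):

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//SQLBits//Agenda//EN
    BEGIN:VEVENT
    UID:session-0001@sqlbits.com
    DTSTAMP:20130501T120000Z
    DTSTART:20130510T144000
    DTEND:20130510T154000
    SUMMARY:Real world SSDT
    LOCATION:Room 1
    END:VEVENT
    END:VCALENDAR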

    Sure enough I took a look at the calendar on my phone today and saw this:

    image

    (That “Real world SSDT” session at 14:40 is being presented by Craig Ottley-Thistlethwaite and myself by the way. Hope to see you there!)

    This is the value of subscribing as opposed to importing. The agenda for next week’s conference has already flowed to my phone without my having to do anything. This isn’t the same phone that I had a year ago either; by subscribing in my Outlook.com (née Hotmail) Calendar those subscriptions are stored and flow onto any new phone as soon as I type in my credentials.

    I have stated before that I believe subscribable calendars to be a transformative technology and this is why: I only had to subscribe to the calendar once and data that didn’t even exist back then simply flows into my calendar and thus onto my phone. If this interests you then maybe read how I think the same technology could be used to deliver BI data too at Thinking differently about BI delivery. And if you want to subscribe to the calendar yourself go and read the aforementioned blog post; that link again: SQLBits now publishing all SQLBits agendas as an iCalendar.

    @Jamiet

  • Creating your own SQL snippets in SSDT

    SQL Server Data Tools (SSDT) has a neat feature where you can add snippets into your scripts via the right-click context menu:

    image

    I’m finding it very useful indeed. The same feature exists in SQL Server Management Studio (SSMS) as well by the way:

    image

    One thing I really wanted to be able to do was create my own snippets for SSDT and I ventured to the SSDT forum to ask if it was possible. Turns out the answer is “yes”; Gert Drapers replied to my thread with an excellent run-through of how to do it. What Gert’s post didn’t quite clarify is that if you follow his instructions you then have to manually edit the .snippet file that you created so that the language is set to SQL_SSDT:

    image

    (well, I had to do that anyway)
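
    For reference, here’s a minimal sketch of what such a .snippet file might look like with the language set accordingly (the structure follows the standard Visual Studio snippet schema; the title and code body are invented for illustration):

    <?xml version="1.0" encoding="utf-8"?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <Title>Select row count</Title>
        </Header>
        <Snippet>
          <!-- The Language attribute is the bit you have to edit by hand -->
          <Code Language="SQL_SSDT">
            <![CDATA[SELECT COUNT(*) FROM [dbo].[MyTable];]]>
          </Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>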

    Once you do that you’ll be able to import the snippet into Visual Studio (Gert’s post shows you how) and thereafter your snippets will show up in the snippets menu:

    image

    Very handy indeed.

    @Jamiet

  • Data Warehouse modelling deliberations – foreign keys and unknown members

    Earlier today I posted the following question on Twitter:

    image

    Foreign keys in a data warehouse. Yes or no? This discussion is looming at work, i know my position but am interested in what others think.
    (https://twitter.com/jamiet/status/324215986729385986)

    Specifically, I wanted to know if people were in favour of creating foreign keys from their fact tables to their dimension tables or not.

    To say it prompted a good response is a bit of an understatement: 38 responses so far and counting. Here are some of those responses:

    image

    It certainly seems to be an emotive subject and it’s clear (to me) that there’s no correct answer, just lots of opinions. That’s a good thing. The majority of responders appeared to be of the opinion that a data warehouse should contain foreign keys and that is my position too. In this blog post I want to outline why I believe that one should create foreign keys from a fact table to its dimension tables:

    Maintain integrity

    Of course, this is the main reason why foreign keys exist – to protect the integrity of your data. I see no reason not to use them for this purpose in a data warehouse. The main argument that I see against them is that with a sufficiently robust ETL solution they shouldn’t be necessary. That is true, but I would counter with “how do you know that your ETL solution is suitably robust?” I don’t think it’s possible to anticipate every eventuality that may arise and for that reason I like the safety net that foreign keys provide. I liked Devin Knight’s response here: foreign keys breed confidence.
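
    To make that concrete, this is the kind of constraint we’re talking about (a minimal sketch with hypothetical fact and dimension tables):

    /*Hypothetical dimension and fact tables*/
    CREATE TABLE [dbo].[DimProduct] (
        [ProductKey] INT NOT NULL PRIMARY KEY
    );
    CREATE TABLE [dbo].[FactSales] (
        [ProductKey]  INT   NOT NULL,
        [SalesAmount] MONEY NOT NULL
    );
    /*The foreign key from fact to dimension that provides the safety net*/
    ALTER TABLE [dbo].[FactSales]
        ADD CONSTRAINT [FK_FactSales_DimProduct]
        FOREIGN KEY ([ProductKey]) REFERENCES [dbo].[DimProduct]([ProductKey]);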

    Communicate business logic

    When I join a project that has a database in place the first thing I do is try and understand the data model – to do that I go and look at the foreign keys in that database. Understanding the dependencies between entities is crucial in any data model and the best means of communicating those is via foreign keys. If I encounter a database that is bereft of foreign keys then my heart sinks a little.

    Performance

    Foreign keys can, in some circumstances, be beneficial in improving query performance. Take a read of A Check and Foreign Key Constraint Improves Query Performance by Sarvesh Singh or Do Foreign Key Constraints Help Performance? by Grant Fritchey.

    Code Generation

    I am a big fan of generating ETL code where possible and foreign keys can be invaluable when doing so.

     

    As I said there is no correct answer here so if you have any opinions, either agreeing or disagreeing, I look forward to reading your thoughts in the comments below.


    Mark Stacey’s comment prompted an interesting digression into surrogate keys for denoting unknown members, and this is something I have strong opinions on too:

    image

    In short, I don’t like the practice of using “-1” as the surrogate key for an unknown member. My reasoning is simple: I don’t like giving meaning to something that is supposed to be meaningless. How then should we indicate which is the unknown member? I propose a single-row table that includes a column for each dimension table, each with a foreign key to the unknown member in the respective dimension table.

    Moreover I don’t like the practice of starting surrogate key counters from 1; the first value available for the integer datatype in SQL Server is -2147483648, so why not start from that?
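
    Here’s a minimal sketch of both of those ideas together (all object names are hypothetical):

    /*A dimension whose surrogate keys start at the lowest INT value*/
    CREATE TABLE [dbo].[DimCustomer] (
        [CustomerKey]  INT IDENTITY(-2147483648,1) NOT NULL PRIMARY KEY,
        [CustomerName] NVARCHAR(100) NOT NULL
    );
    INSERT [dbo].[DimCustomer]([CustomerName])
    VALUES (N'Unknown'); /*gets whatever key the counter hands out - the value itself carries no meaning*/

    /*A single-row table recording which row is the unknown member in each dimension*/
    CREATE TABLE [dbo].[UnknownMembers] (
        [DimCustomerUnknownKey] INT NOT NULL
            REFERENCES [dbo].[DimCustomer]([CustomerKey])
        /*...plus one column per additional dimension table...*/
    );
    INSERT [dbo].[UnknownMembers]([DimCustomerUnknownKey])
    SELECT [CustomerKey] FROM [dbo].[DimCustomer] WHERE [CustomerName] = N'Unknown';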

    I discuss both of these issues in much more depth at Considering surrogate keys for Unknown members.

    Again if you have any thoughts on these subjects please put them in the comments. If nothing else I find it both fun and educational to debate this stuff.

    @Jamiet

    Update: Chris Adkin posted a comment below that contained a link to Microsoft's own guidance on building data warehouses, where it is stated:

    "Many physical designs for data warehouses follow the star schema but do not completely specify the relationships between the fact and dimension tables, as mentioned earlier for foreign key constraints, for instance. Without the foreign key constraints explicitly specified, SQL Server must depend on heuristics to detect star schema query patterns."

    Chris' take on this:

    "So, assuming we are talking about a Kimball DW, there is most definitely value in using foreign key contstraints as this provides a fail safe for the heuristics getting it wrong."

    Thanks Chris. 

  • Geoflow? Is that it, Microsoft?

    I didn’t travel to the SQL Pass Business Analytics conference this week but I keenly followed what was going on via the #passbac hashtag on Twitter. Seemingly the big announcement was Geoflow Preview for Excel 2013, an add-in for Excel that visualises data over space and time (read more at Public preview of project codename “GeoFlow” for Excel delivers 3D data visualization and storytelling). Geoflow certainly looks compelling at first glance, though I must say I found it rather strange that it got top billing given that Microsoft were talking about it at some SharePoint conference five months ago; nonetheless the keynote demo was apparently very impressive indeed. Here’s a screenshot of Geoflow:

    image

    I think Geoflow looks great, I really do; the questions I immediately had about it were:

    • Can I share my Geoflow’d Excel workbooks and have people view them on SharePoint? Answer: No, there’s no SharePoint collaboration story.
    • Is Geoflow part of Power View? Answer: No, it’s a separate installation.
    • OK so I have to install it. I presume then that it’s available in the Office App Store given that’s the new model for distributing Excel add-ins? Answer: No, you have to download it from Microsoft’s download site.

    So the big reveal from Microsoft at this conference was an Excel add-in that does something very very cool but had already been announced, is only a preview, doesn’t fit with Microsoft’s BI collaboration strategy, doesn’t use their modern distribution platform and isn’t part of their Excel-based data visualisation tool. Well thank god I didn’t stump up the cost of travel, accommodation, loss of income and time away from the family for that! Doubtless there were a lot of other good reasons to go to the conference but I would have been going with high expectations of news from Microsoft that was going to be compelling and help me sell Microsoft’s BI offering to my clients – Geoflow doesn’t do that, not by a long chalk.


    What I was hoping for, nay expecting, was a concrete announcement regarding Microsoft’s mobile BI strategy. We first saw Power View demonstrated on an iPad at the PASS 2011 conference and I assumed that in the intervening eighteen months they might have built something we could actually install and play around with. Apparently not.

    Microsoft are getting killed in this area. At my current client all the management folk walk around with iPads glued to their hands – this is the tool on which they consume information and Microsoft doesn’t have anything for them. I was working for a client two years ago that had just invested in a product called RoamBI  because it enabled them to view Reporting Services reports on an iPad. Two years ago for pity’s sake, and Microsoft haven’t released anything mobile-BI-related since!

    @Jamiet

  • SSIS gotcha – Regional Settings can affect your expressions

    I recently stumbled across a nuance of the SSIS expression language which, when you think about it, kinda makes sense – but it does help to be aware of it. It’s concerned with the casting of datetime values using the SSIS expression language.

    Take the following expression:

    (DT_WSTR,30) @[System::ContainerStartTime]

    That expression casts a datetime value into a string value. If I evaluate that with my OS Regional Settings set to English (United Kingdom) I see this:

    image

    If I set my OS Regional Settings to English (United States) I see this:

    image

    Note how that simple change to the regional settings has caused the result of my expression to change. This could have dangerous consequences; for example, if you are using the result of this expression in a dynamically built SQL statement (as I was) then one of two things will happen: either you will get the wrong result or you’ll get an error. Observe how, in my dynamically built SQL statement, I’m CONVERTing a string literal (which is constructed using the above expression) to a datetime value:

    image 

    However, with a simple change of my regional settings to English (United States) I see this:

    image

    and when you run that particular SQL statement in SSMS:

    image

    it blows up!
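
    You can reproduce the same failure mode directly in T-SQL; here’s a hedged sketch, with SET DATEFORMAT standing in for the change of regional settings (the literal is illustrative):

    /*Under UK-style settings a day-first literal converts fine...*/
    SET DATEFORMAT dmy;
    SELECT CONVERT(DATETIME, '25/05/2013'); /*succeeds: 25th May 2013*/

    /*...but under US-style settings the same literal is read month-first and fails*/
    SET DATEFORMAT mdy;
    SELECT CONVERT(DATETIME, '25/05/2013');
    /*Msg 242: The conversion of a varchar data type to a datetime data type
      resulted in an out-of-range value.*/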

    Definitely one to be aware of! Watch those Regional Settings and their effect on the casting of dates in the SSIS expression language!

    What should you do instead?

    If you need a failsafe way of constructing a date that doesn’t rely on Regional settings then consider something like the following:

    (DT_WSTR,4)YEAR( @[System::ContainerStartTime] ) + "-" +
    RIGHT("0" + (DT_WSTR,2)MONTH( @[System::ContainerStartTime] ), 2) + "-" +
    RIGHT("0" + (DT_WSTR,2)DAY(@[System::ContainerStartTime] ), 2)

    That expression will build a date string with format YYYY-MM-DD (which is the ISO-ratified unambiguous way of representing a date) regardless of Regional Settings:

    image

    @Jamiet

  • AdventureWorks on Azure donates GBP351.49 to War Child. THANK YOU!

    Just over 12 months ago I published a blog post entitled AdventureWorks2012 now available for all on SQL Azure. In it I explained that I had set up a Windows Azure SQL Database (then known as SQL Azure database) for the SQL Server community to use and hence familiarise themselves with the SQL Azure offering – I called this initiative “AdventureWorks on Azure”. Judging by the comments that were left in the table that I set up for that very purpose it seems that a number of people were in favour and made use of it:

    image

    The credentials to enable you to connect up and leave your own comment are in that same blog post!

    Running a SQL Azure database is of course not free, hence I asked the community to support the initiative by contributing via PayPal to its upkeep. I hoped that there might be enough left over to make a charitable donation and, as you have probably gathered from the title of this blog post, I’m delighted to be able to say that that did indeed happen. A few minutes ago I made a contribution of £351.49 to War Child (the same charity that the SQL Server MVP Deep Dives book donated to).

    image

    What do War Child do? In their own words:

    We look forward to a world in which children's lives aren't torn apart by war.

    There's 27 of us in an old false-teeth factory in north London.
    We're trying to change the world.

    You could say we're a small charity with big ambitions.

    We're directly transforming the lives of tens of thousands of children. And we're campaigning to improve the lives of millions more.

    Thank you to everyone that made a donation especially to the extremely generous person (who shall remain nameless – you know who you are) that contributed $105.


    I would like to keep AdventureWorks on Azure going for another year at least but to do that I need more donations. If you would like to support this initiative for another year take a read of the instructions below that I have copied from last year’s blog post:

    AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to adventureworksazure@hotmail.co.uk:

    Any amount, no matter how small, will help.

    Thank you once again to everyone that donated!

    @Jamiet

  • Calendar as a canvas for getting things done

    In the same way that a map is a natural choice for displaying data with a geography element to it, I’ve long opined that a calendar is a natural choice for displaying data with a time element to it; my main output on this topic is at Thinking differently about BI delivery.

    With that in mind I recently watched an interesting talk by James Whittaker entitled A New Era of Computing in which he opined that the era of searching and browsing for information is dying and we are now moving into an era of doing. It’s an interesting talk and if you have an hour to spare it might well be worth watching; you can find a video of the talk here: http://channel9.msdn.com/Events/ALM-Summit/ALM-Summit-3/A-New-Era-of-Computing

    James gave an example of using a calendar as a canvas for booking a holiday and it really struck a chord with me. In this hypothetical example the steps of finding an appropriate time for a holiday, clearing the decks of all other appointments, finding flights and activities….they were all done within the context of a calendar. There were no 3rd party apps involved, no web pages – the calendar was the canvas upon which all of these tasks were done. To someone like me who strongly believes that calendars are massively underutilized as a means for displaying information the notion that a calendar could also be used to get stuff done was both liberating and illuminating.

    image

    If, like Scott Adams and me, you agree that calendars are criminally undervalued in an era of information discovery then you could do a lot worse than spend an hour watching James’ presentation. Really thought-provoking stuff.

    @Jamiet

  • Detecting Database Drift using SSDT

    One of the nice things about doing user group presentations is that when you’re putting the presentation together you invariably learn about features that were previously unbeknown to you; so it proved as I stumbled upon SSDT’s database drift detection features while researching material for my forthcoming pre-conference seminar SSDT from the ground up.

    What is database drift?

    You have probably experienced database drift, you just didn’t happen to refer to it as that. More likely you might have spluttered the following, perhaps sprinkled with a few expletives:

    • “Who put these tables in my database?”
    • “Who changed this view definition?”
    • “Why is this guy in db_owner?”
    • “Where has my stored procedure gone?”

    In other words database drift can loosely be described as

    stuff that appears, gets removed, or gets modified in your production databases that perhaps shouldn’t be

    Detecting database drift using SSDT

    If you’re using SSDT to manage your database schema then you probably consider the source code in your SSDT projects to be “the truth” and hence anything that appears in your databases that is not in your source code would be considered database drift.

    In order to detect database drift using SSDT you must ensure that your database is registered as a Data-Tier Application. This can be done when you publish your database project (i.e. dacpac) by selecting “Register as a Data-tier Application”:

    image

    Thereafter you can check for database drift on subsequent publishes by selecting “Block publish when database has drifted from registered version”:

    image
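
    As far as I know the same two options can also be set when publishing from the sqlpackage.exe command line, via publish properties; a sketch, with placeholder dacpac/server/database names:

    >SqlPackage.exe /A:Publish /sf:"Database1.dacpac" /tsn:"(localdb)\Projects" /tdn:"Database1" /p:RegisterDataTierApplication=True /p:BlockWhenDriftDetected=True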

    If you check that box and database drift has occurred then the publish operation will fail and you see an appropriate message in the Data Tools Operations pane, “Publish stopped. The target database has drifted from the registered version.”:

    image

    Clicking the View Report hyperlink displays the Drift Report which is represented in an XML file:

    <?xml version="1.0" encoding="utf-8"?>
    <DriftReport xmlns="http://schemas.microsoft.com/sqlserver/dac/DriftReport/2012/02">
      <Additions>
        <Object Name="[View_1]" Parent="[dbo]" Type="SqlView" />
        <ExtendedProperty HostName="[View_1]" HostParent="[dbo]" HostType="SqlView" Count="2" />
      </Additions>
      <Removals />
      <Modifications />
    </DriftReport>

    In this case a view called [dbo].[View_1] has been added to the target database. That view did not exist in the dacpac that was most recently deployed against the database thus the publish operation fails. Keeping one’s deployed databases as “clean” as possible is something that I am all in favour of so personally I think this is a pretty cool feature.

    Generating a drift report from the command-line

    The drift report can be generated by the command-line tool sqlpackage.exe. To do so you need to define:

    • the action to be DriftReport
    • a target server & database
    • an output file

    >SqlPackage.exe /A:DriftReport /tsn:"(localdb)\Projects" /tdn:"Database1" /op:DriftReport.xml

    image

    As far as I know there is no support for generating a drift report from SQL Server Management Studio (SSMS). I’m hoping that changes so that this feature gets more visibility.

     

    If you have any comments stick them in the comments section below!

    @Jamiet

  • SSDT naming confusion cleared up. Somewhat.

    In March 2012 I published SSDT - What's in a name? where I lamented the mistakes that Microsoft made in the naming of SQL Server Data Tools.

    …official documentation stating that SSDT includes all the stuff for building SSIS/SSAS/SSRS solutions (this is confirmed in the installer, remember) yet someone from Microsoft tells him "SSDT doesn't include any BIDs components".

    I have been close to this for a long time (all the way through the CTPs) so I can kind of understand where the confusion stems from. To my understanding SSDT was originally the name of the database dev stuff but eventually that got expanded to include all of the dev tools - I guess not everyone in Microsoft got the memo.

    Since then I’ve seen lots of questions pertaining to SSIS/SSAS/SSRS being posted on the SSDT forum on MSDN which, frankly, is the wrong place for them.

    With the release of the SSIS/SSAS/SSRS project templates for Visual Studio 2012 Microsoft have attempted to clear up the confusion. Matt Masson from the SSIS product team attempts to explain in his usual jovial way:

    note that the component was renamed – we added “Business Intelligence” to the end to distinguish it from the SQL Server Data Tools (Juneau). We now refer to it as SSDTBI, rather than “SSDT, no, not that one, the other one – you know, the one that comes with the SQL installation media, not the one you download”.
    http://www.mattmasson.com/2013/04/installing-ssis-for-visual-studio-2012/

    So to clarify, it’s now:

    • SSDT – for building databases
    • SSDTBI – for building SSIS/SSAS/SSRS solutions

    Got it? Good!

    That may all seem slightly confusing but it’s a darn sight clearer than it was when SQL Server 2012 was released over a year ago. And if nothing else you have to be amused by Microsoft’s penchant for ever-lengthening acronyms; the last six-letter acronym I can remember seeing was BODMAS!!!

    @Jamiet
