THE SQL Server Blog Spot on the Web


Jamie Thomson

This is the blog of Jamie Thomson, a freelance data mangler in London

  • Thoughts on Power BI for Office 365

    Three months ago I published a fairly scathing attack on what I saw as some lacklustre announcements at Microsoft’s BI conference. In that blog post, Geoflow? Is that it, Microsoft? I opined that their announcement of Geoflow wasn’t particularly earth-shattering and that what they really should have been talking about was a mobile BI story. Apparently I wasn’t the only one who thought so, judging by the majority of the 52 comments that post received. Here are a few choice quotes:

    Agree wholeheartedly JT.  Given Sharepoint is the BI collaboration tool I would expect Geoflow to be integrated. I am also not a fan of separate tools for everything.  PV + Geoflow would be good/Better. – Allan Mitchell

    Given the hype, I'd have expected a lot more from MS on spatial visualization. Between pie charts on maps in PowerView, no HTML5 support for PowerView, and the whole "PowerView can't be used with the other SharePoint components, it only really works fullscreen and can't accept parameters" story, I really think MS has lost their way on this. – Mark Stacey

    Two weeks after the Data Explorer release, I did a demo at a SSUG with "teaser trailers" of GeoFlow and machine learning. I mentioned #passbac and boldly voiced that I expected mobile to be announced because I was so sure that PASS could not launch a new conference without an announcement of this scale and impact ... GeoFlow does not measure up. It's a "cool" product, but doesn't help the MS platform anywhere close to the gaping hole called mobile. – Toufiq Abrahams

    In the time it's taken Microsoft to release nothing useful that's mobile BI related (Geoflow looks cool but its practical use is within the BI stack is virtually nil) and at the same time manage to piss off the Excel community by effectively restricting the adoption of PowerPivot/Power View, most of the competition have not only released BYOD-friendly mobile BI apps but the most forward thinking have completely re-written their offerings in HTML5 (QlikView .Next for example). So still we wait, and at every pre-sales meeting where the "Can our execs use MSFT BI on their iPads?" question comes up (80% of the time now), I have to shift uncomfortably in my seat and say "No, and what's more, even as a MSFT partner, we have no clue when we will be able to say Yes!" Shambolic. – Will Riley

    And so having spent 2 years pushing the SharePoint message, despite huge reservations from the partner community, here we have a release of a front end tool that isn't supported through SharePoint. You want to put the fun back into BI? It might be fun producing flashy but ultimately pointless demos for serial conference attendees but it certainly isn't fun trying to position a Microsoft BI strategy to enterprise customers. And how many different but overlapping tools do we need? I now need two monitors side by side to view my Excel ribbon. CONSOLIDATION please. – Mick Horne

    Another Partner here agreeing that it does not help us sell a coherent enterprise BI strategy.  This is what we need to see from MS far more than yet another tool. – Calvin Ferns

    You get the idea!

    Well, better news cometh. Three months after that post and Microsoft have finally announced something that is intended to alleviate many of the concerns expressed above – it’s called Power BI for Office 365 and it was announced at Microsoft’s Worldwide Partner Conference. Power BI for Office 365 brings together the existing disparate tools PowerPivot (now renamed Power Pivot), Power View, Data Explorer (now renamed Power Query) and Geoflow (now renamed Power Map) into a single offering, incorporates an online collaboration portal and also (finally) mobile apps for iPad and Windows 8/RT (no word on Android or Windows Phone yet).

    As a BI/data integration practitioner in the Microsoft space I’m excited about this, I really am. I’ve had some illuminating Twitter conversations in recent days with people that clearly have a different opinion, and that’s OK. By the end of this post I hope I’ve explained exactly why I’m excited about Power BI.

    Branding and focus

    The first important take away is that they’re coalescing under a single moniker, a single brand – “Power”. Power BI, Power Pivot, Power Query, Power View, Power Map (not Power Maps as some commentators would have you believe). Whether or not you like the word and its connotations, it’s great to see consistency in the naming. Also, as confirmed by Kasper de Jonge, there is now a space between all the words (i.e. Power Pivot not Powerpivot) which I think is also a good thing – consistency is important.

    Above all I sense some focus around a coherent offering here whereas beforehand Microsoft’s BI offering seemed rather rudderless and haphazard – that should be good news to those Microsoft partners quoted above who lament the current state of affairs. Will Power BI for Office 365 have frustrating limitations and bugs? Sure it will, but I am optimistic that it has sufficient priority that those things will be addressed and not left to fester.

    Collaboration

    Most of the articles I’ve seen relating to Power BI up to now seem focused on the four aforementioned Excel add-ins but, to me, the most interesting aspect of this suite is the BI portal portion. Frequent readers of my blog may have noticed a theme running through a lot of my posts down the years, that of sharing pointers to data as opposed to copies of it (I hate getting email attachments for example, and I always espouse subscribing to calendars rather than importing them). The BI portal portion of Power BI for Office 365 speaks to this theme by hosting queries that are built in and published from Power Query so that they can be consumed by other people. The obvious point here is that by sharing a query one is not sharing a copy of the data but a pointer to the data, and that is very exciting to me. Moreover it’s only the data that is being pointed at; presentation and further manipulation thereof is left as an exercise for the consumer and again, that’s something I really like. This is not a panacea for the proliferation of data silos (mostly in Excel and Access) that are littered throughout organisations but I am hopeful that it’s a step in the right direction.

    I am also hopeful that the BI Portal becomes a focal point for related offerings. For example, I would love to see SQL Server Master Data Services (MDS) offered through the BI Portal and here’s an anecdote that explains why.

    Only yesterday on my current project we were discussing a data hierarchy that is an important dimension of the organisation – stock status (my client is an online retailer so stock status is rather crucial). It transpired that the hierarchy that the business used to classify stock status was not actually stored anywhere other than in people’s heads, hence if we were to provide data against that stock status hierarchy we would need to store the hierarchy somewhere. This is a SQL Server shop so the idea of using MDS arose but was quickly discounted as the provisioning of new infrastructure was considered too big an undertaking to hold what amounts to a lookup table (let’s be honest, whatever you dress MDS up as, it’s still basically a bunch of lookup tables).
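    To make that concrete, the entire artefact we needed amounts to something like the following self-referencing lookup table (the table and column names here are invented for illustration, they are not from the actual project):

    ```sql
    -- A hypothetical sketch of the "lookup table" that MDS would have managed for us.
    CREATE TABLE dbo.StockStatusHierarchy
    (
        StockStatusCode   varchar(20) NOT NULL PRIMARY KEY,
        StockStatusName   varchar(50) NOT NULL,
        ParentStatusCode  varchar(20) NULL
            REFERENCES dbo.StockStatusHierarchy (StockStatusCode) -- self-reference forms the hierarchy
    );
    ```

    MDS would add versioning, security and a maintenance UI on top of this, but the underlying data is no more complicated than that – which is why provisioning dedicated infrastructure for it felt so disproportionate.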

    This is a perfect scenario for MDS and I am hopeful that Microsoft are considering offering MDS as a hosted service because if it were it would have been a no-brainer for us to use – we would not have to provision hardware and do installations, all that would be required would be to walk up with a credit card. MDS as a software-as-a-service (SaaS) offering through the Office 365 BI Portal would be, in my opinion, compelling (my current client is an Office 365 customer by the way). The same applies to Data Quality Services (DQS) as well, that’s another service that would benefit from being offered on a SaaS basis.

    What else might benefit from being offered through a single BI Portal? Well if we look outside the four walls of Microsoft there are plenty of companies that might welcome the opportunity to provide their wares through Office 365 – Tableau, Microstrategy, Business Objects (SAP) and Predixion are some that spring to mind. This is rather future-thinking I admit but I do think there’s real potential here.

    I’ve discussed Power BI for Office 365 with some folks who opine that this Power BI announcement is little more than a renaming of existing Excel add-ins but in my opinion the provision of a BI Portal for collaboration is the big story here.

    Data Governance

    Another big take away (which has not been apparent from some of the early demos but which I have gleaned from early exposure to Power BI) is that data stewardship is a big piece of this puzzle. Data stewards will be relied upon to:

    • grant/revoke access to data sources, reports and queries
    • curate metadata/annotations on the Power Pivot model and Power Query queries in order to drive the Natural Query capabilities and surfacing in Power Query search results
    • monitor usage via the Data Steward Portal, thus allowing stale and/or unused data to be expired
    • manage data refreshes (including installation of the Power BI on-premises agent that provides refreshes from on-premises data sources)
    • evangelise the existence and use of data available through Power BI

    amongst other things.

    It is clear to me that in sufficiently large organisations this "data steward" role is going to be a full-time role and, again in my opinion, Power BI implementations could live or die by the diligence of the nominated data steward(s). I view this as a good thing – I’ve seen countless organisations where there is a chronic lack of governance applied to data and consequently data quality suffers – anything that brings the issue of data governance to the fore is a boon. I envisage that in a few years organisations will have a dedicated "data stewardship" function; data stewardship will become a discipline all of its own and there will be an ecosystem that grows up around it (tooling, forums, training courses, conference tracks etc…). As I alluded, I think this is a good thing – organisations that properly govern their data will be leaders in the digital economy. It’s also a great opportunity for people like myself who provide consultancy services – helping organisations to make better use of their data is where I want to add value.

    Microsoft will not of course talk about this explicit need for a Data Steward as it detracts from the self-service mantra that they are espousing – at the end of the day Microsoft want to sell licenses, and I’m fine with that. The community can fill the void here with appropriate messaging and, again, I see this as an opportunity. Power BI itself is not going to solve the data governance problems that proliferate in organisations today (only human beings can do that) but I harbour hopes that it can raise awareness of the issue and enable change.

    Office 365

    Ah, the Office 365 conundrum. If you’re not using Office 365 then there’s no Power BI for you. Yet. Personally this doesn’t bother me; my most recent two clients are both Office 365 customers (as am I) and the notion of SaaS (which Office 365 quite clearly is) resonates with me a lot better than on-premises notions of procuring software/hardware then having to install/configure it all. Here’s my money, now get out of the goddamn way – I much prefer that way of working. Evidently a lot of folks disagree and would rather run their own commodity services like email & IM. I can understand that and if people feel the need to do so, good luck to ‘em – like it or not though it’s clear that Microsoft are pushing hard on Office 365 and I don’t imagine that changing any time soon.

    HTML5 Power View

    There was a rumour flying around on the day of the Power BI announcement that Power View had been rewritten to use HTML5 instead of Silverlight. Well, that’s partially true: HTML5 variants of Power View are coming but, until you hear any different, Power View in SharePoint and in Excel is still going to be using Silverlight. I’m looking forward to the day when Power View runs everywhere without Silverlight as this presents a great opportunity for Power View to proliferate. I recently tweeted:

    PowerView needs to be ubiquitous. Wherever there's data(SP, PerfMon, SSMS, Dynamics, Eventvwr, etc) should be a "view in PowerView" button
    https://twitter.com/jamiet/status/347449920413900800

    I really hope this happens. Power View is a great ad-hoc visualisation tool – I hope it spreads like a weed throughout Microsoft products. Moreover I hope it spreads through the web too; Microsoft already provide the Excel interactive button that can turn any HTML table into an Excel workbook, how about a similar button that turns any HTML table into a Power View report? (You can have that idea for free, Microsoft. You’re welcome!)

    I have quibbles as well

    Not all that’s been announced for Power BI for Office 365 is quite as I would like it to be. I spoke in glowing terms above about the ability to share Power Query queries via the BI Portal; the downside is that the consumer also needs to have Power Query (and Excel) installed – Power Query is both the query builder and the query viewer. In the tech preview of Data Explorer that was made available two and a half years ago that wasn’t the case: queries could be shared and were then available as an OData feed, which meant that anything that could consume OData (which, these days, is rather a lot) could make use of them. Indeed I built my own query that made Google Reader subscriber counts available as an OData feed. I am passionate about making data available in a manner that doesn’t require the consumer to be using any particular software so I harbour high hopes that this feature returns to Power BI for Office 365 in the future and Power Query can live in the feeds ecosystem that I believe is coming.

    I’m still not happy that Power Map is a separate thing – I am still of the opinion that it should simply be built into Power View.

    There is currently no developer story, Power BI for Office 365 isn’t extensible. That would change if queries were exposed as OData.

    Conclusion

    That’s my take on what has been announced for Power BI for Office 365. I see this as a more focused approach to BI from Microsoft after years of shooting in the dark (anyone remember ProClarity??). I’m looking forward to exploring this when it becomes available next month.

    @Jamiet

  • Event Time Announcer – simple and easy. Please use it!

    I’m a big proponent of using online services to simplify our lives, which is why I’m a huge advocate of subscribable calendars (aka iCalendar) which I talk about frequently on this blog. In a similar vein I’ve come across a service from http://timeanddate.com called Event Time Announcer; the premise is incredibly simple but very useful. It allows you to specify a date and time and it will show you what the time will be in various cities of the world at that exact moment; you can then share those results using a URL. For example, following this URL: http://www.timeanddate.com/worldclock/fixedtime.html?msg=webcast&iso=20130709T18&p1=3592&ah=1 shows the start and end time of a meeting that I concocted for various cities around the world:

    image

    I constantly have to wrestle with meeting invites littered with meaningless (to me) time zone codes like {ET, CET, PST, PDT} so a service like Event Time Announcer could be invaluable, if only people could be persuaded to use it. So, I implore you (and by “you” I mainly mean people in Microsoft), if you’re sending me times in your own time zone please have the decency to provide a link from Event Time Announcer so I can fathom what they actually mean. It would be greatly appreciated!

    @Jamiet

  • sp_ssiscatalog v1.0.4.0 available now with new feature - view most recent failed execution [SSIS]

    Development of sp_ssiscatalog continues apace with a new release today (4th July 2013). In this release I have enabled a new action called ‘exec_failed’ which can be simply described as “Show me the most recent failed execution”.

    The syntax is very simple:

    exec sp_ssiscatalog @action = 'exec_failed'
    exec sp_ssiscatalog @a = 'exec_failed' /*shortened form*/

    That command will dig into the SSIS catalog to find the most recent failed execution and display the same information that would be displayed, by default, for any execution.

    image

    You can use the existing parameters @exec_warnings, @exec_errors, @exec_events, @exec_executable_stats, @exec_events_packages_excluded to modify the resultsets that are returned by default; hence, if you want to view only the errors it’s:

    exec sp_ssiscatalog
        @action = 'exec_failed',
        @exec_warnings = 0,
        @exec_events = 0,
        @exec_execution = 0,
        @exec_executable_stats = 0

    image

    The main aim here was to provide easy access to information about the most recent failed execution but I implemented it in such a way that it’s easy to return information on any execution status, hence all of the following can be used:

    exec sp_ssiscatalog @action = 'exec_created'
    exec sp_ssiscatalog @action = 'exec_running'
    exec sp_ssiscatalog @action = 'exec_canceled'
    exec sp_ssiscatalog @action = 'exec_failed'
    exec sp_ssiscatalog @action = 'exec_pending'
    exec sp_ssiscatalog @action = 'exec_ended_unexpectedly'
    exec sp_ssiscatalog @action = 'exec_succeeded'
    exec sp_ssiscatalog @action = 'exec_stopping'
    exec sp_ssiscatalog @action = 'exec_completed'
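    For context, each of these actions essentially amounts to filtering the SSIS catalog’s [catalog].[executions] view on its status column (1 = created, 2 = running, 3 = canceled, 4 = failed, 5 = pending, 6 = ended unexpectedly, 7 = succeeded, 8 = stopping, 9 = completed). As an illustration of the idea only (this is not sp_ssiscatalog’s actual implementation), a hand-rolled “most recent failed execution” lookup might look like this:

    ```sql
    SELECT TOP 1
           execution_id,
           folder_name,
           project_name,
           package_name,
           start_time,
           end_time
    FROM   SSISDB.catalog.executions
    WHERE  status = 4          -- failed
    ORDER  BY start_time DESC; -- most recent first
    ```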

    Download the goods from here. Steps to install are at Installation Instructions.


    It’s worth pointing out that this feature was implemented solely because it was requested by someone going by the name mbourgon. He/she left a comment on my blog post Introducing sp_ssiscatalog v1.0.0.0; I thought it was a great idea so I put it in. If you have an idea for something you’d like to see in sp_ssiscatalog then let me know, I might just put that in too!

    @Jamiet

  • June 2013 release of SSDT contains a minor bug that you should be aware of

    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT and given the problems that it created yesterday on my current gig I thought it prudent to write this blog post to inform people of it.

    I’ve built a very simple SSDT project to reproduce the problem that has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]:

    image

    The two tables have exactly the same definition, both a have a single column called [Id] of type integer.

    CREATE TABLE [dbo].[Table1]
    (
        [Id] INT NOT NULL PRIMARY KEY
    )

    My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results:

    CREATE PROCEDURE [dbo].[Procedure1]
    AS
        SELECT     t1.*
        FROM       Table1 t1
        INNER JOIN Table2 t2
            ON     t1.Id = t2.Id
        ORDER BY   Id

    Now if I create those three objects manually and then execute the stored procedure, it works fine:

    image

    So we know that the code works. Unfortunately, SSDT thinks that there is an error here:

    image

    The text of that error is:

    Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Table1].[Id] or [dbo].[Table2].[Id].

    It’s complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point “OK, just stick a table alias into the ORDER BY predicate and everything will be fine!” That’s true, but there’s a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine – he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this scenario. Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic, running around the department to ensure that none of our developers (of which there are many) upgraded their SSDT installation if they wanted to carry on being productive for the rest of the day.
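    For completeness, here is the amended stored procedure with a table alias added to the ORDER BY predicate – the simple code modification that satisfies the new validation:

    ```sql
    CREATE PROCEDURE [dbo].[Procedure1]
    AS
        SELECT     t1.*
        FROM       Table1 t1
        INNER JOIN Table2 t2
            ON     t1.Id = t2.Id
        ORDER BY   t1.Id -- alias added, so the reference is no longer ambiguous
    ```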

    Also bear in mind that as soon as a new drop of SSDT comes out, the previous version is instantly unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version.

    Just thought you should know! In the grand schema of things this isn’t a big deal as the bug can be worked around with a simple code modification but forewarned is forearmed so they say!

    Last thing to say, if you want to know which version of SSDT you are running check my blog post Which version of SSDT Database Projects do I have installed?

    @Jamiet

  • Weekend reading: Microsoft/Oracle and SkyDrive based code-editor

    A couple of news items caught my eye this weekend that I think are worthy of comment.

    Microsoft/Oracle partnership to be announced tomorrow (24/06/2013)

    According to many news sites Microsoft and Oracle are about to announce a partnership (Oracle set for major Microsoft, Salesforce, Netsuite partnerships) and they all seem to be assuming that it will be something to do with “the cloud”. I wouldn’t disagree with that assessment; Microsoft are heavily pushing Azure and Oracle seem (to me anyway) to be rather lagging behind in the cloud game. More specifically folks seem to be assuming that Oracle’s forthcoming 12c database release will be offered on Azure.

    I did a bit of reading about Oracle 12c and one of its key pillars appears to be that it supports multi-tenant topologies, and multi-tenancy is a common usage scenario for databases in the cloud. I’m left wondering, then: if Microsoft are willing to push a rival’s multi-tenant solution, what is happening to its own cloud-based multi-tenant offering – SQL Azure Federations? We haven’t heard anything about federations for what now seems a long time and moreover the main Program Manager behind the technology, Cihan Biyikoglu, recently left Microsoft to join Twitter. Furthermore, a Principal Architect for SQL Server, Conor Cunningham, recently presented the opening keynote at SQLBits 11 where he talked about multi-tenant solutions on SQL Azure and not once did he mention federations. All in all I don’t have a warm fuzzy feeling about the future of SQL Azure Federations so I hope that question gets asked at some point following the Microsoft/Oracle announcement.


    Text Editor on SkyDrive with coding-specific features

    Liveside.net got a bit of a scoop this weekend with the news (Exclusive: SkyDrive.com to get web-based text file editing features) that Microsoft’s consumer-facing file storage service is going to get a new feature – a web-based code editor. Here’s Liveside’s screenshot:

    I’ve long had a passing interest in online code editors, indeed back in December 2009 I wondered out loud on this blog site:

    I started to wonder when the development tools that we use would also become cloud-based. After all, if we’re using cloud-based services does it not make sense to have cloud-based tools that work with them? I think it does.
    Project Houston

    Since then the world has moved on. Cloud 9 IDE (https://c9.io/) has blazed a trail in the fledgling world of online code editors and I have been wondering when Microsoft were going to start playing catch-up. I had no doubt that an online code editor was in Microsoft’s future; it’s an obvious future direction. Why would I want to download and install a bloated text editor (which, arguably, is exactly what Visual Studio amounts to) and have to continually update it when I can simply open a web browser and have ready access to all of my code from wherever I am? There are signs that Microsoft is already making moves in this direction; after all, the URL for their new offering Team Foundation Service doesn’t mention TFS at all – my own personalised URL for Team Foundation Service is http://jamiet.visualstudio.com – and using “Visual Studio” as the domain name for a service that isn’t strictly speaking part of Visual Studio leads me to think that there’s a much bigger play here and that one day http://visualstudio.com will house an online code editor.

    With that in mind then I find Liveside’s revelation rather intriguing, why would a code editing tool show up in Skydrive? Perhaps SkyDrive is going to get integrated more tightly into TFS, I’m very interested to see where this goes.

    The larger question playing on my mind though is whether an online code editor from Microsoft will support SQL Server developers. I have opined before (see The SQL developer gap) about the shoddy treatment that SQL Server developers have to experience from Microsoft and I haven’t seen any change in Microsoft’s attitude in the three and a half years since I wrote that post. I’m constantly bewildered by the lack of investment in SQL Server developer productivity compared to the riches that are lavished upon our appdev brethren. When you consider that SQL Server is Microsoft’s third biggest revenue stream it is, frankly, rather insulting. SSDT was a step in the right direction but the hushed noises I hear coming out of Microsoft of late in regard to SSDT don’t bode fantastically well for its future.

    So, will an online code editor from Microsoft support T-SQL development? I have to assume not given the paucity of investment on us lowly SQL Server developers over the last few years, but I live in hope!


    Your thoughts in the comments section please. I would be very interested in reading them.

    @Jamiet

  • Want to prove something with data? Aggregate it!

    Recently Microsoft announced that they’re releasing a new XBox, which was apparently big news and was reported on at length around the globe. One article in particular on the BBC News website caught my attention because it contained what I thought to be some really bogus reporting:

    “From first appearances tech writers were more impressed by what they saw than the public posting on social media.”

    Twitter:
    Every time you say Xbox One, you feel like to have to add the word "new" in front of it to avoid confusion.
    @MurraySwe
    The one thing I came away with was that Microsoft has trouble with numbering. @FinleyNathan
    Am I the only one who thinks the new Xbox One looks like a VCR? @dandahlberg

    Now perhaps it’s true that reaction on social media was negative but the BBC’s attempt to justify that claim by cherry-picking some random tweets from a sea of millions is, in my opinion, misleading, lazy and pointless. And I said so:

    image
    https://twitter.com/jamiet/status/337151547953778688

    So here’s an idea. Instead of picking random tweets, what if the BBC attempted to gauge opinion by analysing the overall sentiment of those tweets instead? It’s not even that difficult; there are free services such as What does the internet think that do this for you. Here’s a screenshot I took from that service at the same time as I was reading the BBC report:

    image
    http://www.whatdoestheinternetthink.net/xbox

    What does the internet think is doing something smart but ostensibly very simple – it is aggregating tweets in order to give a better reflection of the sentiment toward “xbox”. Now imagine if the BBC reporter who wrote the above article had chosen to measure the sentiment before and after the announcement; would that not have given a better reflection of the reaction from social media than a few random tweets? It’s certainly more interesting and newsworthy to me; I couldn’t care less about the opinions of individuals I have never met but measuring the overall sentiment – well, that means something. Moreover, I’d like to know if the announcement has affected that sentiment positively or negatively. As any data analyst (or BI guy) will tell you, it’s not the numbers themselves that are important; whether they are trending up or down matters more. Now what I would love is a service that showed the trend of sentiment on social media over a given period of time so that I could look back historically; actually, collecting that data seems like a great use case for a cloud-based ETL tool. I wonder if I could build it with Datasift?
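    To make that concrete, here is a sketch of the kind of aggregation I have in mind, assuming a hypothetical table of tweets that some upstream service (Datasift, say) has already scored for sentiment – the table and column names are invented for illustration:

    ```sql
    SELECT   CAST(created_at AS date) AS tweet_date,
             COUNT(*)                 AS tweet_count,
             AVG(sentiment_score)     AS avg_sentiment -- score in the range [-1, 1]
    FROM     dbo.XboxTweets
    WHERE    created_at >= '20130514' -- a week either side of the 21 May reveal
    AND      created_at <  '20130529'
    GROUP BY CAST(created_at AS date)
    ORDER BY tweet_date;
    ```

    Comparing avg_sentiment for the days before and after the announcement gives you the trend, which says far more than any individual tweet ever could.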


    Interestingly in between my writing the bulk of this blog post and then hitting publish on it I stumbled across How The Xbox Reveal Affected Microsoft's Flagging Reputation on Forbes.com. In it the author, Haydn Shaughnessy, opines:

    At the time of the reveal I … said Microsoft had a store of public trust that could help develop the Xbox One market, and should be nurtured across its other brands. Microsoft has managed the Xbox reveal very well, despite criticism the following day and more recently. Optimism and the reputation for innovation have been enhanced

    How did Mr Shaughnessy arrive at this conclusion? He did exactly what I suggest above, he aggregated and analysed sentiment from social media:

    I arrived at that conclusion by examining big data around Microsoft sentiment. The data is drawn from  tens of thousands of news and social media sources and is filtered by “emotion”.

    He also used a graph to show the trend over time. As they say, a picture tells a thousand words:

    image

    The chart depicts an uptick of optimism toward Microsoft after the XBox One announcement.

    Take a read of the article for more insights, it really is very interesting. I’m very impressed with what Mr Shaughnessy and Forbes have done here. They sought out evidence based on sentiment then analysed it to draw a conclusion rather than deferring to perceived popular opinion. The credibility of Forbes has risen in my consciousness (and perhaps with anyone reading this blog post too) and I’ll probably seek out Forbes articles in the future. Certainly in preference to the BBC anyway.

    @Jamiet

  • Using SnagIt to take screenshots when using a Surface

    Update, since I wrote this post an update to Surface means that simply pressing Fn + <Spacebar> fires up SnagIt. Job done.

    I recently bought a Surface Pro which so far I’m really enjoying, however one thing I’ve been having hassles with is taking screenshots for my blog posts. I’m a massive fan of TechSmith’s SnagIt for taking screenshots, however I’d always used it simply by hitting the Print Screen button on the keyboard – the Surface keyboard has no Print Screen button so this presented somewhat of a problem.

    I put a shout out on Twitter for keyboard shortcuts that would enable me to accomplish the same thing as pressing the Print Screen button. The suggestions that came back can be summarised as:

    • Run the on-screen keyboard and hit the Print Screen button there.
    • Use Windows’ built-in snipping tool (which has existed since Windows Vista).
    • Use OneNote’s screenshot facility, which can be accessed by pressing Win+S.
    • Use the hardware buttons: hold the Windows button on the front of the Surface and press the volume-down button (thanks to SQLGardner in the comments for reminding me of this one).

    The on-screen keyboard works fine and does indeed launch the SnagIt screen clipper exactly as pressing Print Screen on a real keyboard would, however I found it a bit fiddly as you have to go and open the keyboard, then close it again afterwards. Moreover the on-screen keyboard is, well, on the screen – and you don’t really want that if you’re taking screenshots. Windows’ snipping tool looked like a great option at first until I realised that it can’t be used to take screen clippings of modern (aka Metro) apps – it simply doesn’t work for those, and that was a showstopper for me. The OneNote option is a good one, especially as it has a very handy “copy to clipboard” button that appears once the screenshot is taken, however it does of course require you to have OneNote installed and that ain’t cheap! The hardware buttons option is quick and easy and great if all you want is a screenshot of the entire screen, but it doesn’t let you pick a portion of the screen and it doesn’t launch SnagIt.

    That summarises some options for taking screenshots, however only one of them allows me to carry on using SnagIt (which was my ultimate aim here) and that was the rather fiddly on-screen keyboard option. That’s when I remembered that SnagIt allows you to customise the hotkey that is used to take a screenshot. The default hotkey is (as explained in the first paragraph above) the Print Screen button, so all I did was change that default – I changed it to CTRL+ALT+P which, to my knowledge, isn’t used by anything else in the OS. This seems to be working fine so far. Here’s a screenshot taken just now on my Surface, using SnagIt, with some appropriate blur effects applied using SnagIt’s editor:

    image

    So there you have it, some options for taking screenshots on a Surface. As I said I’m a big fan of SnagIt, but if you do not use SnagIt then I have outlined some other options in the bullet points above.

    One final thing, if you *do* use SnagIt then TechSmith have made available a useful video that explains how to use it on Windows 8 at http://www.techsmith.com/snagit.html although this doesn’t cover what to do if your keyboard doesn’t have a Print Screen button, hence this blog post :)

    Hope this helps.

    @Jamiet

    UPDATE, Whaddya know. Just 4 days after I wrote this, Microsoft announced you'll soon be able to take a screenshot using just the keyboard. Fn + <spacebar> will do the job apparently! I doubt this will trigger SnagIt for me, but I live in hope!

    Another UPDATE, Fn + <spacebar> does trigger SnagIt. Happy days! :)

  • Stored procedure debugging in SSDT and LocalDB

    Stored procedure debugging is a feature that has existed in SQL Server Management Studio (SSMS) for some time (since before SQL Server 2005 I believe) yet in my experience not many folks make use of it, indeed many folks aren’t even aware it exists. In this post I shall try and shine a light on what I believe is an underused feature in the SQL Server toolset.

    When I have attempted to use debugging within SSMS in the past I have invariably run into problems. I can remember as recently as December 2010 trying to debug a stored procedure on a remote server and it required ports to be opened up on the firewall and other such hoopla – that’s not a pleasant experience when all you want to do is hit F10 and start stepping through your code. Happily in SQL Server 2012 the feature showed up in SQL Server Data Tools (SSDT) which, in short, is a Visual Studio-based tool aimed squarely at developers rather than DBAs, and the advantage this brings is something called LocalDB. LocalDB is quite simply a free, lightweight edition of SQL Server that gets installed with SSDT, lives on your local box, and whose intended use is to help you write code (I do need to blog about LocalDB in more depth at some point). Because LocalDB lives on your local machine, debugging T-SQL code there is as close as it’s going to get to “it just works”.
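    (As an aside, LocalDB can also be poked at from the command line. A sketch – instance names vary, the one shown here is just an example, so run the info command first to see what exists on your box:)

    ```
    rem List the LocalDB instances on this machine
    SqlLocalDB.exe info

    rem Connect to one of them with sqlcmd, e.g. an instance named "Projects"
    sqlcmd -S "(localdb)\Projects"
    ```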

    Let’s take a look at how this happens, I’m going to demonstrate by debugging stored procedures in an open source project I’m working on called SSIS RestartFramework, the source code (currently) lives at http://ssisrestartframework.codeplex.com/SourceControl/latest in case you want to download it and follow along at home.

    As with many features in SSDT the debugging feature is accessed via SQL Server Object Explorer (SSOX). From there one can right click on a stored procedure and select “Debug Procedure…”:

    image

    Selecting that option displays a dialog where one specifies values for the parameters of the stored procedure:

    SNAGHTMLd5abc

    Clicking OK launches a query window with some scaffolding code that calls the stored procedure with the specified values:

    image

    From there one can step through the code, executing each line in turn, using F10 & F11 just as if this were .Net code. Notice how, in the screenshot below, we are stepping through the code of the [RestartFramework].[PutETLJob] stored procedure which we stepped into (using F11) from the scaffolding code above.

    image

    Furthermore this screenshot depicts the Locals pane displaying the value of all in-scope variables and also the very cool ability to pin a variable (in this case @vRowCount) so that it “floats”, making it easy to see the value and also to see it changing.

    I should point out that you do not need to have your stored procedures in LocalDB in order to debug them using SSDT, you can connect to any instance of SQL Server from SSOX and then launch the debugging experience.

    This has been an overview of the T-SQL debugging features in SSDT. Let me know if it proves useful to you.

    @Jamiet

  • Increment a variable in the SET clause of an UPDATE statement [T-SQL]

    There’s a very nifty feature of T-SQL that, in my experience, not everyone knows about but is handy for generating surrogate key values as part of an ETL routine. I myself had to ask on Twitter what the syntax was because I couldn’t remember it (thanks to Matija Lah for reminding me) so I think it’s worth blogging in case I forget it again.

    To summarize the feature, you can use a variable in the expression of the SET clause in an UPDATE statement while updating it at the same time. The syntax is:

    @variable = column = expression

    where expression can include @variable.

    It will probably crystallise in your brain if you see it in action, so here is a little demo:

    /*Create a table and populate with data*/
    IF OBJECT_ID('tempdb..#DimProduct') IS NOT NULL
        DROP TABLE [#DimProduct]; --so the script can be re-run safely
    CREATE TABLE [#DimProduct] ([ProductId] INT, [ProductCode] VARCHAR(10));
    DECLARE @i INT = 0;
    INSERT [#DimProduct]([ProductId],[ProductCode]) VALUES (1,'qwerty'); /*Simulating an existing record*/
    WHILE @i < 10
    BEGIN
        /*Simulating some new rows for which we want to generate a surrogate key*/
        INSERT [#DimProduct]([ProductCode]) VALUES (REPLACE('qwerty@i','@i',CAST(@i AS VARCHAR(2))));
        SET @i += 1;
    END
    /*Take a look at the data*/
    SELECT * FROM [#DimProduct];

    /*Generate surrogate keys using an UPDATE*/
    DECLARE @MaxSurrogateKey INT = (
        SELECT ISNULL(MAX([ProductId]),0)
        FROM   [#DimProduct]
    );
    UPDATE [#DimProduct]
    SET    @MaxSurrogateKey = [ProductId] = @MaxSurrogateKey + 1 /****The clever bit****/
    WHERE  [ProductId] IS NULL;

    /*Take another look at the data*/
    SELECT * FROM [#DimProduct];

    Here’s the output:

    image

    Notice that our UPDATE statement has generated contiguous surrogate key values for all rows that previously had a NULL surrogate key. Pretty neat huh? Well, I think so anyway!

    @jamiet

  • AdventureWorks on Azure now hosted by Red Gate software

    In March 2012 I launched an initiative called AdventureWorks on Azure in which I hosted the AdventureWorks2012 sample database on Azure so that the SQL Server community at large could use it to get familiar with what Windows Azure SQL Database (formerly SQL Azure) had to offer. I asked the SQL community to support the initiative by donating money to keep it afloat and that community responded in spades, to the point where last month I was able to make a donation to the War Child charity on behalf of that community. In the interim period I added more sample data to the pot, making Northwind available along with data on Olympic medal tables down the years.

    Today AdventureWorks on Azure takes on a new guise: Steve Jones of SQL Server Central fame announced via a press release that hosting of the database is now to be taken on by his employer, Red Gate Software. Here’s what Steve had to say in today’s announcement:

    One of the goals of Red Gate Software is that we not only sell products to the SQL Server community, but that we also participate in the community. In line with our goals of participating in the community, we have agreed to host the AdventureWorks 2012 sample database on the Azure platform.
    Red Gate has worked to transition this set of databases to our own subscription and will assume the financial responsibility for making this available for the community to use.
    http://www.sqlservercentral.com/articles/Adventureworks/99290/

    This is great news. I no longer have to worry about finding the funds to keep the initiative afloat and the community can feel comfortable that this will be around for the foreseeable future, so thank you to Steve and Red Gate for this undertaking. Special thanks must also go to David Atkinson at Red Gate who first suggested that they take this on.

    I have high hopes that Steve & Red Gate will build upon this with more offerings for the community and equally I also hope I can stay involved somehow. Watch this space!

    @Jamiet

  • Reflections on SQLBits XI

    It’s been a couple of weeks since SQLBits XI happened in Nottingham and I thought I’d jot down a few thoughts for posterity.

    First the venue. I think it’s fair to say that the overall consensus was that the East Midlands Conference Centre was the best SQLBits venue there has been so far – I’ve been to more than half of them and based on my experience I would agree with that sentiment. The hotel especially was top quality – I was pleased with my room and the breakfasts were way better than the Travelodge standards I’ve become accustomed to at such events. Perhaps the great weather over the weekend helped lift the spirits but I’d say the bar has been set high; I hope future SQLBits conferences are at venues of a similar standard.

    You may have noticed that many of my blog posts over the past year have been related to SSDT and my SQLBits sessions this time around followed that trend. I delivered a session on the Friday, jointly with Craig Ottley-Thistlethwaite, entitled “Real World SSDT” and also delivered my first ever day-long pre-conference seminar on the Thursday entitled “SSDT Database Projects from the ground up”. Delivering a pre-con was slightly petrifying and I spent a large part of the previous three months preparing for it; I hope it was worth it. I had 30 attendees, which I was delighted with (especially given this was my first pre-con), and we established that the furthest anyone travelled was from Romania, though I’ll assume it wasn’t just so he could come to my pre-con :). I had some pleasant feedback via Twitter afterwards from some of the attendees and I’m crossing my fingers that the official feedback is in a similar vein.

    The session I did with Craig went fairly well I thought. I did the first 30 minutes, where I covered the basics of SSDT deployment at a high level, before Craig ratcheted the complexity up a few notches by demoing some interaction between SSDT, MSBuild & Git – really great stuff for those who like to get into the nuts and bolts of this stuff. I was delighted that Craig was willing to do the session with me (in fact it was his idea) as this was his debut public speaking gig and I’m hoping it’s given him the desire to do more in the future.

    On the Saturday my wife’s uncle, John Milne, came to the conference. John has been working in customer service for years but of late has decided that he wants a change of career and to that end has been studying an Open University course in IT. He told me he had particularly enjoyed the database-focused modules of his course and hence I suggested he come to the free Saturday of SQLBits to try and get a flavour of what the industry is all about and perhaps learn about some real-world experiences to add to his academic travails. By the end of the day John told me he’d had a fantastic time, learned a lot, and was hooked. Mission accomplished I’d say. John lives in Leeds so I introduced him to Chris Testa-O’Neill who helps to run the Leeds user group and John should be going along to some user group events in the near future – if you happen to meet him there please welcome him into the fold!

    All that remains for me to say is a massive thank you to the SQLBits committee who do such an amazing job, all voluntarily, in putting this all together. Thank you Simon Sabin, James Rowland-Jones, Chris Webb, Darren Green, Allan Mitchell, Tim Kent, Chris Testa-O’Neill & Martin Bell. I also want to thank all of the volunteer SQLBits helpers that worked tirelessly on the weekend to make sure the whole thing ran smoothly.

    Bring on SQLBits XII!

    @Jamiet

  • Get the SQLBits agenda on your phone, now and forever

    Regular readers of my blog might have realised that I am a huge advocate of subscribable calendars and the data format that underpins them – iCalendar. On 8th Feb 2012 I wrote a blog post entitled SQLBits now publishing all SQLBits agendas as an iCalendar in which I described how the SQLBits committee had published the agenda of the forthcoming SQLBits conference in iCalendar format, along with instructions on how one could view the agenda on their phone. Back then I said:

    …any changes to the SQLBits agenda (e.g. a room change) will automatically flow to your own calendar service and if you have that calendar service (e.g. Hotmail Calendar, Google Calendar) synced to your phone then the changes will automatically show up there too … That new SQLBits subscribable calendar lives at http://sqlbits.com/information/SQLBitsCalendar.ashx; note how it is not specific to a particular conference - subscribe to (don't import) that calendar and the agenda for future SQLBits conferences will automatically flow to you too.
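    (Under the hood there’s nothing exotic here – an iCalendar feed is just a text file of VEVENT blocks served over HTTP. A hypothetical session entry, with made-up UID, times and room, might look like this:)

    ```
    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//SQLBits//Agenda//EN
    BEGIN:VEVENT
    UID:example-session-id@sqlbits.com
    DTSTART:20130503T134000Z
    DTEND:20130503T154000Z
    SUMMARY:Real world SSDT
    LOCATION:Room 1
    END:VEVENT
    END:VCALENDAR
    ```

    A calendar service you subscribe with simply re-fetches the feed periodically, which is why changes flow through automatically.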

    Sure enough I took a look at the calendar on my phone today and saw this:

    image

    (That “Real world SSDT” session at 14:40 is being presented by Craig Ottley-Thistlethwaite and myself by the way. Hope to see you there!)

    This is the value of subscribing as opposed to importing. The agenda for next week’s conference has already flowed to my phone without my having to do anything. This isn’t even the same phone that I had a year ago; by subscribing to the calendar in my Outlook.com (née Hotmail) Calendar, those subscriptions are stored and flow onto any new phone as soon as I type in my credentials.

    I have stated before that I believe subscribable calendars to be a transformative technology and this is why: I only had to subscribe to the calendar once, and data that didn’t even exist back then simply flows into my calendar and thus onto my phone. If this interests you then maybe read how I think the same technology could be used to deliver BI data too at Thinking differently about BI delivery. And if you want to subscribe to the calendar yourself go and read the aforementioned blog post, that link again: SQLBits now publishing all SQLBits agendas as an iCalendar.

    @Jamiet

  • Creating your own SQL snippets in SSDT

    SQL Server Data Tools (SSDT) has a neat feature where you can add snippets into your scripts via the right-click context menu:

    image

    I’m finding it very useful indeed. The same feature exists in SQL Server Management Studio (SSMS) as well by the way:

    image

    One thing I really wanted to be able to do was create my own snippets for SSDT and I ventured to the SSDT forum to ask if it was possible. Turns out the answer is “yes” and Gert Drapers replied to my thread by providing an excellent run through of how to do it. What Gert’s post didn’t quite clarify is that if you follow his instructions then you have to manually edit the .snippet file that you created so that the language is set to SQL_SSDT:

    image

    (well, I had to do that anyway)
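    (For reference, .snippet files follow the standard Visual Studio code snippet XML schema; the Language attribute on the Code element is the bit that needs changing. A minimal example – the title and snippet body here are just placeholders:)

    ```xml
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <Title>My snippet</Title>
        </Header>
        <Snippet>
          <Code Language="SQL_SSDT">
            <![CDATA[SELECT 1;]]>
          </Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>
    ```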

    Once you do that you’ll be able to import the snippet into Visual Studio (Gert’s post shows you how) and thereafter your snippets will show up in the snippets menu:

    image

    Very handy indeed.

    @Jamiet

  • Data Warehouse modelling deliberations – foreign keys and unknown members

    Earlier today I posted the following question on Twitter:

    image

    Foreign keys in a data warehouse. Yes or no? This discussion is looming at work, i know my position but am interested in what others think.
    (https://twitter.com/jamiet/status/324215986729385986)

    Specifically, I wanted to know if people were in favour of creating foreign keys from their fact to their dimension tables or not.

    To say it prompted a good response is a bit of an understatement, 38 responses so far and counting. Here are some of those responses:

    image

    image

    image

    image

    image

    image

    image

    image

    image

    image

    image

    It certainly seems to be an emotive subject and it’s clear (to me) that there’s no correct answer, just lots of opinions. That’s a good thing. The majority of responders appeared to be of the opinion that a data warehouse should contain foreign keys and that is my position too. In this blog post I want to outline why I believe one should create foreign keys from a fact table to its dimension tables:

    Maintain integrity

    Of course, this is the main reason why foreign keys exist – to protect the integrity of your data. I see no reason not to use them for this purpose in a data warehouse. The main argument I see against them is that with a sufficiently robust ETL solution they shouldn’t be necessary. That is true, but I would counter with “how do you know that your ETL solution is suitably robust?” I don’t think it’s possible to anticipate every eventuality that may arise and for that reason I like the safety net that foreign keys provide. I liked Devin Knight’s response here: foreign keys breed confidence.
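    To illustrate (table and column names here are purely illustrative, not from any real project), the safety net is cheap to declare – one constraint per dimension:

    ```sql
    /*Illustrative only: a fact table declaring its relationships to two dimensions*/
    CREATE TABLE [dbo].[DimProduct] (
        [ProductKey] INT NOT NULL CONSTRAINT [PK_DimProduct] PRIMARY KEY
    );
    CREATE TABLE [dbo].[DimDate] (
        [DateKey] INT NOT NULL CONSTRAINT [PK_DimDate] PRIMARY KEY
    );
    CREATE TABLE [dbo].[FactSales] (
        [ProductKey]  INT NOT NULL
            CONSTRAINT [FK_FactSales_DimProduct] FOREIGN KEY REFERENCES [dbo].[DimProduct]([ProductKey]),
        [DateKey]     INT NOT NULL
            CONSTRAINT [FK_FactSales_DimDate] FOREIGN KEY REFERENCES [dbo].[DimDate]([DateKey]),
        [SalesAmount] MONEY NOT NULL
    );
    ```

    Any row arriving in the fact table with a surrogate key that doesn’t exist in the corresponding dimension is rejected outright, regardless of what the ETL did or didn’t check.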

    Communicate business logic

    When I join a project that has a database in place the first thing I do is try and understand the data model – to do that I go and look at the foreign keys in that database. Understanding the dependencies between entities is crucial in any data model and the best means of communicating those is via foreign keys. If I encounter a database that is bereft of foreign keys then my heart sinks a little.

    Performance

    Foreign keys can, in some circumstances, be beneficial in improving query performance. Take a read of A Check and Foreign Key Constraint Improves Query Performance by Sarvesh Singh or Do Foreign Key Constraints Help Performance? by Grant Fritchey.

    Code Generation

    I am a big fan of generating ETL code where possible and foreign keys can be invaluable when doing so.
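    For example (a sketch against SQL Server’s catalog views, which store exactly this metadata), the fact-to-dimension relationships that drive generated lookup code can be read straight out of the database:

    ```sql
    /*List each foreign key with its fact-side and dimension-side columns -
      handy raw material for generating ETL lookup code*/
    SELECT  fk.name                              AS ForeignKeyName,
            OBJECT_NAME(fk.parent_object_id)     AS FactTable,
            cp.name                              AS FactColumn,
            OBJECT_NAME(fk.referenced_object_id) AS DimensionTable,
            cr.name                              AS DimensionColumn
    FROM    sys.foreign_keys fk
    JOIN    sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
    JOIN    sys.columns cp ON cp.object_id = fkc.parent_object_id
                          AND cp.column_id = fkc.parent_column_id
    JOIN    sys.columns cr ON cr.object_id = fkc.referenced_object_id
                          AND cr.column_id = fkc.referenced_column_id;
    ```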

     

    As I said there is no correct answer here so if you have any opinions, either agreeing or disagreeing, I look forward to reading your thoughts in the comments below.


    Mark Stacey’s comment prompted an interesting digression into talking about surrogate keys for denoting unknown members and this is something I have strong opinions on too:

    SNAGHTML1ce05180

    In short, I don’t like the practice of using “–1” as the surrogate key for an unknown member. My reasoning is simple, I don’t like giving meaning to something that is supposed to be meaningless. How then should we indicate which is the unknown member? I propose a single-row table that includes a column for each dimension table, each with a foreign key to the unknown member in the respective dimension table.

    Moreover I don’t like the practice of starting surrogate key counters from 1; the first value available for the integer datatype in SQL Server is -2147483648, so why not start from that?
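    To sketch what I mean (assuming a hypothetical [DimProduct] dimension keyed on [ProductKey] – the names are illustrative), the unknown member gets an ordinary key at the bottom of the range, and a separate single-row table records which row it is:

    ```sql
    /*Illustrative only. Seed the unknown member at the bottom of the INT range*/
    INSERT [dbo].[DimProduct] ([ProductKey], [ProductName])
    VALUES (-2147483648, 'Unknown');

    /*A single-row table recording, per dimension, which row is the unknown member,
      so the key value itself carries no special meaning*/
    CREATE TABLE [dbo].[UnknownMembers] (
        [ProductUnknownKey] INT NOT NULL
            CONSTRAINT [FK_UnknownMembers_DimProduct] REFERENCES [dbo].[DimProduct]([ProductKey])
    );
    INSERT [dbo].[UnknownMembers] ([ProductUnknownKey]) VALUES (-2147483648);
    ```

    ETL code then looks up the unknown member’s key from [UnknownMembers] rather than hard-coding a magic value like -1.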

    I discuss both of these issues in much more depth at Considering surrogate keys for Unknown members.

    Again if you have any thoughts on these subjects please put them in the comments. If nothing else I find it both fun and educational to debate this stuff.

    @Jamiet

    Update: Chris Adkin posted a comment below that contained a link to Microsoft's own guidance on building data warehouses, where it is stated:

    "Many physical designs for data warehouses follow the star schema but do not completely specify the relationships between the fact and dimension tables, as mentioned earlier for foreign key constraints, for instance. Without the foreign key constraints explicitly specified, SQL Server must depend on heuristics to detect star schema query patterns."

    Chris' take on this:

    "So, assuming we are talking about a Kimball DW, there is most definitely value in using foreign key constraints as this provides a fail safe for the heuristics getting it wrong."

    Thanks Chris. 

  • Geoflow? Is that it, Microsoft?

    I didn’t travel to the SQL Pass Business Analytics conference this week but I keenly followed what was going on via the #passbac hashtag on Twitter. Seemingly the big announcement was Geoflow Preview for Excel 2013, an add-in for Excel that visualises data over space and time (read more at Public preview of project codename “GeoFlow” for Excel delivers 3D data visualization and storytelling). Geoflow certainly looks compelling at first glance, though I must say I found it rather strange that it got top billing given that Microsoft were talking about it at some SharePoint conference five months ago; nonetheless the keynote demo was apparently very impressive indeed.

    I think Geoflow looks great, I really do; the questions I immediately had about it were:

    • Can I share my Geoflow’d Excel workbooks and have people view them on SharePoint? Answer: No, there’s no SharePoint collaboration story.
    • Is Geoflow part of Power View? Answer: No, it’s a separate installation.
    • OK, so I have to install it. I presume then that it’s available in the Office App Store, given that’s the new model for distributing Excel add-ins? Answer: No, you have to download it from Microsoft’s download site.

    So the big reveal from Microsoft at this conference was an Excel add-in that does something very very cool but had already been announced, is only a preview, doesn’t fit with Microsoft’s BI collaboration strategy, doesn’t use their modern distribution platform and isn’t part of their Excel-based data visualisation tool. Well thank god I didn’t stump up the cost of travel, accommodation, loss of income and time away from the family for that! Doubtless there were a lot of other good reasons to go to the conference but I would have been going with high expectations of news from Microsoft that is going to be compelling and help me sell Microsoft’s BI offering to my clients – Geoflow doesn’t do that, not by a long chalk.


    What I was hoping for, nay expecting, was a concrete announcement regarding Microsoft’s mobile BI strategy. We first saw Power View demonstrated on an iPad at the PASS 2011 conference and I assumed that in the intervening eighteen months they might have built something we could actually install and play around with. Apparently not.

    Microsoft are getting killed in this area. At my current client all the management folk walk around with iPads glued to their hands – this is the tool on which they consume information and Microsoft doesn’t have anything for them. I was working for a client two years ago that had just invested in a product called RoamBI because it enabled them to view Reporting Services reports on an iPad. Two years ago, for pity’s sake, and Microsoft haven’t released anything mobile-BI-related since!

    @Jamiet
