THE SQL Server Blog Spot on the Web


Jamie Thomson

This is the blog of Jamie Thomson, a freelance data mangler in London

  • Clone an Azure VM using Powershell

    In a few months’ time I will, in association with Technitrain, be running a training course entitled Introduction to SQL Server Data Tools. I am currently working on putting together some hands-on lab material for the course delegates and have decided that, in order to save time asking people to install software during the course, I am simply going to prepare a virtual machine (VM) containing all the software and lab material for each delegate to use. Given that I am an MSDN subscriber it makes sense to use Windows Azure to host those VMs, since it will be close to, if not completely, free to do so.

    What I don’t want to do, however, is separately build a VM for each delegate; I would much rather build one VM and clone it for each delegate. I’ve spent a bit of time figuring out how to do this using Powershell and in this blog post I am sharing a script that will:

    1. Prompt for some information (Azure credentials, Azure subscription name, VM name, username & password, etc…)
    2. Create a VM on Azure using that information
    3. Prompt you to sysprep the VM and image it (this part can’t be done with Powershell so has to be done manually, a link to instructions is provided in the script output)
    4. Create three new VMs based on the image
    5. Remove those three VMs


    The script has one pre-requisite that you will need to install: Windows Azure Powershell. You also need to be a Windows Azure subscriber which, if you're reading this blog post, I'm assuming you already are.
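    The heart of such a script is just a handful of Azure cmdlets. As a rough sketch of steps 2, 4 and 5 (this is not the actual script; the service, image and credential names are placeholders, and the cmdlet/parameter names are from the Windows Azure Powershell module of this era so may differ in later versions):

    ```powershell
    # Log in and pick the subscription to work in
    Add-AzureAccount
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Step 2: create the master VM that will later be sysprepped and imaged
    New-AzureQuickVM -Windows `
        -ServiceName "myservice001" `
        -Name "ssdt-master" `
        -ImageName "<a Windows Server gallery image name>" `
        -AdminUsername "labadmin" `
        -Password "<password>" `
        -InstanceSize Small

    # Step 4: after the manual sysprep/capture step, stamp out a clone per delegate
    "001","002","003" | ForEach-Object {
        New-AzureQuickVM -Windows `
            -ServiceName "myservice001" `
            -Name "ssdt-delegate$_" `
            -ImageName "ssdt-master-image" `
            -AdminUsername "labadmin" `
            -Password "<password>" `
            -InstanceSize Small
    }

    # Step 5: remove the clones again
    "001","002","003" | ForEach-Object {
        Remove-AzureVM -ServiceName "myservice001" -Name "ssdt-delegate$_"
    }
    ```

    The point to note is that cloning is simply a matter of creating new VMs whose -ImageName is the captured image rather than a gallery image.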

    Simply download the script and execute it within Powershell; assuming you have an Azure account it should take about 20 minutes to execute (spinning up VMs and shutting them down isn’t instantaneous). If you experience any issues please do let me know.

    There are additional notes below.

    Hope this is useful!



    • Obviously there isn’t a lot of point in creating some new VMs and then instantly deleting them. However, this demo script does provide everything you need should you want to do any of these operations in isolation.
    • The names of the three VMs that get created will be suffixed with 001, 002, 003 but you can edit the script to call them whatever you like.
    • The script doesn’t totally clean up after itself. If you specify a service name & storage account name that don’t already exist then it will create them, however it won’t remove them when everything is complete. The created image file will also not be deleted. Removing these items can be done in the Windows Azure management portal.
    • When creating the image, ensure you use the correct name (the script output tells you what name to use):


    • Here are some screenshots taken from running the script:



    • When the third and final VM gets removed you are asked to confirm via this dialog:


    Select ‘Yes’

  • SQL Relay 2013R2 – Bigger, faster, SQL-ier

    Fresh from a successful UK tour in June the SQL Relay team are at it again. This time there are ten venues all around the UK between 11th and 29th November.


    Head to the SQL Relay website to sign up. I’ll be at the London one, hope to see you there!


  • SSDT gotcha – Moving a file erases code analysis suppressions

    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it because the paths stored in the suppression file are no longer valid, but you probably won’t be aware of it until it happens to you. If you don’t know what code analysis is or you don’t know what the suppression file is then you can probably stop reading now, otherwise read on for a simple short demo.

    Let’s create a new project and add a stored procedure to it called sp_dummy.
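    (The body of the procedure is immaterial here; it is the sp_ prefix on the name that matters. If you want to follow along, something as trivial as this will do:)

    ```sql
    CREATE PROCEDURE [dbo].[sp_dummy]
    AS
    BEGIN
        -- The body is irrelevant; static code analysis will flag
        -- the sp_ prefix on the procedure name
        SELECT 1 AS dummy;
    END
    ```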


    Naming stored procedures with an sp_ prefix is generally frowned upon and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties:
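    (Ticking that box simply writes an MSBuild flag into the .sqlproj file. If you prefer editing the project file directly it looks something like this; the property name is as I recall it from SSDT project files, so do verify against your own .sqlproj:)

    ```xml
    <PropertyGroup>
      <!-- run static code analysis on every build -->
      <RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>
    </PropertyGroup>
    ```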


    A subsequent build causes a code analysis warning as we might expect:


    Let’s suppose we actually don’t mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it:


    That causes a suppression file to get created in our project:


    Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now, if we simply move the file to a new folder within our project, notice that the suppression we just created gets removed from the suppression file:


    As I alluded to above, this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant, but you’re probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of.



  • Enforce SSIS naming conventions using BI-xPress

    A long long long time ago (in 2006 in fact) I published a blog post entitled Suggested Best Practises and naming conventions in which I suggested a bunch of acronyms that folks could use to prefix object names in their SSIS packages, thus allowing easier identification of those objects in log records. Here is a sample of some of those suggestions:


    If you have adopted these naming conventions (and I am led to believe that a bunch of people have) then you might like to know that you can now check for adherence to them using a tool called BI-xPress from Pragmatic Works. BI-xPress includes a feature called the Best Practices Analyzer that scans your packages and assesses them according to rules that you specify. In addition, Pragmatic Works have made available a collection of rules that adhere to the naming conventions I specified in 2006.


    You can download this collection however I recommend you first read the accompanying article that demonstrates the capabilities of the Best Practices Analyzer. Pretty cool stuff.


  • SSIS ReportingPack v1.1.1.0 – Execution Stats visualised in a Gantt chart via usp_ssiscatalog

    There are many SSRS reports in SSIS Reporting Pack and the one I’ve perhaps spent most time on is the executable_duration report which provides this visualisation:


    I happen to think this is really useful because it shows the start time and duration of each executable relative to all the other executables in an execution. It does it by leveraging the execution_path property of an executable (which I am on record as saying is, in my opinion, the best feature in SSIS 2012).

    I have wanted to provide a similar visualisation in usp_ssiscatalog for some time and today I have finally gotten around to implementing it. With this new release one can call up the executable stats using this query:

    EXEC usp_ssiscatalog
        @a                      =   'exec'
    ,   @exec_execution         =   0
    ,   @exec_events            =   0
    ,   @exec_errors            =   0
    ,   @exec_warnings          =   0
    ,   @exec_executable_stats  =   1

    And the returned dataset will include a new column called Gantt. This screenshot demonstrates the value that this provides:


    On the left we have the execution_path of each executable and on the right is a bar indicating its start time and duration relative to all the other executables. Collectively these bars provide a nifty Gantt-chart-like view of your execution, thus allowing you to easily identify which executables are taking up the most time in your executions.

    In addition there is a similar view for the list of executions, which can be obtained by executing:

    EXEC usp_ssiscatalog @a='execs'

    That returns a new column, relative_duration:


    which simply provides a bar for each execution indicating how long it took relative to all the other executions.

    I hope these new visualisations in usp_ssiscatalog prove useful to you. If they do (or even if they don’t) please let me know, I love getting feedback and I don’t get nearly as much as I would like for SSIS Reporting Pack. Get the goods from SSIS Reporting Pack v1.1.1.0.


  • Learn from me about SSDT in London, March 2014

    Microsoft released SQL Server Data Tools (SSDT) along with SQL Server 2012 in the Spring of 2012. Since then I’ve noticed an upward tick in both the number of organisations that are using SSDT and the number of questions that are getting asked about it on forums. There is some confusion about what SSDT actually is (Microsoft hasn’t helped there), why people should be using SSDT and, most importantly, how to make best use of it.

    It’s clear that people want to learn more about SSDT so I have joined forces with Technitrain to offer a 2-day training course in London in March 2014 called Introduction to SQL Server Data Tools.



    The course will cover:

    Day 1

    • SSDT Positioning
    • IDE Tour
    • Connected Database Development
    • Declarative, offline, database development
    • Publishing
    • Continuous Integration (CI) and Continuous Deployment

    Day 2

    • Data Publishing
    • Refactoring
    • Database unit testing
    • References and composite projects
    • Database Drift
    • Code analysis

    If this sounds like your bag then please sign up on the Technitrain website.


  • SSISReportingPack with usp_ssiscatalog v1.1 available – first release with integrated installer

    Since December 2010 I have been maintaining a suite of reports that can be used atop the SSIS Catalog in SSIS 2012; I called it the “SSIS Reporting Pack”. While building those reports I realised that there was also a need to make it easier to issue common but ad-hoc queries against the SSIS Catalog, and that realisation gave rise to a stored procedure called sp_ssiscatalog which I first released in November 2012. Both of these pieces of open source software are hosted in the same place, however they have always been distributed separately, with separate install mechanisms for each.

    Until today that is. As of version 1.1 a single installer exists that will provide all the artifacts required for both the reports and usp_ssiscatalog:


    Head over to the home page to get the goods. I have updated it in an attempt to make the installation instructions clearer for both the reports and usp_ssiscatalog.


    You probably want to know what new features are in this release. Well, not all that much if I’m honest. Most notable are:

    • A fix for a user-reported issue where the reports would fail to deploy due to the error “The definition of the report '/folder-project-package' is invalid. The value expression for the text box 'breadcrumbFolder' refers to a non-existing report parameter 'FolderId'.”
    • The eagle-eyed amongst you may have noticed that usp_ssiscatalog is a new name for this stored procedure; it used to be called sp_ssiscatalog. The reason for the change is simply to follow the best practice for naming SQL Server stored procedures, whereby any such object in a user database should not be prefixed “sp_”. The previous name came about because the original intention was for this stored procedure to live in [master]; as it transpired this would not have been a wise decision, however the name of the stored procedure had already been established. Any call to sp_ssiscatalog will still work for the foreseeable future, however I recommend that you change to use usp_ssiscatalog instead if you can.

    The big story here is that the reports and usp_ssiscatalog are now delivered in the same installer and to prove it here is a screenshot of the installer:


    This lays a foundation for future enhancements that I want to make. Watch this space.


  • An ETL joke #fridayfun

    I received a text from my bank recently where they appear to have mistaken me for someone with a rather strange first name. As an ETL developer by trade there really was only one possible response:



    This one amused me given the hype around Hadoop at the moment:


    Have a good weekend folks!


  • Declarative ETL

    T-SQL is a declarative programming language. This means that a T-SQL developer defines what (s)he wants the query to do, not how to do it. The secret sauce that turns the “what” into the “how” is a piece of software called the query optimiser and there are tomes dedicated to how one can make the query optimiser dance to their will. Generally the existence of a query optimiser is a good thing as in most cases it will do a better job of figuring out the “how” than a human being could*.

    SSIS dataflows, on the other hand, are an imperative programming language**; the data integration developer builds a data pipeline to move data exactly as (s)he desires it to happen. In SSIS there is no equivalent of the query optimiser.

    I’ve often pondered whether there is an opportunity for someone to build a declarative data pipeline, that is, a method for a data integration developer to define what data should be moved rather than how to move it. Over the last few months I’ve come to the realisation that Power Query (formerly known as Data Explorer) actually fits that description pretty well. If you don’t believe me go and read Matt Masson’s blog post Filtering in Data Explorer in which he talks about query folding:

    [Power Query] will automatically push filters directly to the source query

    Even though we selected the full table in the UI before hiding certain columns, we can see the source query only contains the columns we want

    “filters” aren’t the only type of query folding that Data Explorer can do. If you watch the queries, you’ll see that other operations, such as column removal, renaming, joins, and type conversions are pushed as close to the source as possible.

    Let’s not idly dismiss this. Query folding is very impressive and is (I think) analogous to the predicate pushdown that is done by the query optimiser (for more on predicate pushdown read Nested Views, Performance, and Predicate Pushdown by Dave Wentzel). Does Power Query then have the equivalent of a query optimiser? I would say yes, it does, although it’s more accurate to say that it’s the underlying query language, called M, for which this query optimiser exists!

    So, we have a declarative data integration language which is surfaced in Power Query and hence can only (currently) push data into an Excel spreadsheet. Imagine if M were extended into a fully-fledged data integration tool that could move data between any heterogeneous sources; would that constitute “declarative ETL” and is there a need for such a tool?

    I’ll leave that one out there for you to ponder. Comments would be most welcome.


    *OK, you might debate whether the query optimiser can do a better job than a human but let’s save that for another day.

    **We can debate whether or not SSIS dataflows are a programming language but again, let’s save that for another day. For the purposes of this discussion just go with me, OK?

  • A fool leaves their computer unlocked. What to do next?

    A light hearted post for a Friday…

    A couple of days ago on Twitter I asked…


    The replies flew in thick and fast; there are some fairly vindictive people out there in the Twitter community, let me tell you, so I thought it’d be fun to show you a sample! Let’s go with a couple of harmless, non-invasive suggestions first from Jens Vestergaard and Randi Borys:



    The screenshot one I particularly like. Those two are pretty tame though, if you want to get really nasty then try this suggestion from Brent Ozar:


    Oh my word! In true “embrace and extend” style though Brent and Rob Farley built upon the idea:



    Nasty!!! Mr Farley upped the cruelty ante though with his next suggestion which I think might be my favourite:


    Do that to a green SQL developer and soon they’ll be thinking they’re in the seventh circle of Microsoft-tool-hell! Not to be out-done Brent replied with the ever popular:


    I can think of a few variations on that theme – the hosts file might be one thing you might want to target.

    Want to make someone really angry? Kenneth Nielson has a suggestion:


    If humiliation tactics are your bag Andy P has one for you:


    And for the time-poor amongst you Koen Verbeeck offers:


    Lastly if pure criminality is your bag, as it clearly is for Rowena Parsons, simply:


    Check out all of the suggestions on Twitter and if you have any suggestions of your own please stick them in the comments below!


  • Is DQS-in-the-cloud on its way?

    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll:

    March 2012 – December 2012 (10 months), Redmond, WA

    Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization.

    Sounds as though Data-Quality-Services-in-the-cloud (which I spoke of as being a useful addition to Microsoft's BI portfolio in my previous blog post Thoughts on Power BI for Office 365) might be on its way some time in the future. And what's this “SQL wide reorganization”? Interesting stuff.



  • Thoughts on Power BI for Office 365

    Three months ago I published a fairly scathing attack on what I saw as some lacklustre announcements at Microsoft’s BI conference. In that blog post, Geoflow? Is that it, Microsoft?, I opined that their announcement of Geoflow wasn’t particularly earth-shattering and that what they really should have been talking about was a mobile BI story. Apparently I wasn’t the only one who thought that, judging by the majority of the 52 comments that the post received. Here are a few choice quotes:

    Agree wholeheartedly JT.  Given Sharepoint is the BI collaboration tool I would expect Geoflow to be integrated. I am also not a fan of separate tools for everything.  PV + Geoflow would be good/Better. – Allan Mitchell

    Given the hype, I'd have expected a lot more from MS on spatial visualization. Between pie charts on maps in PowerView, no HTML5 support for PowerView, and the whole "PowerView can't be used with the other SharePoint components, it only really works fullscreen and can't accept parameters" story, I really think MS has lost their way on this. – Mark Stacey

    Two weeks after the Data Explorer release, I did a demo at a SSUG with "teaser trailers" of GeoFlow and machine learning. I mentioned #passbac and boldly voiced that I expected mobile to be announced because I was so sure that PASS could not launch a new conference without an announcement of this scale and impact ... GeoFlow does not measure up. It's a "cool" product, but doesn't help the MS platform anywhere close to the gaping hole called mobile. – Toufiq Abrahams

    In the time it's taken Microsoft to release nothing useful that's mobile BI related (Geoflow looks cool but its practical use within the BI stack is virtually nil) and at the same time manage to piss off the Excel community by effectively restricting the adoption of PowerPivot/Power View, most of the competition have not only released BYOD-friendly mobile BI apps but the most forward thinking have completely re-written their offerings in HTML5 (QlikView .Next for example). So still we wait, and at every pre-sales meeting where the "Can our execs use MSFT BI on their iPads?" question comes up (80% of the time now), I have to shift uncomfortably in my seat and say "No, and what's more, even as a MSFT partner, we have no clue when we will be able to say Yes!" Shambolic. – Will Riley

    And so having spent 2 years pushing the SharePoint message, despite huge reservations from the partner community, here we have a release of a front end tool that isn't supported through SharePoint. You want to put the fun back into BI? It might be fun producing flashy but ultimately pointless demos for serial conference attendees but it certainly isn't fun trying to position a Microsoft BI strategy to enterprise customers. And how many different but overlapping tools do we need? I now need two monitors side by side to view my Excel ribbon. CONSOLIDATION please. – Mick Horne

    Another Partner here agreeing that it does not help us sell a coherent enterprise BI strategy.  This is what we need to see from MS far more than yet another tool. – Calvin Ferns

    You get the idea!

    Well, better news cometh. Three months after that post, Microsoft have finally announced something that is intended to alleviate many of the concerns expressed above – it’s called Power BI for Office 365 and it was announced at Microsoft’s Worldwide Partner Conference. Power BI for Office 365 brings together the existing disparate tools PowerPivot (now renamed Power Pivot), Power View, Data Explorer (now renamed Power Query) and Geoflow (now renamed Power Map) into a single offering, incorporates an online collaboration portal and also (finally) mobile apps for iPad and Windows 8/RT (no word on Android or Windows Phone yet).

    As a BI/data integration practitioner in the Microsoft space I’m excited about this, I really am. I’ve had some illuminating Twitter conversations in recent days with people that clearly have a different opinion, and that’s OK. By the end of this post I hope I’ve explained exactly why I’m excited about Power BI.

    Branding and focus

    The first important takeaway is that they’re coalescing under a single moniker, a single brand – “Power”. Power BI, Power Pivot, Power Query, Power View, Power Map (not Power Maps as some commentators would have you believe). Whether or not you like the word and its connotations, it’s great to see consistency in the naming. Also, as confirmed by Kasper de Jonge, there is now a space between the words (i.e. Power Pivot, not Powerpivot) which I think is also a good thing – consistency is important.

    Above all I sense some focus around a coherent offering here, whereas beforehand Microsoft’s BI offering seemed rather rudderless and haphazard – that should be good news to those Microsoft partners quoted above that lament the current state of affairs. Will Power BI for Office 365 have frustrating limitations and bugs? Sure it will, but I am optimistic that it has sufficient priority that those things will be addressed and not left to fester.


    Most of the articles I’ve seen relating to Power BI up to now seem focused on the four aforementioned Excel add-ins but, to me, the most interesting aspect of this suite is the BI portal portion. Frequent readers of my blog may have noticed a theme running through a lot of my posts down the years: that of sharing pointers to data as opposed to copies (I hate getting email attachments, for example, and I always espouse subscribing to calendars rather than importing them). The BI portal portion of Power BI for Office 365 speaks to this theme by hosting queries that are built in, and published from, Power Query so that they can be consumed by other people.

    The obvious point here is that by sharing a query one is not sharing a copy of the data but a pointer to the data, and that is very exciting to me. Moreover it’s only the data that is being pointed at; presentation and further manipulation thereof is left as an exercise for the consumer and again, that’s something I really like. This is not a panacea for the proliferation of data silos (mostly in Excel and Access) that are littered throughout organisations but I am hopeful that it’s a step in the right direction.

    I am also hopeful that the BI Portal becomes a focal point for related offerings. For example, I would love to see SQL Server Master Data Services (MDS) offered through the BI Portal and here’s an anecdote that explains why.

    Only yesterday on my current project we were discussing a data hierarchy that is an important dimension of the organisation – stock status (my client is an online retailer so stock status is rather crucial). It transpired that the hierarchy that the business used to classify stock status was not actually stored anywhere other than in people’s heads and hence, if we were to provide data against that stock status hierarchy, we would need to store the hierarchy somewhere. This is a SQL Server shop so the idea of using MDS arose but was quickly discounted as the provisioning of new infrastructure was considered too big an undertaking to hold what amounts to a lookup table (let’s be honest, whatever you dress MDS up as it’s still basically a bunch of lookup tables).

    This is a perfect scenario for MDS and I am hopeful that Microsoft are considering offering MDS as a hosted service because if it were it would have been a no-brainer for us to use – we would not have to provision hardware and do installations, all that would be required would be to walk up with a credit card. MDS as a software-as-a-service (SaaS) offering through the Office 365 BI Portal would be, in my opinion, compelling (my current client is an Office 365 customer by the way). The same applies to Data Quality Services (DQS) as well, that’s another service that would benefit from being offered on a SaaS basis.

    What else might benefit from being offered through a single BI Portal? Well if we look outside the four walls of Microsoft there are plenty of companies that might welcome the opportunity to provide their wares through Office 365 – Tableau, Microstrategy, Business Objects (SAP) and Predixion are some that spring to mind. This is rather future-thinking I admit but I do think there’s real potential here.

    I’ve discussed Power BI for Office 365 with some folks who opine that this Power BI announcement is little more than a renaming of existing Excel add-ins but in my opinion the provision of a BI Portal for collaboration is the big story here.

    Data Governance

    Another big take away (which has not been apparent from some of the early demos but which I have gleaned from early exposure to Power BI) is that data stewardship is a big piece of this puzzle. Data stewards will be relied upon to:

    • grant/revoke access to data sources, reports and queries
    • curate metadata/annotations on the Power Pivot model and Power Query queries in order to drive the Natural Query capabilities and surfacing in Power Query search results
    • monitor usage via the Data Steward Portal and thus allow stale and/or unused data to be expired
    • manage data refreshes (including installation of the Power BI on-premises agent that provides refreshes from on-premises data sources)
    • evangelise the existence and use of data available through Power BI

    amongst other things.

    It is clear to me that in sufficiently large organisations this "data steward" role is going to be a full-time one and, again in my opinion, Power BI implementations could live or die by the diligence of the nominated data steward(s). I view this as a good thing – I’ve seen countless organisations where there is a chronic lack of governance applied to data and consequently data quality suffers – anything that brings the issue of data governance to the fore is a boon. I envisage that in a few years organisations will have a dedicated "data stewardship" function; data stewardship will become a discipline all of its own and an ecosystem will grow up around it (tooling, forums, training courses, conference tracks etc…). As I alluded, I think this is a good thing – organisations that properly govern their data will be leaders in the digital economy. It’s also a great opportunity for people like myself who provide consultancy services – helping organisations to make better use of their data is where I want to add value.

    Microsoft will not of course talk about this explicit need for a Data Steward as it detracts from the self-service mantra that they are espousing – at the end of the day Microsoft want to sell licenses, and I’m fine with that. The community can fill the void here with appropriate messaging and, again, I see this as an opportunity. Power BI itself is not going to solve the data governance problems that proliferate in organisations today (only human beings can do that) but I harbour hopes that it can raise awareness of the issue and enable change.

    Office 365

    Ah, the Office 365 conundrum. If you’re not using Office 365 then there’s no Power BI for you. Yet. Personally this doesn’t bother me; my most recent two clients are both Office 365 customers (as am I) and the notion of SaaS (which Office 365 quite clearly is) resonates with me a lot better than on-premises notions of procuring software/hardware then having to install/configure it all. Here’s my money, now get out of the goddamn way – I much prefer that way of working. Evidently a lot of folks disagree and would rather run their own commodity services like email & IM. I can understand that and if people feel the need to do so, good luck to ‘em – like it or not though it’s clear that Microsoft are pushing hard on Office 365 and I don’t imagine that changing any time soon.

    HTML5 Power View

    There was a rumour flying around on the day of the Power BI announcement that Power View had been rewritten to use HTML5 instead of Silverlight. Well, that’s partially true; HTML5 variants of Power View are coming but, until you hear any different, Power View in SharePoint and in Excel is still going to be using Silverlight. I’m looking forward to the day when Power View runs everywhere without Silverlight as this presents a great opportunity for Power View to proliferate. I recently tweeted:

    PowerView needs to be ubiquitous. Wherever there's data(SP, PerfMon, SSMS, Dynamics, Eventvwr, etc) should be a "view in PowerView" button

    I really hope this happens. Power View is a great ad-hoc visualisation tool – I hope it spreads like a weed throughout Microsoft products. Moreover I hope it spreads through the web too; Microsoft already provide the Excel interactive button that can turn any HTML table into an Excel workbook, so how about a similar button that turns any HTML table into a Power View report? (You can have that idea for free, Microsoft. You’re welcome!)

    I have quibbles as well

    Not all that’s been announced for Power BI for Office 365 is quite as I would like it to be. I spoke in glowing terms above about the ability to share Power Query queries via the BI Portal; the downside is that the consumer also needs to have Power Query (and Excel) installed – Power Query is both the query builder and the query viewer. In the tech preview of Data Explorer that was made available two and a half years ago that wasn’t the case: queries could be shared and were then available as an OData feed, which meant that anything that could consume OData (which, these days, is rather a lot) could make use of them. Indeed I built my own query that made Google Reader subscriber counts available as an OData feed. I am passionate about making data available in a manner that doesn’t require the consumer to be using any particular software, so I harbour high hopes that this feature returns to Power BI for Office 365 in the future and that Power Query can live in the feeds ecosystem that I believe is coming.

    I’m still not happy that Power Map is a separate thing – I am still of the opinion that it should simply be built into Power View.

    There is currently no developer story: Power BI for Office 365 isn’t extensible. That would change if queries were exposed as OData.


    That’s my take on what has been announced for Power BI for Office 365. I see this as a more focused approach to BI from Microsoft after years of shooting in the dark (anyone remember ProClarity??). I’m looking forward to exploring this when it becomes available next month.


  • Event Time Announcer – simple and easy. Please use it!

    I’m a big proponent of using online services to simplify our lives, which is why I’m a huge advocate of subscribable calendars (aka iCalendar), something I talk about frequently on this blog. In a similar vein I’ve come across a service called Event Time Announcer. The premise is incredibly simple but very useful: you specify a date and time and it will show you what the time will be in various cities of the world at that exact point in time, and you can then share those results using a URL. For example, this URL shows the start and end time of a meeting that I concocted for various cities around the world:


    I constantly have to wrestle with meeting invites littered with meaningless (to me) time zone codes like {ET, CET, PST, PDT} so a service like Event Time Announcer could be invaluable, if only people could be persuaded to use it. So, I implore you (and by “you” I mainly mean people in Microsoft), if you’re sending me times in your own time zone please have the decency to provide a link from Event Time Announcer so I can fathom what they actually mean. It would be greatly appreciated!


  • sp_ssiscatalog v1.0.4.0 available now with new feature - view most recent failed execution [SSIS]

    Development of sp_ssiscatalog continues apace with a new release today (4th July 2013). In this release I have enabled a new action called ‘exec_failed’ which can be simply described as “Show me the most recent failed execution”.

    The syntax is very simple:

    exec sp_ssiscatalog @action = 'exec_failed'
    exec sp_ssiscatalog @a = 'exec_failed' /*shortened form*/

    That command will dig into the SSIS catalog to find the most recent failed execution and display the same information that would be displayed, by default, for any execution.


    You can use the existing parameters @exec_warnings, @exec_errors, @exec_events, @exec_executable_stats and @exec_events_packages_excluded to modify the resultsets that are returned by default. Hence, if you want to simply view only the errors it’s (assuming here that passing 0 suppresses the corresponding resultset):

    exec sp_ssiscatalog @a = 'exec_failed', @exec_warnings = 0, @exec_events = 0, @exec_executable_stats = 0


    The main aim here was to provide easy access to information about the most recent failed execution, but I implemented it in such a way that it’s easy to return information on executions of any status, hence all of the following can be used:

    exec sp_ssiscatalog @action = 'exec_created'
    exec sp_ssiscatalog @action = 'exec_running'
    exec sp_ssiscatalog @action = 'exec_canceled'
    exec sp_ssiscatalog @action = 'exec_failed'
    exec sp_ssiscatalog @action = 'exec_pending'
    exec sp_ssiscatalog @action = 'exec_ended_unexpectedly'
    exec sp_ssiscatalog @action = 'exec_succeeded'
    exec sp_ssiscatalog @action = 'exec_stopping'
    exec sp_ssiscatalog @action = 'exec_completed'

    Download the goods from here. Steps to install are at Installation Instructions.

    It’s worth pointing out that this feature was implemented solely because it was requested by someone going by the name mbourgon, who left a comment on my blog post Introducing sp_ssiscatalog v1.0.0.0. I thought it was a great idea so I put it in. If you have an idea for something you’d like to see in sp_ssiscatalog then let me know, I might just put that in too!


  • June 2013 release of SSDT contains a minor bug that you should be aware of

    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT and given the problems that it created yesterday on my current gig I thought it prudent to write this blog post to inform people of it.

    I’ve built a very simple SSDT project to reproduce the problem that has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]:


    The two tables have exactly the same definition; both have a single column called [Id] of type integer:

    CREATE TABLE [dbo].[Table1]
    (
        [Id] INT
    )

    My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results:

    CREATE PROCEDURE [dbo].[Procedure1]
    AS
        SELECT  t1.[Id]
        FROM    Table1 t1
        INNER JOIN Table2 t2
            ON    t1.Id = t2.Id
        ORDER BY Id

    Now if I create those three objects manually and then execute the stored procedure, it works fine:


    So we know that the code works. Unfortunately, SSDT thinks that there is an error here:


    The text of that error is:

    Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Table1].[Id] or [dbo].[Table2].[Id].

    It’s complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point “OK, just stick a table alias into the ORDER BY predicate and everything will be fine!” That’s true, but there’s a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine; he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this pattern. Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic where we had to run around the department to our developers (of which there are many) ensuring that none of them upgraded their SSDT installation if they wanted to carry on being productive for the rest of the day.
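    To make that workaround concrete, here is a sketch of the repro procedure with the table alias applied (the SELECT list shown is my assumption; the important part is the qualified ORDER BY):

```sql
CREATE PROCEDURE [dbo].[Procedure1]
AS
    SELECT  t1.[Id]
    FROM    Table1 t1
    INNER JOIN Table2 t2
        ON    t1.Id = t2.Id
    ORDER BY t1.Id  -- alias added; SSDT no longer considers the reference ambiguous
```

    With the alias in place the June 2013 build of SSDT resolves the reference and the project builds cleanly again.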

    Also bear in mind that as soon as a new drop of SSDT comes out the previous version instantly becomes unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version.

    Just thought you should know! In the grand schema of things this isn’t a big deal as the bug can be worked around with a simple code modification but forewarned is forearmed so they say!

    Last thing to say, if you want to know which version of SSDT you are running check my blog post Which version of SSDT Database Projects do I have installed?

