THE SQL Server Blog Spot on the Web


The Bit Bucket (Greg Low): IDisposable

Ramblings of Greg Low (SQL Server MVP, MCM and Microsoft RD) - SQL Down Under

  • SSMS SQL Server 2014 CU4 Issue: The type initializer for ‘PropertyMetadataProvider’ threw an exception


    When I installed CU4 for SQL Server 2014, I started to receive an error in SSMS (SQL Server Management Studio) every time I connected to a database server:


    It was interesting that it was a type that wasn’t being found in Microsoft.SqlServer.SqlEnum. I presumed it was a problem with a type being missing in that DLL and that I must have had an older one.

    Turns out that the problem was with the Microsoft.SqlServer.Smo.dll.

    The product team told me that “bad” DLL versions were pushed out by the first released version of SSDT-BI for VS2013. All was working fine though until I applied CU4; then the errors started.

    While the correct file version was 12.0.2430.0, and that was the one I had in the folder, the issue actually related to Microsoft.SqlServer.Smo.dll, not to the SqlEnum DLL. For some reason the installer didn’t correctly replace the previous entry in the GAC; it was still a 12.0.2299.1 version.

    What I ended up having to do was to use ProcessExplorer to locate the dependencies of the ssms.exe process, then find the version of Microsoft.SqlServer.Smo.dll that was being loaded. I renamed it to stop it being loaded and rebooted. Then I found I still had the error; it was being loaded from another location as well. Again I renamed it and rebooted. Finally, the error said that it couldn’t find the file at all.

    At this point, I did a Repair of the Shared Features on “SQL Server (x64)” from “Programs and Features”, then uninstalled CU4, rebooted, then reinstalled CU4.

    And now it seems to work OK. (Although it did nuke the previous entries from the connection dialog in SSMS.)

    Hope that helps someone.

  • FIX: Visible preview window in SQL Server Reporting Services gone!

    It’s great to see the Connect site leading to fixes in the product.

    I was really pleased when SQL Server Data Tools for BI appeared for Visual Studio 2013. What I wasn’t pleased about were a number of UI issues that came with that version.

    In particular, there was a problem with previewing Reporting Services reports. If I create a new report project, add a blank report, and drag on a text box:


    Note that when I clicked the Preview button, the following appeared:


    It appears that the preview is provided by a program called PreviewProcessingService.exe that I presume was meant to be launched on-demand in the background. If you closed the window, an error appeared in your preview. If you minimized it, you could happily ignore it from that point on.

    I reported it in the Connect site, and am happy to see today that a new KB article appeared with a fix for it.


    What the KB 2986460 article provides is a link to a new downloadable version of SSDT-BI for VS2013:


    When the article first appeared, I downloaded the version immediately, but it did not fix the problem. It turns out the KB article appeared one week before the download was updated. If you downloaded it during that week and it did not fix the problem, you will unfortunately need to download it again. Here are the file properties of the version with the fix:


    Be forewarned that the fix is a complete replacement that is 1GB in size:


    It would be great if they can get to the point of patching these programs without the need for complete downloads, but I’m very pleased to see it appear regardless.

  • Easier way to try SQL Server Availability Groups using Azure

    Over the last year, I’ve been delivering a number of partner enablement training sessions for Microsoft. They target the changes in SQL Server 2014 and also provide an introduction to Power BI.

    During the sessions for day two, I have been getting the attendees to set up a high availability environment in Azure, with a Windows Server 2012 R2 domain controller and a couple of SQL Server 2014 Enterprise member servers. This process basically involves:

    • Create the virtual network
    • Create the cloud service
    • Create and configure the domain controller VM and a fileshare
    • Create and configure the SQL Server 2014 VMs
    • Create a Windows cluster for the three machines
    • Configure SQL Server Availability Groups
    • Configure an availability group listener that connects via the cloud service

    That was spread over three labs that took about two hours in total. It’s a good learning exercise because it gets you to understand every step along the way.

    However, if you just want this type of setup created very quickly, there’s now a much easier way to do it within minutes. The SQL Server team have created a template that already includes all these options. When I first went looking for it, I couldn’t find it in the templates list:


    I was still using the old portal; I needed to look in the new portal. You can get from the old portal to the new one in the top right-hand corner: click the little person icon, then choose from the menu:



    In the new portal however, it’s easy to find:


    I’d encourage you to try it to see how it simplifies the process.

    By the way, if you are a Microsoft Partner and want to attend one of the upcoming sessions (in Melbourne, Perth, and Sydney), ping your partner contacts and find out if you can attend. I’d love to see you there.

  • Book Review: Pro Spatial with SQL Server 2012 - Alistair Aitchison


    I’m finally catching up on some book reviews.

    One that I’ve been meaning to write for quite a while is for this book.

    What an awesome book. The review is here:

  • Azure SQL Database Hourly Billing

    There were a number of great announcements from Scott Guthrie today:

    One that particularly caught my eye was a subtle change to the billing for Azure SQL Database. Databases used to be billed at a daily charge.

    While this might not have seemed a big deal, the only way to create a transactionally-consistent backup of the database was to first copy it using CREATE DATABASE ... AS COPY OF, then to use the database export functions on the copy. The export needed to be done on the copy (not on the original) as it just copies the data in each table, one table at a time.

    If you did this operation once per day, you would have doubled the database costs. I have a customer who wanted to create 3 copies a day of 3,000 databases. That would have meant paying for 12,000 databases.

    I’m glad to see that sense has prevailed on this. With the new billing option, this customer would now pay for the 3,000 databases for 24 hours a day, plus three database-hours of copies per database per day, for an effective equivalent cost of 3,375 databases. That’s a big step forward from paying for 12,000 databases. It’s enough of a change that it might make the project workable where it was not workable before.
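A quick back-of-the-envelope check of those numbers (a sketch in Python, assuming each daily copy only needs to exist for about one hour before it is exported and dropped):

```python
databases = 3000          # primary databases, billed 24 hours a day
copies_per_day = 3        # transactionally-consistent copies per database per day
copy_lifetime_hours = 1   # assumed lifetime of each copy before it is dropped

# Old daily billing: every copy that exists during a day is charged as a full database.
old_equivalent = databases + databases * copies_per_day

# New hourly billing: copies are charged only for the hours they actually exist.
copy_hours_per_day = databases * copies_per_day * copy_lifetime_hours
new_equivalent = databases + copy_hours_per_day / 24

print(old_equivalent)   # 12000
print(new_equivalent)   # 3375.0
```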

    The new arrangement is much more “cloudy” and in line with how other Azure services are charged. It’s very welcome. Thanks, Microsoft!

  • Updated offline SQL Server 2014 Books Online install now includes T-SQL Reference

    Many customers want to install Books Online (BOL) when working with SQL Server. Generally I always think it’s a good idea.

    Not all environments, though, allow external connectivity for the systems where the clients want to install BOL.

    A friend of mine, Rob Sanders, documented how to install BOL in an offline manner a while back:

    The problem with this was that the T-SQL Reference documentation was not included in the download for offline installation. Given the T-SQL language reference was the major reason that most people wanted to install BOL, it all seemed quite pointless.

    I got the good news this week from Jeff Gollnick in the documentation team that the August release now includes the T-SQL language reference and that it’s available here:

    That’s great news.

  • Azure Search Preview

    One of the things I’ve been keeping an eye on for quite a while now is the development of the Azure Search system.

    While it’s not a full replacement for the full-text indexing service in SQL Server on-premises as yet, it’s a really, really good start.

    Liam Cavanagh, Pablo Castro and the team have done a great job bringing this to the preview stage and I suspect it could be quite popular. I was very impressed by how they incorporated quite a bit of feedback I gave them early on, and I’m sure that others involved would have felt the same.

    There are two tiers at present. One is a free tier and has shared resources; the other is currently $125/month and has reserved resources. I would like to see another tier between these two, much the same way that Azure websites work. If you have any feedback on this, now would be a good time to make it known.

    In the meantime, given there is a free tier, there’s no excuse to not get out and try it. You’ll find details of it here:

    I’ll be posting more info about this service, and showing examples of it during the upcoming months.

  • 3rd Party Tools: dbForge Studio for SQL Server

    I've been taking a look at some of the 3rd party tools for SQL Server. Today, I looked at dbForge Studio for SQL Server from the team at DevArt.

    Installation was smooth. I did find it odd that it defaults to SQL authentication, not to Windows, but either works fine.

    I like the way they have followed the SQL Server Management Studio visual layout. That will make the product familiar to existing SQL Server Management Studio users.


    I was keen to see what the database diagram tools are like. I found that the layouts generated were quite good, and certainly superior to the built-in SQL Server ones in SSMS. I didn't find any easy way to just add all tables to the diagram though. (That might just be me.) One thing I did like was that it doesn't get confused when you have role playing dimensions. Multiple foreign key relationships between two tables display sensibly, unlike with the standard SQL Server version.

    It was pleasing to see a printing option in the diagramming tool.

    I found the database comparison tool worked quite well. There are a few UI things that surprised me (like when you add a new connection to a database, it doesn't select the one you just added by default) but generally it just worked as advertised, and the code that was generated looked ok.

    I used the SQL query editor and found the code formatting to be quite fast, and while I didn't mind the style that it used by default, it wasn't obvious to me how to change the format. In Tools/Options I found things that talked about Profiles but I wasn't sure if that's what I needed. The help file pointed me in the right direction and I created a new profile. It's a bit odd that when you create a new profile, it doesn't put you straight into editing the profile. At first I didn't know what I'd done. But as soon as I chose to edit it, I found that a very good range of options was available.

    When entering SQL code, the code completion options appear quickly and are quite complete, but one of the real challenges is making them useful. Note in the following that while the options shown are correct, none are actually helpful:


    The Query Profiler seemed to work quite well. I keep wondering when the version supplied with SQL Server will ever have options like finding the most expensive operators, etc. Now that it's deprecated, perhaps never; but it's great to see third-party options like this one and SQL Sentry's Plan Explorer having this functionality.

    I didn't do much with the reporting options as I use SQL Server Reporting Services.

    Overall, I was quite impressed with this product and given they have a free trial available, I think it's worth your time taking a look at it.

  • Entering a backslash when using UK keyboard layout on US keyboard

    This is more for my own reference than anything else, but today I needed to enter a backslash while using a US keyboard with UK keyboard settings.

    After trying pretty much every key on the keyboard in all combinations, I realised there was no key combination that would do this directly. Curiously it's a common problem and I found untold blog and forum entries that were not helpful. They basically said to change your input to US when using a US keyboard.

    I figured there is always a way of entering a character by its code and of course this works. If you have a keypad, you can enter <Alt>92 and you'll get a backslash. That's all I needed in this case.
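For reference, the reason Alt+92 works is simply that 92 is the character code for the backslash, which is easy to confirm in any language; for example, in Python:

```python
# 92 is the ASCII (and Windows-1252) code for the backslash character,
# which is what the Alt+numpad mechanism produces.
print(ord("\\"))   # 92
print(chr(92))     # prints a single backslash
```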

  • Step one in producing Shared Access Signatures for SQL Server Data Files in Azure - Generating a SHA2 256 HMAC Hash using Native T-SQL

    One of the projects I have been working on as time permits is to construct a way to create a shared access signature using native T-SQL. Shared access signatures are needed when working with SQL Server data files in Azure.

    Mostly, DBAs would use a programmatic utility such as Azure Storage Explorer to generate these signatures.

    It is also straightforward to do so with SQL CLR based code but this is not enabled in all environments.

    So I set about trying to create the required signatures using native T-SQL. I’m not concerned about the slow cryptographic performance as this won’t be a regular operation.

    I found some excellent code from Ryan Malayter on GitHub that does a SHA-1 or SHA-512 hash. I’ve modified and enhanced it a bit so that it does SHA2-256 and performs the functions I need.

    After the function code, there is a set of test code that shows how the function satisfies the HMAC_SHA256 test vectors. Hope that helps someone. I’ll post more as I complete the next step in building a SAS generator.

    USE tempdb;
    GO

    CREATE FUNCTION dbo.HMAC_SHA256
    (
        @HashKey varbinary(8000),
        @ValueToHash varbinary(8000)
    )
    RETURNS binary(32)
    AS
    BEGIN
        -- Based on concept code from Ryan Malayter:

        DECLARE @k_ipad_partial bigint = CAST(0x3636363636363636 AS bigint);

        DECLARE @k_opad_partial bigint = CAST(0x5C5C5C5C5C5C5C5C AS bigint);


        IF LEN(@HashKey) > 64
            SET @HashKey = CAST(HASHBYTES('SHA2_256', @HashKey) AS binary (64));
        ELSE
            SET @HashKey = CAST(@HashKey AS binary (64));


        DECLARE @k_ipad binary(64)

          = CAST((SUBSTRING(@HashKey, 1, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 9, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 17, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 25, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 33, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 41, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 49, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 57, 8) ^ @k_ipad_partial) AS binary(8));



        DECLARE @k_opad binary(64)

          = CAST((SUBSTRING(@HashKey, 1, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 9, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 17, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 25, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 33, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 41, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 49, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 57, 8) ^ @k_opad_partial) AS binary(8));


        RETURN HASHBYTES('SHA2_256', @k_opad + HASHBYTES('SHA2_256', @k_ipad + @ValueToHash));
    END;
    GO

    -- Test with RFC4231 test vectors


    DECLARE @KeyToUse varbinary(4000);

    DECLARE @ValueToHash varbinary(4000);

    DECLARE @HashedValue varbinary(32);

    DECLARE @ExpectedResult varbinary(32);


    -- Test 1


    SET @KeyToUse = 0x0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b;

    SET @ValueToHash = CAST('Hi There' AS varbinary(1000));

    SET @ExpectedResult = 0xb0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 1 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 1 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;



    -- Test 2


    SET @KeyToUse = CAST('Jefe' AS varbinary(4));

    SET @ValueToHash = CAST('what do ya want for nothing?' AS varbinary(1000));

    SET @ExpectedResult = 0x5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 2 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 2 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;



    -- Test 3


    SET @KeyToUse = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;

    SET @ValueToHash = 0xdddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd;

    SET @ExpectedResult = 0x773ea91e36800e46854db8ebd09181a72959098b3ef8c122d9635514ced565fe;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 3 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 3 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;



    -- Test 4


    SET @KeyToUse = 0x0102030405060708090a0b0c0d0e0f10111213141516171819;

    SET @ValueToHash = 0xcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcd;

    SET @ExpectedResult = 0x82558a389a443c0ea4cc819899f2083a85f0faa3e578f8077a2e3ff46729665b;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 4 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 4 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;



    -- Test 5


    SET @KeyToUse = 0x0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c;

    SET @ValueToHash = CAST('Test With Truncation' AS varbinary(4000));

    SET @ExpectedResult = 0xa3b6167473100ee06e0c796c2955552b;

    SET @HashedValue = CONVERT(varbinary(16),dbo.HMAC_SHA256(@KeyToUse, @ValueToHash));

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 5 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 5 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;



    -- Test 6


    SET @KeyToUse = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;

    SET @ValueToHash = CAST('Test Using Larger Than Block-Size Key - Hash Key First' AS varbinary(4000));

    SET @ExpectedResult = 0x60e431591ee0b67f0d8a26aacbf5b77f8e0bc6213728c5140546040f0ee37f54;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 6 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 6 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;



    -- Test 7


    SET @KeyToUse = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;

    SET @ValueToHash = CAST('This is a test using a larger than block-size key and a larger than block-size data. The key needs to be hashed before being used by the HMAC algorithm.' AS varbinary(4000));

    SET @ExpectedResult = 0x9b09ffa71b942fcb27635fbcd5b0e944bfdc63644f0713938a7f51535c3a35e2;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult
    BEGIN
        PRINT '>>Test 7 passed';
    END
    ELSE
    BEGIN
        PRINT '>>Test 7 failed';
        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);
        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2), 'NULL');
        PRINT ' ';
    END;
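If you want to cross-check the T-SQL results independently, the same RFC 4231 vectors can be verified with Python's standard hmac module (a quick sketch, shown here for test vector 1):

```python
import hashlib
import hmac

# RFC 4231 test vector 1: key is twenty 0x0b bytes, message is "Hi There"
key = bytes.fromhex("0b" * 20)
message = b"Hi There"

digest = hmac.new(key, message, hashlib.sha256).hexdigest()
print(digest)
# b0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7
```

Any of the other six vectors can be checked the same way by substituting the key and message.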


  • Solving Errors with Existing BLOB Leases When Using BACKUP TO URL

    BACKUP TO URL was introduced as an add-on in Cumulative Update 2 for SQL Server 2012 Service Pack 1 and as a built-in feature for SQL Server 2014. I previously described this in a blog post.

    We have been using this in a variety of ways from on-premises systems:

    For example, it is an easy way to distribute a backup of a database to a large number of systems. Imagine you have a chain of retail stores that needs product and other reference information updated regularly. You can keep this data in a separate database at the head office, back it up to an Azure Storage account, and have each store download it separately. This offers major bandwidth and reliability improvements over other solutions such as having each store maintain a VPN connection to the head office.

    As another example, we have clients who simply aren’t able to gain enough space on their SANs to keep enough local copies of their backups.

    The more common scenario though is to use it for backups from Azure Virtual Machines that are running SQL Server. Rather than performing a backup to a virtual machine's virtual disk, we achieve better performance by bypassing the file system on the virtual disk and backing up directly to a URL. Both end up in Azure storage, but backing up directly to storage gives us better performance and can help to avoid the drive limit and size restrictions for virtual machines.

    Regardless of why you are using BACKUP TO URL, one of the problems that you are likely to run into at some point is the dreaded:

                    Msg 3271, Level 16, State 1, Line 60
                    A nonrecoverable I/O error occurred on file “ “: Backup to URL received an exception
                    from the remote endpoint. Exception Message: The remote server returned an error:
                    (412) There is currently a lease on the blob and no lease ID was specified in the request...
                    Msg 3013, Level 16, State 1, Line 60
                    BACKUP DATABASE is terminating abnormally.

    Applications using Azure storage can take leases on files that are held in the storage containers. This avoids issues with other applications concurrently changing (or even deleting) files that the application needs. The BACKUP TO URL feature in SQL Server takes an infinite lease on the backup file that it creates. That lease is removed when the backup completes. (The process of dealing with leases in Azure Storage is described here.)

    However, if you interrupt a backup (or network issues interrupt it for you), and this is a prolonged interruption, the lease can remain and when you try to overwrite that backup blob (or even delete it), you’ll see the error above.

    Now, the design feature that makes this a bit easier to deal with is that the BACKUP TO URL command always uses a well-known lease ID: BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2.

    What is needed to delete it, then, is a tool that can break leases, or a PowerShell script as described in this article. You should also take this as yet another hint to learn about PowerShell if you haven’t done so already.

    Details about other aspects of BACKUP TO URL troubleshooting are given in this article.

  • Fix for SQL Server 2012 SP2 Data Corruption Issue Now Available

    Microsoft recently announced Service Pack (SP) 2 for SQL Server 2012. However, we were aware of a potential data corruption issue related to online index rebuilds, so we had been recommending that customers avoid installing it until the fix was available.

    That hotfix is now available: 

    Note that there is also a fix for SQL Server 2012 SP1 and for SQL Server 2014 RTM.

  • The Power Query story keeps getting better

    It seems like anyone that's talked to me this year has heard me going on and on about how Power Query is just the nicest new bit of tech that's come from Microsoft lately. We cover it as part of our one-day Power BI Core Skills training class and as part of day 2 in our BI Core Skills classes that we recently updated for SQL Server 2014. Out of all the technologies that people see during that week, Power Query is the one that seems to promote the biggest discussions.

    It's also a product that keeps getting improved constantly. Another new update for Power Query is out and you can find it here:

    Here is a video that shows what's been updated: 

    The blog post from the team that shows what's changed is here:

    For me, the one big thing that Power Query is now missing is a corporate story beyond Excel/Power BI. The queries that you create in Power Query are actually M language queries. This is a very capable language (unrelated to what the industry used to call the M language) that Microsoft have documented here:

    I would so love to be able to take the queries that I generate in Excel and paste them into an SSIS data source, or use them as a data source in SSAS tabular. Once those types of options appear (surely they must), this is going to be even more of a killer application. 

  • SQL Down Under Show 63 Jimmy May - Clustered Columnstore Indexes in SQL Server 2014

    It was great to catch up with Jimmy May recently. Jimmy is a principal database architect at Microsoft and was part of the SQL CAT team (customer advisory team) for quite a while.


    We recorded a new SQL Down Under podcast.


    In this podcast, Jimmy discusses his experiences with columnstore indexes and the new updatable clustered columnstore indexes.


    You'll find it here:



  • Easier to Implement SQL Server Availability Groups on Azure

    With each update to Azure, it’s been getting easier to implement options for SQL Server. One of the challenges for the teams that write documentation and training is this rate of change.

    For a long time, Availability Groups weren’t able to be implemented properly on Azure as there was a limit of one IP address per VM. That problem was fixed some months back.

    I recently needed to set up Availability Groups wholly within Azure. I followed the documentation here:

    (Tutorial for setting up Availability Groups in Azure) 

    and the documentation here:

    (Tutorial for adding a listener) 

    which has just been updated here:

    (Tutorial for adding a listener)

    As I was reading the listener tutorial, I noted that it used PowerShell for creating the endpoint, as there was no GUI-based method for enabling Direct Server Return on the endpoint. I thought “I’m sure I’ve seen that”. And this is the challenge for the documentation: in the portal, when I was creating the endpoint, the following dialog appeared:


    This is wonderful in several ways. Not only has the option been added, the popup help text is actually very helpful, and also provides a link to details related to how to use it. (I’m a firm believer that help should first and foremost be helpful). Kudos to whoever implemented that.

    The one remaining real roadblock for Availability Groups in Azure was the use of them for disaster recovery. The VMs involved needed to be contained in the same VPN. That’s fine for local high availability and for read-only replicas for load balancing but it’s not OK for disaster recovery where we want a secondary VM to be in another datacenter in another region.

    That roadblock has also been removed now. The Azure team added support for interconnecting different regions via secure tunnels. This is documented here:

    This allows you to construct a VPN like this: (image from the article)



    In fact, with the site to site connectivity, you can also combine your on-premises systems as well: (image from the article)




    This is important news for those wanting to implement Availability Groups in Azure, with a need for DR. Support for this has been announced:

    And the guidance for using it has also already been updated here:

    Kudos to all involved in making this possible so quickly.
