
The Bit Bucket (Greg Low): IDisposable

Ramblings of Greg Low (SQL Server MVP, MCM and Microsoft RD) - SQL Down Under

  • 3rd Party Tools: dbForge Studio for SQL Server

    I've been taking a look at some of the 3rd party tools for SQL Server. Today, I looked at dbForge Studio for SQL Server from the team at Devart.

    Installation was smooth. I did find it odd that it defaults to SQL authentication rather than Windows authentication, but either works fine.

    I like the way they have followed the SQL Server Management Studio visual layout. That will make the product familiar to existing SQL Server Management Studio users.

    image

    I was keen to see what the database diagram tools are like. I found that the layouts generated were quite good, and certainly superior to the built-in SQL Server ones in SSMS. I didn't find any easy way to just add all tables to the diagram though. (That might just be me). One thing I did like was that it doesn't get confused when you have role-playing dimensions. Multiple foreign key relationships between two tables display sensibly, unlike with the standard SQL Server version.
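
    To illustrate what I mean by a role-playing dimension, here's a minimal, hypothetical sketch (the table and constraint names are only for illustration) where a fact table references the same date dimension twice:

    CREATE TABLE dbo.DimDate
    (
        DateKey int NOT NULL PRIMARY KEY,
        CalendarDate date NOT NULL
    );

    CREATE TABLE dbo.FactSales
    (
        SalesKey int IDENTITY(1,1) PRIMARY KEY,
        -- Two foreign keys to the same dimension: the "order date" and "ship date" roles
        OrderDateKey int NOT NULL
            CONSTRAINT FK_FactSales_OrderDate FOREIGN KEY REFERENCES dbo.DimDate (DateKey),
        ShipDateKey int NOT NULL
            CONSTRAINT FK_FactSales_ShipDate FOREIGN KEY REFERENCES dbo.DimDate (DateKey),
        SalesAmount decimal(18,2) NOT NULL
    );

    A diagramming tool needs to draw both relationships between dbo.FactSales and dbo.DimDate without getting them tangled, which is where the standard SSMS diagrams struggle.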

    It was pleasing to see a printing option in the diagramming tool.

    I found the database comparison tool worked quite well. There are a few UI things that surprised me (for example, when you add a new connection to a database, it doesn't select the one you just added by default), but generally it just worked as advertised, and the generated code looked OK.

    I used the SQL query editor and found the code formatting to be quite fast. While I didn't mind the default style, it wasn't obvious to me how to change the format. In Tools/Options I found settings that talked about profiles, but I wasn't sure if that was what I needed. The help file pointed me in the right direction and I created a new profile. It's a bit odd that when you create a new profile, it doesn't put you straight into editing it; at first I didn't know what I'd done. But as soon as I chose to edit it, I found that a very good range of options was available.

    When entering SQL code, the code completion options appear quickly and are quite complete, but one of the real challenges is making them useful. Note in the following that while the options shown are correct, none of them is actually helpful:

    image

    The Query Profiler seemed to work quite well. I keep wondering when the version supplied with SQL Server will ever have options like finding the most expensive operators, etc. Now that it's deprecated, perhaps never, but it's great to see third-party options like this one and SQL Sentry's Plan Explorer providing this functionality.
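
    In the meantime, while this isn't what either tool does internally, the built-in DMVs can at least surface the most expensive cached queries. A rough sketch:

    -- Top cached queries by total CPU time, using the standard DMVs
    SELECT TOP (10)
           qs.total_worker_time AS total_cpu_microseconds,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;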

    I didn't do much with the reporting options as I use SQL Server Reporting Services.

    Overall, I was quite impressed with this product, and given that a free trial is available, I think it's worth your time to take a look at it.

  • Entering a backslash when using UK keyboard layout on US keyboard

    This is more for my own reference than anything else, but today I needed to enter a backslash while using a US keyboard with UK keyboard settings.

    After trying pretty much every key on the keyboard in all combinations, I realised there was no key combination that would do this directly. Curiously, it's a common problem, and I found countless blog and forum entries that were not helpful; they basically said to change your input language to US when using a US keyboard.

    I figured there is always a way of entering a character by its code, and of course that works. If you have a numeric keypad, hold down Alt and type 92 on the keypad and you'll get a backslash. That's all I needed in this case.

  • Step one in producing Shared Access Signatures for SQL Server Data Files in Azure - Generating a SHA2-256 HMAC Hash using Native T-SQL

    One of the projects I have been working on as time permits is to construct a way to create a shared access signature using native T-SQL. Shared access signatures are needed when working with SQL Server data files in Azure.

    Mostly, DBAs would use a utility such as Azure Storage Explorer to generate these signatures.

    It is also straightforward to do this with SQL CLR-based code, but CLR isn't enabled in all environments.

    So I set about trying to create the required signatures using native T-SQL. I’m not concerned about the slow cryptographic performance as this won’t be a regular operation.

    I found some excellent code from Ryan Malayter on GitHub (https://gist.github.com/rmalayter/3130462) that does a SHA-1 or SHA-512 hash. I've modified and enhanced it a bit so that it does SHA2-256 and performs the functions I need.

    After the function code, there is a set of test code that shows that the function satisfies the RFC 4231 HMAC-SHA256 test vectors. Hope that helps someone. I'll post more as I complete the next step in building a SAS generator.

    USE tempdb;

    GO

     

    CREATE FUNCTION dbo.HMAC_SHA256

    (

        @HashKey varbinary(8000),

        @ValueToHash varbinary(8000)

    )

    RETURNS binary(32)

    AS

    BEGIN

     

        -- Based on concept code from Ryan Malayter: https://gist.github.com/rmalayter/3130462

        DECLARE @k_ipad_partial bigint = CAST(0x3636363636363636 AS bigint);

        DECLARE @k_opad_partial bigint = CAST(0x5C5C5C5C5C5C5C5C AS bigint);

        

        IF LEN(@HashKey) > 64

        BEGIN

            SET @HashKey = CAST(HASHBYTES('SHA2_256', @HashKey) AS binary (64));

        END ELSE BEGIN

            SET @HashKey = CAST(@HashKey AS binary (64));

        END;

         

        DECLARE @k_ipad binary(64)

          = CAST((SUBSTRING(@HashKey, 1, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 9, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 17, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 25, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 33, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 41, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 49, 8) ^ @k_ipad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 57, 8) ^ @k_ipad_partial) AS binary(8));

       

       

        DECLARE @k_opad binary(64)

          = CAST((SUBSTRING(@HashKey, 1, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 9, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 17, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 25, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 33, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 41, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 49, 8) ^ @k_opad_partial) AS binary(8))

          + CAST((SUBSTRING(@HashKey, 57, 8) ^ @k_opad_partial) AS binary(8));

        

        RETURN HASHBYTES('SHA2_256', @k_opad + HASHBYTES('SHA2_256', @k_ipad + @ValueToHash));

    END;

    GO

     

    -- Test with RFC4231 test vectors

     

    DECLARE @KeyToUse varbinary(4000);

    DECLARE @ValueToHash varbinary(4000);

    DECLARE @HashedValue varbinary(32);

    DECLARE @ExpectedResult varbinary(32);

     

    -- Test 1

     

    SET @KeyToUse = 0x0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b;

    SET @ValueToHash = CAST('Hi There' AS varbinary(1000));

    SET @ExpectedResult = 0xb0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 1 passed';

    END ELSE BEGIN

        PRINT '>>Test 1 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;

     

    -- Test 2

     

    SET @KeyToUse = CAST('Jefe' AS varbinary(4));

    SET @ValueToHash = CAST('what do ya want for nothing?' AS varbinary(1000));

    SET @ExpectedResult = 0x5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 2 passed';

    END ELSE BEGIN

        PRINT '>>Test 2 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;

     

    -- Test 3

     

    SET @KeyToUse = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;

    SET @ValueToHash = 0xdddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd;

    SET @ExpectedResult = 0x773ea91e36800e46854db8ebd09181a72959098b3ef8c122d9635514ced565fe;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 3 passed';

    END ELSE BEGIN

        PRINT '>>Test 3 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;

     

    -- Test 4

     

    SET @KeyToUse = 0x0102030405060708090a0b0c0d0e0f10111213141516171819;

    SET @ValueToHash = 0xcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcdcd;

    SET @ExpectedResult = 0x82558a389a443c0ea4cc819899f2083a85f0faa3e578f8077a2e3ff46729665b;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 4 passed';

    END ELSE BEGIN

        PRINT '>>Test 4 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;

     

    -- Test 5

     

    SET @KeyToUse = 0x0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c;

    SET @ValueToHash = CAST('Test With Truncation' AS varbinary(4000));

    SET @ExpectedResult = 0xa3b6167473100ee06e0c796c2955552b;

    SET @HashedValue = CONVERT(varbinary(16),dbo.HMAC_SHA256(@KeyToUse, @ValueToHash));

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 5 passed';

    END ELSE BEGIN

        PRINT '>>Test 5 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;

     

    -- Test 6

     

    SET @KeyToUse = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;

    SET @ValueToHash = CAST('Test Using Larger Than Block-Size Key - Hash Key First' AS varbinary(4000));

    SET @ExpectedResult = 0x60e431591ee0b67f0d8a26aacbf5b77f8e0bc6213728c5140546040f0ee37f54;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 6 passed';

    END ELSE BEGIN

        PRINT '>>Test 6 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;

     

    -- Test 7

     

    SET @KeyToUse = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;

    SET @ValueToHash = CAST('This is a test using a larger than block-size key and a larger than block-size data. The key needs to be hashed before being used by the HMAC algorithm.' AS varbinary(4000));

    SET @ExpectedResult = 0x9b09ffa71b942fcb27635fbcd5b0e944bfdc63644f0713938a7f51535c3a35e2;

    SET @HashedValue = dbo.HMAC_SHA256(@KeyToUse, @ValueToHash);

    IF @HashedValue = @ExpectedResult

    BEGIN

        PRINT '>>Test 7 passed';

    END ELSE BEGIN

        PRINT '>>Test 7 failed';

        PRINT 'Expected 0x' + CONVERT(varchar(1000), @ExpectedResult, 2);

        PRINT 'Received ' + COALESCE('0x' + CONVERT(varchar(1000), @HashedValue, 2),'NULL');

        PRINT ' ';

    END;
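
    Once the function is in place, calling it is straightforward. Here's a minimal usage sketch; the key and message below are arbitrary placeholder values, not part of the SAS process itself:

    -- Hash an arbitrary message with an arbitrary key and show the result as hex
    DECLARE @DemoKey varbinary(8000) = CAST('MySecretKey' AS varbinary(8000));
    DECLARE @DemoMessage varbinary(8000) = CAST('StringToSign' AS varbinary(8000));
    SELECT CONVERT(varchar(64), dbo.HMAC_SHA256(@DemoKey, @DemoMessage), 2) AS SignatureHex;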

  • Solving Errors with Existing BLOB Leases When Using BACKUP TO URL

    BACKUP TO URL was introduced as an add-on in Cumulative Update 2 for SQL Server 2012 Service Pack 1 and as a built-in feature for SQL Server 2014. I previously described this in a blog post.

    We have been using this in a variety of ways from on-premises systems:

    For example, it is an easy way to distribute a backup of a database to a large number of systems. Imagine you have a chain of retail stores that needs product and other reference information updated regularly. You can keep this data in a separate database at the head office, back it up to an Azure Storage account, and have each store download it separately. This offers major bandwidth and reliability improvements over alternatives such as having each store maintain a VPN connection to the head office.

    As another example, we have clients who simply aren’t able to gain enough space on their SANs to keep enough local copies of their backups.

    The more common scenario though is to use it for backups from Azure Virtual Machines that are running SQL Server. Rather than performing a backup to a virtual machine's virtual disk, we achieve better performance by bypassing the file system on the virtual disk and backing up directly to a URL. Both end up in Azure Storage, but backing up directly to storage gives us better performance and can help to avoid the drive count and size restrictions for virtual machines.
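
    For context, a backup of this kind looks something like the following sketch. The credential name is illustrative, and the storage account, container, and file name are the same placeholders that appear in the error message below:

    -- Credential holding the storage account name and access key (placeholder values)
    CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'somestorageaccount',
         SECRET = '<storage account access key>';

    -- Back up directly to a blob in the storage account
    BACKUP DATABASE SomeDatabase
    TO URL = 'https://somestorageaccount.blob.core.windows.net/backups/somedatabase.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         COMPRESSION,
         STATS = 10;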

    Regardless of why you are using BACKUP TO URL, one of the problems that you are likely to run into at some point is the dreaded:

                    Msg 3271, Level 16, State 1, Line 60
                    A nonrecoverable I/O error occurred on file
                    “https://somestorageaccount.blob.core.windows.net/backups/somedatabase.bak: “ Backup
                    to URL received an exception from the remote endpoint. Exception Message: The remote
                    server returned an error: (412) There is currently a lease on the blob and no lease ID was
                    specified in the request...
                    Msg 3013, Level 16, State 1, Line 60
                    BACKUP DATABASE is terminating abnormally.


    Applications using Azure storage can take leases on files that are held in the storage containers. This avoids issues with other applications concurrently changing (or even deleting) files that the application needs. The BACKUP TO URL feature in SQL Server takes an infinite lease on the backup file that it creates. That lease is removed when the backup completes. (The process of dealing with leases in Azure Storage is described here.)

    However, if you interrupt a backup (or network issues interrupt it for you) and the interruption is prolonged, the lease can remain, and when you try to overwrite that backup blob (or even delete it), you'll see the error above.

    Now, the design feature that makes this a bit easier to deal with is that the BACKUP TO URL command always uses a well-known lease ID: BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2

    What is needed to delete it, then, is a tool that can break leases, or a PowerShell script such as the one described in this article. You should also take this as yet another hint to learn PowerShell if you haven't done so already.

    Details about other aspects of BACKUP TO URL troubleshooting are given in this article.

  • Fix for SQL Server 2012 SP2 Data Corruption Issue Now Available

    Microsoft recently announced Service Pack 2 (SP2) for SQL Server 2012. However, we were aware of a potential data corruption issue related to online index rebuilds, so we had been recommending that customers avoid installing it until a fix was available.

    That hotfix is now available: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=2969896&kbln=en-us 

    Note that there is also a fix for SQL Server 2012 SP1 and for SQL Server 2014 RTM.

  • The Power Query story keeps getting better

    It seems like anyone who's talked to me this year has heard me going on and on about how Power Query is just the nicest new bit of tech that's come from Microsoft lately. We cover it as part of our one-day Power BI Core Skills training class and as part of day 2 in our BI Core Skills classes that we recently updated for SQL Server 2014. Out of all the technologies that people see during that week, Power Query is the one that seems to prompt the biggest discussions.

    It's also a product that is constantly being improved. Another new update for Power Query is out and you can find it here: http://www.microsoft.com/en-us/download/details.aspx?id=39379&WT.mc_id=Blog_PBI_Announce_DI

    Here is a video that shows what's been updated: https://www.youtube.com/watch?v=Q9boOzu298Q 

    The blog post from the team that shows what's changed is here: http://blogs.msdn.com/b/powerbi/archive/2014/06/26/6-new-updates-in-power-query-june-2014.aspx

    For me, the one big thing that Power Query is now missing is a corporate story beyond Excel/Power BI. The queries that you create in Power Query are actually M language queries. This is a very capable language (unrelated to what the industry used to call the M language) that Microsoft has documented here: http://go.microsoft.com/fwlink/?LinkID=320633

    I would so love to be able to take the queries that I generate in Excel and paste them into an SSIS data source, or use them as a data source in SSAS tabular. Once those types of options appear (surely they must), this is going to be even more of a killer application. 

  • SQL Down Under Show 63 Jimmy May - Clustered Columnstore Indexes in SQL Server 2014

    It was great to catch up with Jimmy May recently. Jimmy is a principal database architect at Microsoft and was part of the SQL CAT team (customer advisory team) for quite a while.

     

    We recorded a new SQL Down Under podcast.

     

    In this podcast, Jimmy discusses his experiences with columnstore indexes and the new updatable clustered columnstore indexes.

     

    You'll find it here: http://www.sqldownunder.com/Podcasts

     

    Enjoy! 

  • Easier to Implement SQL Server Availability Groups on Azure

    With each update to Azure, it’s been getting easier to implement options for SQL Server. One of the challenges for the teams that write documentation and training is this rate of change.

    For a long time, Availability Groups couldn't be implemented properly on Azure, as there was a limit of one IP address per VM. That problem was fixed some months back.

    I recently needed to set up Availability Groups wholly within Azure. I followed the documentation here:

    (Tutorial for setting up Availability Groups in Azure)  http://msdn.microsoft.com/en-us/library/dn249504.aspx 

    and the documentation here:

    (Tutorial for adding a listener)  http://msdn.microsoft.com/en-us/library/dn376546.aspx 

    which has just been updated here:

    (Tutorial for adding a listener)  http://msdn.microsoft.com/en-us/library/dn425027.aspx

    As I was reading the listener tutorial, I noted that it used PowerShell for creating the endpoint, as there was no GUI-based method for enabling Direct Server Return on the endpoint. I thought “I'm sure I've seen that”. And this is the challenge for the documentation. In the portal, when I was creating the endpoint, the following dialog appeared:

    image

    This is wonderful in several ways. Not only has the option been added, but the popup help text is actually very helpful and provides a link to details on how to use it. (I'm a firm believer that help should first and foremost be helpful.) Kudos to whoever implemented that.

    The one remaining real roadblock for Availability Groups in Azure was using them for disaster recovery. The VMs involved needed to be contained in the same VPN. That's fine for local high availability and for read-only replicas used for load balancing, but it's not OK for disaster recovery, where we want a secondary VM to be in another datacenter in another region.

    That roadblock has also been removed now. The Azure team added support for interconnecting different regions via secure tunnels. This is documented here:

    http://msdn.microsoft.com/en-us/library/azure/dn690122.aspx?WT.mc_id=Blog_SQL_General_DI

    This allows you to construct a VPN like this: (image from the article)

     

    image

    In fact, with the site-to-site connectivity, you can also include your on-premises systems: (image from the article)

    image

     

     

    This is important news for those wanting to implement Availability Groups in Azure, with a need for DR. Support for this has been announced:

    http://blogs.technet.com/b/dataplatforminsider/archive/2014/06/19/sql-server-alwayson-availability-groups-supported-between-microsoft-azure-regions.aspx

    And the guidance for using it has also already been updated here:

    http://msdn.microsoft.com/en-us/library/jj870962.aspx?WT.mc_id=Blog_SQL_General_DI

    Kudos to all involved in making this possible so quickly.

  • Azure now in the leader quadrant for IaaS from Gartner

    Gartner periodically publishes Magic Quadrant reports for a variety of technology areas.

    It was interesting to note that the latest report for IaaS shows Azure moving up into the Leaders quadrant. The only other player in that quadrant is Amazon. That's a big step up for the team, given the IaaS offering only reached general availability in April last year.

    You'll find details here: Gartner Report

     

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    Customers used to complain that the tooling for creating BI projects (Analysis Services multidimensional and tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects).

    To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, the tools then had to be withdrawn.

    The good news is that they’re back and you can get the latest version from here:

    http://www.microsoft.com/en-us/download/details.aspx?id=42313

  • BI Project Templates for Visual Studio 2013 are also now available

    One quiet addition to the tools that have been released recently was the set of project templates for creating BI projects (Integration Services, Analysis Services, Reporting Services) within Visual Studio 2013.

    It is really great to be able to use the same version of these as we are using for other code development in Visual Studio.

    Thanks to the team for updating them so quickly!

    You’ll find them here:

    http://www.microsoft.com/en-us/download/details.aspx?id=42313

  • SDU Show #62: Paul Larson - SQL Server 2014 Underlying Technologies

    I had the distinct honour (honor) this week of recording a new SQL Down Under podcast with Paul Larson.

     

    Paul is a principal researcher at Microsoft and is one of the keen minds involved in the technologies behind the in-memory tables (Hekaton) and clustered columnstore indexes in SQL Server 2014.

     

    In this podcast, Paul explains his role, and discusses how these core enhancements in SQL Server 2014 have been implemented.

     

    You'll find it here: http://www.sqldownunder.com/Podcasts

     

    Enjoy! 

  • Adding Good Favicons to your MVC Based Website

    When you visit websites, you may or may not have noticed the small icons that appear in the title bar areas. These are called Favicons. Here’s an example of one in Chrome:

    image

    In Internet Explorer, they appear in several places:

    image     image

    They are only a small visible aspect of your website but they greatly improve how professional the site looks in a browser. In IE, they also appear when you open a new page:

    image

    So, how do you create these, and how do you add them to your site?

    I had tried creating them using a variety of icon editors, including the excellent IcoFX. However, I could never create really good-looking Favicons, as I must be a little art-challenged. I'm sure it's a left-brain/right-brain thing. I'd try to draw them but I'd end up giving up and using something boring like this:

    image

    It’s quite hard to create a small version of your company’s logo, etc.

    On one of the local mailing lists, our buddy Andrew Coates from Microsoft pointed out one of their sites that I'd never seen before. It's awesome for creating favicons by scaling existing images. Here's how I created the one in the images above:

    Step 1 – Transparent PNG

    The first step was to make sure I had a transparent PNG file that contained the logo I wanted to use. Even though you could use a white background, it looks much better if it's transparent. (Skip this if you already have one.) To create it, I used Paint.NET (yet another awesome free tool). I loaded the existing logo that had a white background, like so:

    image

    In many of these tools, it's not so obvious how to make the background transparent. In Paint.NET, all I needed to do was to select the paint bucket as a tool, then use the “More” option in the color palette, and move the Transparency-Alpha setting fully to the left:

    image

    Notice that the primary color now looks transparent. Then I just used the paint bucket tool to change the white to transparent.

    image

    Then I saved the file.

    Step 2 – Create the Favicon

    The site that Andrew recommended was http://ie.microsoft.com/testdrive/Browser/IconEditor/Default.html. Navigate to that site. We then want the icon editor:

    image

    Next choose to Import:

    image

    Use the Upload button to bring in the transparent PNG that was created earlier, and then move and resize the guide window until you get as much of the logo as you want displayed:

    image

    When it looks good, make sure that all four icon sizes are selected via the checkboxes and then click OK. You’ll then see your logo in the main window, and you can choose to export it:

    image

    Save the file that is downloaded as favicon.ico.

    Step 3 – Add the Favicon to your MVC Project

    Copy the icon file into the root folder of your MVC project, and make sure you have included the file in the project. James Green pointed out to me that many older browsers still go looking for the specific filename favicon.ico in the root folder of the website, so it sounds like good advice to always place it there.

    Finally, add a reference to the icon file in your _Layout.cshtml file in the head section:

    image

    And you’re done. Enjoy!

  • New SQL Server Spatial Course

    I’ve always thought that the support for spatial data types that was added in SQL Server 2008 was one of the best parts of that release.

    In SQL Server 2012, a great set of enhancements were made to the spatial support.

    We've had lots of interest lately in SQL Server spatial, so we've been working on a new course that targets it specifically. We're pleased that we've now added a brand new one-day SQL Server Spatial Core Skills course to our list of regularly scheduled courses. It covers a wide variety of topics, from understanding the data types to loading data, geolocation, indexing, and more. It also covers the enhancements that were made in 2012.
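
    As a small taste of the data types the course covers, here's a minimal sketch; the coordinates are only illustrative:

    -- Create two geography points (latitude, longitude, SRID 4326) and measure the distance between them
    DECLARE @Melbourne geography = geography::Point(-37.8136, 144.9631, 4326);
    DECLARE @Sydney geography = geography::Point(-33.8688, 151.2093, 4326);
    SELECT @Melbourne.STDistance(@Sydney) / 1000.0 AS ApproximateDistanceInKm;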

    The first public schedule for this course is April 14th in Melbourne. Details are here: http://www.sqldownunder.com/Training/Courses/21

  • Fix: Incorrect status output in the SSIS Data Quality transform

    I hear comments all the time from people who say it's not worth posting details on the Connect website as they don't get actioned anyway. While it can seem a frustrating process, and often takes quite a while, improvements do come from there.

    At a SQL Saturday event in Brisbane, I was speaking to an attendee (Ian Roberts) who was complaining that DQS was returning what he perceived as the wrong status. More importantly, the results from using the DQS client differed from using the SSIS transform. He described it this way:

    He set up a domain for Australian States and one for Australian suburbs, based on files that he had downloaded from the Australia Post website. He then created a Data Quality Project to test this. In that tooling, the results were as expected: Hobarrt got transformed to Hobart; Victoria got transformed to VIC; Melburn, New York and NWS all got tagged as New as they couldn't be found. With the Record Status, New seemed to override Correct, and Corrected seemed to override them all.

    However, in the SSIS DQS Cleansing transform, since Correct overrides New, unless every field was in error, the record would go to the Corrected output.

    The problem with this is that in the SSIS Conditional Split transform, you should just be able to test the Record_Status to determine what to do with the record, rather than looking at the status of each field. Checking the status of each individual field would make the Conditional Split transform unwieldy.

    The other real issue is that the two tools should provide the same outcome. You should be able to look at the Record_Status, direct correct and incorrect records to their appropriate destination, then deal with the incorrect ones after the fact. Having the field status in the output would make application error reporting more meaningful as you could nominate the actual field in error. The main issue is that Correct gazumps New in SSIS, but New gazumps Correct in the DQS Client. That’s not appropriate.

    In one of my podcasts for SQL Down Under, I interviewed Gadi Peleg from the DQS team, and Gadi agreed that it looked incorrect. He said the output should be:

    •    If one of the values is Invalid – the record is Invalid
    •    If one of the values is Corrected – the record is Corrected
    •    If one of the values is New – the record is New
    •    If all values are Correct – only then is the record Correct
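
    To make the intended precedence concrete, here's a small, hypothetical T-SQL illustration (it isn't DQS internals, just the rule above applied to three field statuses):

    -- Derive a record status from individual field statuses using the agreed precedence
    DECLARE @Status1 varchar(10) = 'Correct';
    DECLARE @Status2 varchar(10) = 'New';
    DECLARE @Status3 varchar(10) = 'Correct';

    SELECT CASE
               WHEN 'Invalid'   IN (@Status1, @Status2, @Status3) THEN 'Invalid'
               WHEN 'Corrected' IN (@Status1, @Status2, @Status3) THEN 'Corrected'
               WHEN 'New'       IN (@Status1, @Status2, @Status3) THEN 'New'
               ELSE 'Correct'
           END AS RecordStatus;   -- Returns 'New' for these sample values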

    Gadi encouraged me to log it on the Connect website.

    Today I got an email to say it's been fixed. That's a great outcome, although the update doesn't indicate in which version or service pack the change will apply:

    http://connect.microsoft.com/SQLServer/feedback/details/779183/dqs-client-in-ssis-returns-invalid-status
