SQLblog.com – The SQL Server blog spot on the web

Jamie Thomson

This is the blog of Jamie Thomson, a freelance data mangler in London

  • Extended property support in SSDT (10 days of SSDT – Day 5)

    SQL Server Data Tools (SSDT) was released in Spring 2012 and I have been using it to build SQL Server databases nearly every day since then. In that time I’ve learnt a lot about how to make best use of the tool and I want to share that experience in a new series of blog posts called “10 days of SSDT”. I shall be publishing a different blog post every day for 10 days, each day revealing something that you may not know about SSDT. I hope you enjoy the series and contribute via the comments!

    Extended properties are a feature of SQL Server that has existed since SQL Server 2000, although I assume that you probably don’t use them as I rarely see them being used on my travels. Nonetheless, extended properties are a useful way of supplying extra information about your database. Most often they are used for documentation purposes; however, I have also used them to drive workflow in an ETL process (e.g. toggling the loading of a table by changing one of its extended properties). SQL Server Data Tools (SSDT) includes good support for extended properties and in this blog post I’ll cover what that support looks like.
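    As an illustration of that ETL-toggling idea, here is a hedged sketch; the IsEnabled property and dbo.Customer table are hypothetical, but sp_addextendedproperty, sp_updateextendedproperty and sys.extended_properties are the standard mechanisms involved:

    -- Attach a hypothetical 'IsEnabled' extended property to a table
    EXEC sys.sp_addextendedproperty
        @name = N'IsEnabled', @value = N'1',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'Customer';
    GO
    -- An ETL process can read the property to decide whether to load the table
    SELECT CAST(ep.value AS NVARCHAR(10)) AS IsEnabled
    FROM   sys.extended_properties AS ep
    WHERE  ep.class = 1                               -- object or column
    AND    ep.major_id = OBJECT_ID(N'dbo.Customer')
    AND    ep.minor_id = 0                            -- the table itself, not a column
    AND    ep.name = N'IsEnabled';
    GO
    -- Toggling the load is then just an update of the property's value
    EXEC sys.sp_updateextendedproperty
        @name = N'IsEnabled', @value = N'0',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'Customer';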

    First is SSDT’s table designer. Notice in this screenshot that we have the option to choose what information is displayed per column and that “Description” is one of those chosen columns:

    [Screenshot: the table designer, with “Description” among the columns chosen for display]

    If I enter a value into that Description field for a column, check out what happens:

    [Screenshot: the DDL generated after entering a Description value]

    SSDT creates the DDL that creates an extended property named MS_Description on the column in question. You can of course create your own extended properties as well but unfortunately only extended properties named MS_Description get surfaced in the UI (and even then only for columns). If you wish to create your own then simply declare them with the same kind of DDL (sketched below), separating them with GO if you have more than one defined in the same script.
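    For reference, the generated DDL looks something like the following (a reconstruction rather than a verbatim copy; the table and column names are illustrative):

    EXECUTE sp_addextendedproperty
        @name = N'MS_Description',
        @value = N'The name of the product',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'Product',
        @level2type = N'COLUMN', @level2name = N'Name';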

    One nice aspect of this support is that SSDT treats extended properties as schema rather than data, hence if I change the name of the column like so:

    [Screenshot: renaming the referenced column]

    then an error appears, along with a red squiggly, informing you of an invalid reference in the extended property definition. Note that refactoring is supported for extended properties, so if you change the name of a referenced object using refactor rename then the extended property will get updated accordingly.

    At publish time SSDT will take care of creating the extended property if it doesn’t exist or modifying it if it does, just like any other schema object. Here’s the output from publishing my project to LocalDB:

    [Screenshots: output from publishing the project to LocalDB, showing the extended property being created]

    OK, that’s pretty much everything there is to know about extended property support in SSDT. Any questions? Let me know.

    @Jamiet

    If you want to learn more about SSDT then come along to my training course Introduction to SQL Server Data Tools in association with Technitrain on 23rd/24th February 2015 in London. There is a discount if you register before the end of 2014.

  • Table designer (10 days of SSDT – Day 4)

    SQL Server Data Tools (SSDT) was released in Spring 2012 and I have been using it to build SQL Server databases nearly every day since then. In that time I’ve learnt a lot about how to make best use of the tool and I want to share that experience in a new series of blog posts called “10 days of SSDT”. I shall be publishing a different blog post every day for 10 days, each day revealing something that you may not know about SSDT. I hope you enjoy the series and contribute via the comments!

    Every database contains tables, so not surprisingly one of the most fundamental features within SSDT is the table designer, which I think of as being split into three areas: Script, Columns & Child objects:

    [Screenshot: the table designer’s Script, Columns and Child objects areas]

    Each table definition is stored as DDL in a text file with a .sql suffix. SSDT understands when a file contains a table definition, hence opening such a file displays the table designer by default (you do have the option to right-click on a file and just show the raw script). The DDL for the table is shown in the Script section of the table designer while the Columns and Child objects sections are a graphical representation of that DDL. The significance of having a graphical UI over the script is that the two stay in sync with each other; any change you make in one will get reflected in the other. For example, if I were to check the ‘Allow Nulls’ checkbox for the [Id] field, the NOT NULL in the script would immediately change to NULL. If I define an index in my script then that will appear under Indexes in the Child objects section.
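    To make that concrete, here is a hedged sketch of a table script (the table, column and index names are illustrative) whose index would show up under Indexes in the Child objects section:

    CREATE TABLE [dbo].[Product]
    (
        [Id]   INT         NOT NULL,
        [Name] VARCHAR(50) NULL
    );
    GO
    CREATE NONCLUSTERED INDEX [IX_Product_Name]
        ON [dbo].[Product] ([Name]);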

    It’s intuitive to think that the Table Designer displays DDL from only one file but actually that’s not the case; it’s a lot smarter than that. Take the following example where I have a table defined in one file (Product.table.sql) and the primary key for that table defined in another file (Product.PrimaryKey.sql):

    [Screenshot: the table designer combining Product.table.sql and Product.PrimaryKey.sql]

    Notice how the primary key [pkdboProduct] is still displayed in the Child objects section even though it is defined in another file. Notice also how I can choose which file the script section displays:

    [Screenshot: choosing which file the script section displays]
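    A hedged sketch of what such a pair of files might contain (the column definitions are illustrative; the constraint name matches the one shown above):

    -- Product.table.sql
    CREATE TABLE [dbo].[Product]
    (
        [Id]   INT         NOT NULL,
        [Name] VARCHAR(50) NULL
    );

    -- Product.PrimaryKey.sql
    ALTER TABLE [dbo].[Product]
        ADD CONSTRAINT [pkdboProduct] PRIMARY KEY CLUSTERED ([Id]);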

    That’s the basic functionality of the Table Designer.

    @Jamiet

    If you want to learn more about SSDT then come along to my training course Introduction to SQL Server Data Tools in association with Technitrain on 23rd/24th February 2015 in London. There is a discount if you register before the end of 2014.

  • What is a project? (10 days of SSDT – Day 3)

    SQL Server Data Tools (SSDT) was released in Spring 2012 and I have been using it to build SQL Server databases nearly every day since then. In that time I’ve learnt a lot about how to make best use of the tool and I want to share that experience in a new series of blog posts called “10 days of SSDT”. I shall be publishing a different blog post every day for 10 days, each day revealing something that you may not know about SSDT. I hope you enjoy the series and contribute via the comments!

    In Day 1 – Online and offline I explained how the offline (aka disconnected) development experience in SSDT makes use of the Solution Explorer pane inside Visual Studio. In the screenshot below I show one Visual Studio 2012 solution containing two projects, both of which are SSDT projects:

    [Screenshot: a Visual Studio 2012 solution containing two SSDT projects]

    An SSDT project pertains to a SQL Server database that either already exists or that you may want to create in the future. It is a container for lots of DDL statements that define all of the objects that do/will exist in that database. That’s an important point so I’ll state it again: the project contains DDL statements. It does not contain procedural code that defines how to create objects; for example, you are not going to write anything like this in an SSDT project:

    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = 'Customer')
    BEGIN
       CREATE TABLE Customer(
          CustomerId INT
       ,  CustomerName VARCHAR(50)
       )
       PRINT 'Created table Customer';
    END

    In an SSDT project you only define DDL, so for the table defined above you only need the CREATE TABLE statement:

    CREATE TABLE Customer(
       CustomerId INT
    ,  CustomerName VARCHAR(50)
    )

    This is a very important point regarding the declarative nature of SSDT: you declare what the database should look like, and at deployment time SSDT works out what needs to be done to get there.

    You are free to organise your DDL scripts within an SSDT project however you like. For a very simple database you may just want to place all your scripts under the root of the project; for something more complex you may want to come up with a folder structure that reflects how you like to organise your code (a folder per schema is a common convention). In the following screenshot you can see that I have shown both approaches:

    [Screenshot: two projects illustrating both folder-organisation approaches]

    Each SSDT project contains a series of properties that govern the behaviour of the SQL Server database within it. These properties are accessed by selecting a project within Solution Explorer, right-clicking and selecting Properties (or pressing ALT+Enter), then from the resulting properties page selecting “Database Settings”.

    [Screenshot: the Database Settings dialog]

    This is where you should set pertinent properties of your database such as collation, various SET options, and many other things.

    So in short, an SSDT project pertains to a SQL Server database and all the objects within it. Later we’ll see how SSDT turns a project into an actual database via a process called publishing. As always, if you have any questions please put them in the comments section below.

    @Jamiet

    If you want to learn more about SSDT then come along to my training course Introduction to SQL Server Data Tools in association with Technitrain on 23rd/24th February 2015 in London. There is a discount if you register before the end of 2014.

  • Day 2 – DAC Framework (10 days of SSDT)

    SQL Server Data Tools (SSDT) was released in Spring 2012 and I have been using it to build SQL Server databases nearly every day since then. In that time I’ve learnt a lot about how to make best use of the tool and I want to share that experience in a new series of blog posts called “10 days of SSDT”. I shall be publishing a different blog post every day for 10 days, each day revealing something that you may not know about SSDT. I hope you enjoy the series and contribute via the comments!

    When you install SSDT you also install something called the Data Tier Application Framework, or DAC Framework for short (sometimes also further shortened to DacFx). You can see DacFx in your list of installed programs:

    [Screenshot: DacFx in the list of installed programs]

    DacFx is largely invisible to you but is vital if you are using SSDT. This is the software component that contains the smarts for deploying the code that you build using SSDT. It gets installed along with SSDT but also when you install SQL Server itself. In fact, if you have ever used a dacpac (more on those later) then you have used DacFx. One important component of DacFx is a command-line tool called SqlPackage.exe, which enables you to carry out many SSDT functions from the command line; chiefly, it allows you to deploy and manage dacpacs.
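    For example, publishing a dacpac from the command line might look something like this (a hedged sketch; the file, server and database names are hypothetical, but /Action, /SourceFile, /TargetServerName and /TargetDatabaseName are standard SqlPackage.exe parameters):

    SqlPackage.exe /Action:Publish ^
        /SourceFile:"MyDatabase.dacpac" ^
        /TargetServerName:"(localdb)\Projects" ^
        /TargetDatabaseName:"MyDatabase"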

    There’s not much to know about DacFx except that it exists and what it exists for. Know, though, that it is a vital component of SSDT and that much of the “smarts” are contained within it.

    @Jamiet

    If you want to learn more about SSDT then come along to my training course Introduction to SQL Server Data Tools in association with Technitrain on 23rd/24th February 2015 in London. There is a discount if you register before the end of 2014.

  • Azure Data Factory (ADF) leaks – Microsoft’s ETL-in-the-cloud offering is coming

    I wouldn’t normally put up a blog post simply to draw attention to someone else’s blog post but in this case, given my heritage with Microsoft’s existing ETL tool (SSIS) and my long interest in all things to do with cloud and data integration I thought this was worth mentioning.

    Mary-Jo Foley has posted Microsoft readies new Azure Data Factory service in which she details a forthcoming service that Microsoft are bringing out on their Azure platform:

    That service, known as Azure Data Factory, is meant to allow users to consolidate a number of Microsoft Azure services, along with select third-party services, into a more easily-managed analytics and data pipeline, according to sources who requested anonymity.

    That is what I would refer to as a cloud-based ETL tool, the like of which I have been speculating about for some time. I’m looking forward to playing with this for real.

    @Jamiet 

    P.S. To whom it may concern: it wasn’t me that leaked it, OK? :)

  • Day 1 – Online and offline (10 days of SSDT)

    SQL Server Data Tools (SSDT) was released in Spring 2012 and I have been using it to build SQL Server databases nearly every day since then. In that time I’ve learnt a lot about how to make best use of the tool and I want to share that experience in a new series of blog posts called “10 days of SSDT”. I shall be publishing a different blog post every day for 10 days, each day revealing something that you may not know about SSDT. I hope you enjoy the series and contribute via the comments!

    It’s probably only fair to point out that this was intended to be “30 days of SSDT” but I laboured over writing the first 10, and then my second daughter turned up and time has been somewhat at a premium since. Hence you’re getting the first 10. :) Hopefully one day I’ll find time to write the other 20!

    SSDT is a tool to aid with developing databases. It is not intended for administering databases and hence is not a replacement for SQL Server Management Studio (SSMS). Within that developer remit it supports two distinct ways of working, online and offline. Those two distinct usage scenarios are most obvious within SQL Server Object Explorer (or SSOX for short). SSOX is a pane within Visual Studio that is made visible from the View menu or by using the keyboard shortcut CTRL+\, CTRL+S, and it looks something like this:

    [Screenshot: the SQL Server Object Explorer pane]

    There are two top-level nodes:

    • SQL Server – For the online usage scenario. This node is a collection of SQL Server instances that can be expanded so you can examine what’s in them. I think of this node as a hybrid of the Object Explorer and Registered Servers panes in SSMS.
    • Projects – For the offline usage scenario. This node surfaces the SSDT projects in the currently open solution.

    Online

    The online usage scenario is quite simply for working with SQL Server databases that already exist. In the screenshot above notice that I have two SQL Server instances listed: (localdb)\Projects (more about that later in the series) and mhknbn2kdz.database.windows.net, which just happens to be the server that hosts AdventureWorks on Azure. From here I can browse through the objects on those servers and interrogate them just as I would in SSMS. Within SSDT/SSOX, however, there is one important difference: when I make changes to the database I am not affecting the database directly, I am affecting an in-memory model of the database. This approach has two important advantages:

    • I can view the impact of a change (e.g. if I change a column name will it break any views that reference it) without actually making the change
    • I can make many changes to the model and then push them into my actual database en masse and SSDT will work out the best way of doing that. I don’t have to write the code that deploys my code.

    Working online within Visual Studio is often referred to as connected development.

    Offline

    The offline usage scenario is all about writing new database “stuff” that you want to deploy. Ostensibly that doesn’t sound too different from the online scenario but the key here is that there is no requirement for a full-blown SQL Server instance to exist. In fact, you can download and install SSDT today (it’s free) and start building SQL Server databases without having to install SQL Server. Working offline within Visual Studio is often referred to as disconnected development.

    When working offline you interact with SSOX as I previously mentioned, but you will also find yourself using a pane inside Visual Studio called “Solution Explorer”. As anyone who has used Visual Studio will tell you, Solution Explorer is where your code lives, and that’s no different for SSDT. A solution is a collection of projects and each SSDT project pertains to a SQL Server database.

    [Screenshot: an SSDT project in Solution Explorer]

    Any questions so far? Please put them in the comments section below.

    @Jamiet

    If you want to learn more about SSDT then come along to my training course Introduction to SQL Server Data Tools in association with Technitrain on 23rd/24th February 2015 in London. There is a discount if you register before the end of 2014.

  • Create a BLOB in Azure BLOB Storage using the REST API and Powershell

    Have you ever wanted to use Powershell to call the Azure BLOB Storage API in order to create a new BLOB? You haven’t? What have you been doing all your life? (!!!)

    Well, if your life ever does reach the same giddy heights as mine and you find that you need this ability added to your personal armoury then this blog post should be a massive help. Figuring it out yourself means a battle with one particularly gnarly MSDN article, Authentication for the Azure Storage Services; or you can just adapt the script linked to below!

    Behold, here is the “Hello World” of using Powershell to create a BLOB in Azure BLOB Storage using the REST API. Take the script, add your storage account name and storage key where required, and you’re good to go! I’ve made the script available as a Github Gist: https://gist.github.com/jamiekt/2efdd38c1b26d78cc6cd. For those that like to see pretty colour coding, here’s a screenshot:

    [Screenshot: the colour-coded Powershell script]

    One tip – I encountered an issue where the script file had been saved with a Byte Order Mark (BOM), which was causing an error due to an incorrect Content-Length header. If you have this problem then Notepad++ can remove the BOM for you:

    [Screenshot: removing the BOM in Notepad++]

    Thank you to Gaurav Mantri for his massive massive help with this.

    @Jamiet

  • Updating hadoop user password for HDInsight Emulator

    I’ve been doing a lot of work with Hadoop recently, specifically Microsoft’s Hadoop distribution, HDInsight. Microsoft have an HDInsight Emulator available that makes it possible to run HDInsight on a Windows workstation, and on the 29th August 2014 they released a new version of the Emulator that includes the recently released HBase feature (get the emulator from http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT).

    After I installed it I noticed that none of the services associated with the emulator had started, and this was due to a logon failure:

    [Screenshot: emulator services in a stopped state following a logon failure]

    All of those services are configured to start as user .\hadoop:

    [Screenshot: the services configured to log on as .\hadoop]

    I happen to be using Windows 8 and I’ll freely admit that I don’t have the foggiest idea how Windows 8 messes with users and groups because user “hadoop” is not visible in “Control Panel\User Accounts and Family Safety\User Accounts\Manage Accounts”. However, it is possible to discover information from the command line using “net user hadoop”:

    [Screenshot: output of “net user hadoop” showing the password expiry date]

    Today is 3rd September 2014; notice that the password expired nearly five months ago (which I suspect was shortly after I first installed the emulator). Personally I think that, given this is only for use in an emulator, there is no need for the password to ever expire, and I’ve fed that back to Microsoft.
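    In the meantime you can stop the password expiring again yourself. A hedged one-liner (run from an elevated command prompt; wmic is standard Windows tooling, and the account name matches the one above):

    wmic useraccount where "Name='hadoop'" set PasswordExpires=FALSE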

    Changing the password itself is easy when you know how (though it took me a while to figure it out – hence this blog post): run a command prompt as administrator and type

    “net user hadoop *”

    You will be prompted to enter a new password:

    [Screenshot: the prompt to enter a new password]

    After which time the services will start successfully.

    [Screenshot: the services started successfully]

    Something to be aware of!

    @Jamiet

  • Time your commands [Powershell nugget]

    When using Powershell I’ve always found it useful to be able to time how long my commands/scripts take to run; invaluable for long-running scripts. I used to do something similar to this in all my scripts:

    [Screenshot: crude manual timing code]

    which is rather crude. It also has the disadvantage that I have to remember to actually put it into my scripts. What I’d prefer is a way for Powershell to always tell me how long my commands/scripts take to run, and thanks to my old Conchango colleague Stuart Preston I now have one. Using the code that Stuart has put up in a gist, Powershell will output this information every time a command is executed:

    [Screenshot: elapsed time displayed after each command]

    Note that the current location is still present in the title bar (Stuart’s code does that too) and time elapsed since the last command is displayed. Perfect! Simply put Stuart’s code into your profile and you’re done.

    A simple little trick but one which I think is worth calling out in a blog post!

    @Jamiet

  • Don’t be a dinosaur. Use Calendar Tree!

    If one spends long enough in my company one will likely, eventually, have to listen to me bark on about subscribable calendars. I was banging on about them way back in 2009, I’ve cajoled SQLBits into providing one, provided one myself for the World Cup, and opined that they could be transformative for the delivery of BI. I believe subscribable calendars can change the world but I have never been good at elucidating why I think so; for that reason I always direct people to a blog post by Scott Adams (yes, the guy who draws Dilbert) entitled Calendar as Filter. In that blog post Scott writes:

    I think the family calendar is the organizing principle into which all external information should flow. I want the kids' school schedules for sports and plays and even lunch choices to automatically flow into the home calendar.

    Everything you do has a time dimension. If you are looking for a new home, the open houses are on certain dates, and certain houses that fit your needs are open at certain times. If you are shopping for some particular good, you often need to know the store hours. Your calendar needs to know your shopping list and preferences so it can suggest good times to do certain things

    I think the biggest software revolution of the future is that the calendar will be the organizing filter for most of the information flowing into your life. You think you are bombarded with too much information every day, but in reality it is just the timing of the information that is wrong. Once the calendar becomes the organizing paradigm and filter, it won't seem as if there is so much.

    I wholly agree and hence was delighted to discover (via the Hanselminutes podcast) that Scott has a startup called CalendarTree.com whose raison d’etre is to solve this very problem. What better way to describe a Scott Adams startup than with a Scott Adams comic:

    [Image: a Scott Adams comic about Calendar Tree]

    I implore you to check out Calendar Tree and make the world a tiny bit better by using it to share any information that has a time dimension to it. Don’t be a dinosaur, use Calendar Tree!

    @Jamiet

  • Kill all the project files!

    Like many folks I’m a keen podcast listener and yesterday my commute was filled by listening to Scott Hunter being interviewed on .Net Rocks about the next version of ASP.Net. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.Net project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments):

    A file that lists files is really redundant when the OS already does this

    Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file; however, there will no longer be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary, rather than the current approach where the project file is a list of inclusions.

    On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it’s in the directory, it’s part of the project. Simple.

    Now I’m not an ASP.net developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below the project has Miscellaneous and Connection Managers folders but no such folders exist on the file system:

    [Screenshot: an SSIS project in Solution Explorer showing faux Miscellaneous and Connection Managers folders]

    This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work, and quite frankly it makes me feel like a second-class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it!

    Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file?

    In the case of SSDT the .sqlproj actually does a whole lot more than simply list files because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…) but I see no reason why the convention-over-configuration approach can’t help us there either. Want to know which is the post-deployment script? Well, it’s the one called Post-DeploymentScript.sql! Simple!

    So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me?

    @Jamiet

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.


    A SQL Server Data Tools (SSDT) project incorporates a build process that will, basically, parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process it will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project.

    The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don’t like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn’t even need a GUI. The phrase “headless build server” is often used to describe a build server that doesn’t contain any heavyweight GUI tools such as Visual Studio, and it is a desirable state for a build server.


    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT:

    This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE.
    http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA and folks like myself have long wished for Microsoft to support headless builds by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane of the SSDT product team:

    The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine.

    I, like many others, would rather not have to install full-blown Visual Studio and so I asked:

    Is there any chance you'll ever support any of these scenarios:

    • Installation of all build/deploy pre-requisites without installing the VS shell?
    • TFS shipping with all of the pre-requisites for doing SSDT project build/deploys
    • 3rd party build servers (e.g. TeamCity) shipping with all of the requisites for doing SSDT project build/deploys

    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that?

    Kevin replied again:

    The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc.

    Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc.

    What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers we've no problem with that.

    Again, here’s the link to the thread; it’s worth reading in its entirety if this is something that interests you.

    So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead.

    @Jamiet

  • POST rows to HBase REST API on HDInsight using Powershell

    I’ve been very quiet on the blogging front of late. There are a few reasons for that but one of the main ones is that I’ve spent the past three months on a new gig immersing myself in Hadoop, primarily in Microsoft’s Hadoop offering called HDInsight. I’ve got a tonne of learnings that I want to share at some point but in this blog post I’ll start with a handy little script that I put together yesterday.

    I’m using a tool in the Hadoop ecosystem called HBase which was made available on HDInsight in preview form about a month ago. HBase is a NoSQL solution intended to provide very very fast access to data and my colleagues and I think it might be well suited to a problem we’re currently architecting a solution for. In order to evaluate HBase we wanted to shove lots of meaningless data into it, and in the world of HDInsight the means of communicating with your HDInsight cluster is Powershell. Hence I’ve written a Powershell script that will use HBase’s REST API to create a table and insert random data into it. If you’ve googled your way to this post then you’re likely already familiar with Hadoop, HDInsight, REST, Powershell, HBase, column families, cells, rowkeys and other associated jargon so I won’t cover any of those; what is important is the format of the XML payload that has to get POSTed/PUTted up to the REST API. That payload looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <CellSet>
      <Row key="myrowkey">
        <Cell column="columnfamily:column1">somevalue</Cell>
        <Cell column="columnfamily:column2">anothervalue</Cell>
      </Row>
    </CellSet>

    The payload can contain as many cells as you like. When the payload gets POSTed/PUTted, the row key, column names and values therein need to be base64-encoded, but don’t worry, the script I’m sharing here takes care of all that for you. The script will also create the table for you. The data that gets inserted is totally meaningless and is also identical for each row; modifying the script to insert something meaningful is an exercise for the reader.
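    For illustration, here is the same payload as it actually goes over the wire, with each part base64-encoded (the encoded strings below should simply be the base64 renderings of the plain values shown above):

    <?xml version="1.0" encoding="UTF-8"?>
    <CellSet>
      <Row key="bXlyb3drZXk=">  <!-- myrowkey -->
        <Cell column="Y29sdW1uZmFtaWx5OmNvbHVtbjE=">c29tZXZhbHVl</Cell>      <!-- columnfamily:column1 / somevalue -->
        <Cell column="Y29sdW1uZmFtaWx5OmNvbHVtbjI=">YW5vdGhlcnZhbHVl</Cell>  <!-- columnfamily:column2 / anothervalue -->
      </Row>
    </CellSet>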

    Another nicety of this script is that it uses Invoke-RestMethod, which is built into Powershell 4, so you don’t need to install any other Powershell modules, nor anything Azure-specific. If you have Powershell 4 you’re good to go!

    Embedding code on this blog site is ugly so I’ve made the script available on my OneDrive: CreateHBaseTableAndPopulateWithData.ps1. The screenshot below gives you an idea of what’s going on.

    Hope this helps!

    @Jamiet

    UPDATE: I’ve posted a newer script, CreateHBaseTableAndPopulateWithDataQuickly.ps1, which loads data much more quickly. This one sends multiple rows in each POST and hence I was able to insert 13.97m rows in 3 hours and 37 minutes which, given latency to the datacentre and that this was over a RESTful API, isn’t too bad. The previous version of the script did singleton inserts and hence would have taken weeks to insert that much data.

    The number of POSTs and the number of rows in each POST are configurable.

    [Screenshot: output from CreateHBaseTableAndPopulateWithDataQuickly.ps1]

  • Did you know documentation is built-in to usp_ssiscatalog?

    I am still working apace on updates to my open source project SSISReportingPack; specifically, I am working on improvements to usp_ssiscatalog, a stored procedure that eases the querying and exploration of the data in the SSIS Catalog.

    In this blog post I want to share a titbit of information about usp_ssiscatalog: all of the actions that you can take when you execute usp_ssiscatalog are documented within the stored procedure itself. For example, if you simply execute

    EXEC usp_ssiscatalog @action='exec'

    in SSMS and then switch over to the Messages tab, you will see some information about the action:

    [Screenshot: documentation for the action displayed on the Messages tab]

    OK, that’s kinda cool. But what if you only want to see the documentation and don’t actually want any action to take place? Well, you can do that too, using the @show_docs_only parameter like so:

    EXEC dbo.usp_ssiscatalog @a='exec',@show_docs_only=1;

    That will only show the documentation. Wanna read all of the documentation? That’s simply:

    EXEC dbo.usp_ssiscatalog @a='exec',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='execs',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='configure',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_created',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_running',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_canceled',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_failed',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_pending',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_ended_unexpectedly',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_succeeded',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_stopping',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_completed',@show_docs_only=1;

    I hope that comes in useful for you sometime. Have fun exploring the documentation on usp_ssiscatalog. If you think the documentation can be improved please do let me know.

    @Jamiet

  • Cortana and Schema.org

    Some definitions:

    • Cortana – Microsoft’s digital personal assistant, competing with Apple’s Siri and Google Now.
      “She gets to know you by learning your interests over time. She looks out for you, providing proactive, useful recommendations. And Cortana keeps you closer to the people and things you care about most, by keeping track of all that matters.” - http://www.windowsphone.com/en-US/features-8-1#Cortana
    • Schema.org – “a collection of schemas that webmasters can use to markup HTML pages in ways recognized by major search providers, and that can also be used for structured data interoperability. Search engines rely on this markup to improve the display of search results” https://schema.org/.
      I think of schema.org as machine-readable metadata on a web page. This has existed for years in various guises (e.g. microformats); schema.org is a company-independent initiative to standardise those schemas.

    One of the interesting features that Cortana provides is the ability to parse your email in order to discover flight information and then keep you up to date with information about that flight. Upon reading the MSDN article Sending flight information to Microsoft Cortana with contextual awareness, it transpires that this feature is entirely dependent upon the email containing schema.org markup:

    Microsoft Cortana interprets schema.org markup in e-mails to extract airline flight reservation data
    http://msdn.microsoft.com/en-us/library/dn632191.aspx

    We all unknowingly make use of schema.org markup all the time (every time you use Google, for example; the share contract in Windows 8 & Windows Phone 8.1 can also leverage schema.org), however this is the first time I’ve been made aware of a user experience that is totally dependent on it.

    I find this fascinating. I’ve long had an interest in structured metadata on the web (I’ve been banging on about it in blog posts for the last 10 years) and now my mind is in overdrive thinking of other scenarios that could leverage this. Taking a look at the documentation shows that schema.org provides definitions for a multitude of things (in fact Thing is the base class from which everything else is derived): Event, Person, Place, Organization, Product and CreativeWork, to name a few.

    It’s not hard to envisage that Cortana might one day see mention of a music track in an email and offer the ability to play it for you, or see contact information and offer to add it to your contact list. Where this gets potentially even more interesting (to me, anyway) is when we consider that Cortana is already extensible by 3rd party apps – I foresee that in the future 3rd party apps will have the ability to subscribe to a particular schema and have Cortana notify them when it comes across an instance of that schema (imagine a Spotify app being informed that an email just arrived with information about a new music release and then offering to play it for you).

    Perhaps this 3rd party extensibility could work the other way too by presenting other data sources to Cortana. For example, if Twitter ever get round to implementing annotations then Cortana could potentially read metadata hidden within tweets as well as combing through emails.

    Expect that in the future Cortana will parse new data sources, find new types of information and allow more 3rd parties to act upon it. The potential for a degraded user experience, with notifications flashing in your face all the time, is a worry but frankly I don’t care; this is all fascinating stuff to a data geek like me.

    @Jamiet 

    P.S. Thank you to Savas Parastatidis from the Cortana dev team for highlighting Cortana’s use of schema.org.
