The Bit Bucket (Greg Low): IDisposable

Ramblings of Greg Low (SQL Server MVP, MCM and Microsoft RD) - SQL Down Under

  • Reliably Dropping a Database in a T-SQL Script is Too Hard

    Recently I've been working on a system where I had to create a test script that reliably recreated a database each time it was executed. I've done this many times before and thought I knew how, but I was wrong. It turns out that reliably dropping a database in a script that you run from SQL Server Management Studio (SSMS) is harder than it looks. (It's also harder than it should be.)

    In Books Online (BOL), the article for DROP DATABASE says:

    “You cannot drop a database currently being used. This means open for reading or writing by any user. To remove users from the database, use ALTER DATABASE to set the database to SINGLE_USER.”

    Based on this, I had been using the following style of script to drop and recreate a database:

    USE master;
    GO

    IF EXISTS(SELECT 1 FROM sys.databases WHERE name = N'Blah')
    BEGIN
        ALTER DATABASE Blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE Blah;
    END;
    GO

    CREATE DATABASE Blah ON

    ...

    However, what I’ve found is that when I execute this from SSMS, it doesn’t always work. Fairly randomly (and hard to reproduce), it fails with an error message telling me that the DROP failed because the database was in use. But isn’t that the whole purpose of setting it to single user?

    The problem seems to be that although the database gets set to single user, another connection can be made to it before the DROP DATABASE statement executes. Setting the database to SINGLE_USER isn't enough, because the script's current database context is master, not the database in question: the script's own connection isn't the one occupying the single available slot, so any other connection can grab it first.

    A number of users and fellow MVPs have told me that they've experienced the same issue. What we suspect is causing it is the IntelliSense system within SSMS connecting to the database to check the syntax of objects used later in the same script. The problem only really seems to happen in SSMS, but it could happen elsewhere.

    A great suggestion from Rob Farley was to set the database OFFLINE instead of SINGLE_USER. While that would stop the other connection, the problem is that when you DROP a database that is OFFLINE, the files for the database don't get deleted. You'd then need some additional (nasty) script to delete the files after the database was dropped.
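    To see why, here's a minimal sketch of the OFFLINE variant (same hypothetical database name as above):

    USE master;
    GO

    IF EXISTS(SELECT 1 FROM sys.databases WHERE name = N'Blah')
    BEGIN
        -- OFFLINE kills existing connections and, unlike SINGLE_USER,
        -- leaves no slot for another connection to sneak in before the DROP.
        ALTER DATABASE Blah SET OFFLINE WITH ROLLBACK IMMEDIATE;
        DROP DATABASE Blah;
    END;
    GO

    -- Because the database was OFFLINE when it was dropped, its data and log
    -- files remain on disk and would need a separate (nasty) cleanup step,
    -- such as xp_cmdshell or an external script.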

    What is really needed to get around this race condition is:

    DROP DATABASE WITH ROLLBACK IMMEDIATE;

    If you agree, I’ve created a Connect item here that you can vote on:

    https://connect.microsoft.com/SQLServer/feedback/details/813014/cannot-reliably-drop-a-database-in-t-sql-script

  • SQL Server 2014 Hybrid: Storing data files in Azure storage - Bizarre or not?

    In the sessions I attended in Redmond yesterday, I was interested to see the presenter discussing the idea of having individual database files in the cloud as some sort of bizarre option that's now available in SQL Server 2014. I don't see it as bizarre at all. In fact, I see two distinct use cases that make complete sense to me:

    Overcoming Drive Limitations in Azure-based SQL Virtual Machines

    One of the current limitations of running SQL Server in an Azure virtual machine is that you can only attach two additional data drives per CPU core, and each of those drives can be up to 1 TB in size. For example, if you spin up a two-core VM, you can attach four data drives.

    By placing database files directly in Azure storage (addressed by URL) rather than on attached drives, you can overcome this limitation. In a recent whitepaper that I was a reviewer on, we discussed the IOPS and size limitations of Azure storage based drives, both at the individual drive level and at the storage account level. Please read that whitepaper for more detail on those characteristics, which remain unchanged. A core takeaway, though, is that you shouldn't mix storage accounts for the files that make up a single database, so that write ordering is maintained. You could, however, use different storage accounts (to overcome single-account limits) for different databases.
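    As a rough sketch of what this looks like, here's a database created with its files held directly in Azure storage. The storage account, container, and credential names are hypothetical, the SAS secret is elided, and the exact credential mechanism may differ between CTP builds:

    USE master;
    GO

    -- A credential named for the container URL lets SQL Server
    -- authenticate to the storage account.
    CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<shared access signature token>';
    GO

    -- Data and log files are addressed by URL rather than by drive path.
    CREATE DATABASE Blah
    ON (NAME = Blah_data,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/Blah_data.mdf')
    LOG ON (NAME = Blah_log,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/Blah_log.ldf');
    GO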

     

    Tiered Storage for Partitioned Tables in On-Premises Databases

    I commonly implement partitioned tables for sliding window scenarios. For example, a customer might want to keep 7 years of data online at any time. If table partitioning isn't familiar to you, please read another whitepaper that I was a reviewer on. Even though it was created for SQL Server 2008, it's pretty much still spot on.

    One way that I use partitioned tables is to keep active data in the current partition, and older data in separate partitions, often one per month. This usually allows me to have the older data in read-only filegroups. That gives you a great situation where your application just sees a normal table, but most of the table is read-only. There are several advantages of this (a short sketch of the setup follows the list), some of which are:

    - While locks are still acquired on the read-only data, hints like NOLOCK are then safe to use on that data.

    - You won’t have a query accidentally overwrite your older data.

    - The older data could be stored on slower media.

    - The older data could have a higher compression rate. For example, you could use ROW compression for the active data, and PAGE compression for the older data.

    - SQL Server doesn’t need to run recovery on read-only filegroups.
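    As promised above, here's roughly how the older data gets locked down. Database, table, filegroup, and partition numbers are all hypothetical. Note the order: a partition can't be rebuilt once its filegroup is read-only, so apply the compression first:

    -- Rebuild the archive partition with PAGE compression while its
    -- filegroup is still writable (partition numbers are illustrative).
    ALTER TABLE dbo.Transactions REBUILD PARTITION = 1
        WITH (DATA_COMPRESSION = PAGE);
    GO

    -- Then mark the archive filegroup read-only (this requires exclusive
    -- access to the database).
    ALTER DATABASE Sales MODIFY FILEGROUP Archive2012 READ_ONLY;
    GO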

     

    Using data files in Azure storage could make really good sense for some of this older data. You could keep your old read-only data in Azure storage files and still have it available: your current data would be fast to access, while your older data would still be present, just slower to reach.

     

    I could see a situation where you might never need to remove your old data.

     

    One other question that comes up in this situation relates to how backups work. Clearly you don't want to be pulling all your data out of the cloud when you perform a backup, particularly if you are also going to back up to the cloud. However, piecemeal backups in SQL Server are perfect for this. If you just store your read-only filegroups and archive data in Azure storage files, you can safely perform a partial backup with the READ_WRITE_FILEGROUPS option.
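    A minimal sketch of such a backup (hypothetical database name and path), which simply skips the read-only archive filegroups:

    -- Back up only the read/write filegroups; the read-only archive
    -- filegroups held in Azure storage are excluded.
    BACKUP DATABASE Sales
    READ_WRITE_FILEGROUPS
    TO DISK = N'D:\Backups\Sales_rw.bak'
    WITH COMPRESSION;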

     

    All in all, I see that some of these new hybrid scenarios could be quite compelling, and not bizarre at all.

  • More updates to Azure: Which changes relate to SQL people?

    The Azure team keep rolling out improvements at an amazing pace. Scott Guthrie posted recently about the latest set of changes. Here are the ones that I think are most important for SQL people:

    Import/Export Hard Drives

    Even though Azure storage accounts can hold a very large amount of data, one of the big questions has always been how to get that data into the storage account in the first place. That question is now answered: the new Windows Azure Import/Export service lets you ship hard drives directly to the Azure team for uploading. The reverse is also available: if you need to export a large amount of data from a storage account, you can have it copied to a hard drive and the drive sent to you.

    Currently the service uses FedEx for transport. If you are concerned about the security of data in transit, BitLocker encryption is also supported in both directions.

    HDInsight Out of Preview Mode

    HDInsight has made its way out of preview and into general availability. It allows you to process large volumes of data (in Azure Storage) using Apache Hadoop tools (including Pig and Hive).

    HDInsight gives you a way of rapidly (and economically) spinning up a Hadoop cluster when you need one, and shutting it down again when you are done.

    Virtual Machine Gallery

    As the number of virtual machine images has continued to grow, it’s been getting harder and harder to find the image that you want when creating a new virtual machine. UI changes have been made to make this easier.

    One of the discussions that the team has been having lately is around how long images should be kept. For example, once SQL Server 2014 ships, should images of SQL Server 2008 R2 or SQL Server 2008 also be kept? I'd like to see them all kept, but I'm not sure how much ongoing maintenance that creates for the team. Also, if they are all kept, should they only keep the version with the operating system that was most current at the time? If anyone has strong opinions on this, I'd love to hear them and I'll pass them on.

    General Changes

    You can now configure endpoint security using PowerShell.

    You can now configure alerts related to billing. You can get an email when your bill exceeds a given threshold. One thing I'd really like to see (which isn't there yet) is the ability to simply have your bill emailed to you each month, even when it's prepaid. Does anyone else feel the need for that?

    For further details on areas that I haven’t mentioned, you’ll find Scott’s post here: http://weblogs.asp.net/scottgu/archive/2013/11/04/windows-azure-import-export-hard-drives-vm-acls-web-sockets-remote-debugging-continuous-delivery-new-relic-billing-alerts-and-more.aspx

  • TechNet Live: What's new in SQL Server 2014

    Hi Folks,

    If you want to get your head around the core improvements coming in the SQL Server 2014 wave, I'm presenting a TechNet Live session on November 29th (Australian time).

    Registration is here: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032570709&Culture=en-AU&community=0

    I'd love to see you online. 

  • SDU Podcast #61: Dr David DeWitt

    I had the distinct honour (honor) this week of recording a new SQL Down Under podcast with Dr David DeWitt from the Jim Gray Systems Lab at Microsoft. Anyone who has been to a PASS Summit over the last few years will know that David seems to have single-handedly converted morning keynotes into must-attend events.

    In this podcast, David explains his role, gives a philosophical view on where we're at in the database industry and discusses core enhancements in SQL Server 2014.

    You'll find it here: http://www.sqldownunder.com/Podcasts

    Enjoy! 

  • Webinar: SQL Server Compression Technologies

    A while back, we moved most of our monthly SQL PASS meetings to a virtual format, as it makes it easier for a lot of people to attend.

    Tomorrow (lunchtime, Melbourne time), I'm delivering another one on compression technologies in SQL Server. In this session, we'll take a tour through vardecimal in 2005, then row and page compression in 2008, then xVelocity-based compression in 2012, and finally what 2014 offers in this regard.

    We have a limit on the number of attendees so please don't register if you can't make it but if you can, we'd love to see you online.

    https://www4.gotomeeting.com/register/163499127

  • Latest Azure updates related to SQL (and Oracle)

    Azure updates continue to come thick and fast.

    Scott Guthrie has posted again today about a number of enhancements just released. From these, I think the following are the most interesting for SQL Server devotees:

    1. I posted last week about the new A5 size Azure VMs. These will be a good option for many SQL VMs, particularly those based on tabular data models.

    2. Microsoft continues its partnership with Oracle by providing VM templates with Oracle software preinstalled. This is by far the quickest way to try any of this software, particularly for SQL Server folk who just want to try something against an Oracle version. The supported versions are:

    • WebLogic Server 12c and 11g
    • Database 12c and 11g
    • Java Platform Standard Edition 6

    3. You can now also modify VMs that are stopped, for operations like adding disks.

    4. A single Azure subscription can now deal with multiple Active Directories. This is really helpful if you need to extend your on-premises AD to the cloud, particularly if you're starting to investigate the hybrid SQL Server scenarios such as using Azure for a DR site with Availability Group replicas.

    5. Subscription suspension no longer deletes virtual machines either.

    This is another great set of updates. I'm truly impressed by the speed that these are appearing at.

     

  • Auto-delete Azure SQL Database Servers with no databases

    In Windows Azure SQL Database, a database "server" is a logical concept that's used to keep track of connected databases. It does have a "master" database, but it's not like the on-premises equivalent; it's pretty much a container for logins.

    Generally there is no point provisioning a server that has no databases. Servers aren't charged for separately; only the user databases are charged for.

    The Azure team has now made a change that means that if you have a server that hasn't had any user databases within the last 90 days, it will be automatically deleted. It's important to be aware that this can happen. The only thing that would be lost would be any pre-configured logins. You would need to recreate them when you later decide to provision a user database.
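    If that does happen to you, recreating a login on a newly provisioned server is straightforward. A one-line sketch with a hypothetical login name:

    -- Run against the logical master database of the new server.
    CREATE LOGIN app_user WITH PASSWORD = '<a strong password>';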

  • New smaller memory-intensive VM size in Azure

    Previously, when you selected a VM size in Azure, you had to choose a large size to get a lot of memory.

    Now a new A5 size is available that gives you 14 GB of memory on a VM with only two virtual cores, rather than only on the 8-core servers that were available before. The price is about two-thirds that of the 8-core version.

    You can find details here:

    http://www.windowsazure.com/en-us/pricing/details/virtual-machines/

    This might be a good option for memory-intensive applications that aren't CPU-intensive.

  • TechEd Oz Session Now Online - Reminder about Azure benefits for MSDN

    I previously posted about enabling your Azure benefits before the end of this month if you have an MSDN subscription. Doing so doesn't cost anything extra, and it adds a chunk of Azure credit to your account each month.

    If you have an MSDN subscription and you haven't done it yet, don't put it off. Follow the link and activate your benefits (plus you might win an Aston Martin just for doing so):

    http://bit.ly/140upRt 

    And if you're wondering about how to configure Azure VMs for SQL Server, my TechEd Oz session is now available on-demand from Channel9:

    http://channel9.msdn.com/Events/TechEd/Australia/2013/AZR323

     

  • Source code control - SQL Server and Oracle

    One of the things that I find done very poorly at most SQL Server sites that I visit is source code control for database objects.

    You can connect SQL Server Management Studio with source code control systems, to make it easier to manage scripts, etc. One way of doing this is to configure an SCCI (source code control interface) provider. You can get the one that works with SSMS here:

    http://visualstudiogallery.msdn.microsoft.com/bce06506-be38-47a1-9f29-d3937d3d88d6/view/Reviews/

    Once you've installed that, you'll find an "Add to source control" option appearing when you create new script projects. If it doesn't seem to be enabled, see this article:

    http://technet.microsoft.com/en-us/library/ms173550(v=SQL.105).aspx

    You'll also need a TFS (Team Foundation Server) server or an SVN server. A really good option now is the online Team Foundation Service offering, which is also free for up to 5 users:

    http://tfs.visualstudio.com/en-us/pricing/tfs-information 

    Another option to consider, particularly if you also work with other database engines, is the third-party tools. I've previously mentioned the Red Gate source control tools. You'll find info on them here:

    http://www.red-gate.com/products/sql-development/sql-source-control/

    A key advantage of these is that they are pretty easy to use and work with a wider variety of source code control engines. As well as TFS and SVN, they work with Git, Mercurial, Vault, Perforce and others.

    Today they have announced updated support for Oracle:

    http://www.red-gate.com/products/oracle-development/source-control-for-oracle/

    It works with SVN and TFS.

    Either way, there are lots of offerings out there now. It's important that you start to investigate one of them if you haven't already done so.

  • Choosing between tabular and multidimensional models in Analysis Services

    At TechEd Australia last week, I presented a number of sessions.

    The first of these is now available online. It was a session giving my thoughts on how you'd choose between staying with multidimensional models in SSAS and moving to the newer tabular models.

    Enjoy!

    http://channel9.msdn.com/Events/TechEd/Australia/2013/DBI212

     

  • Azure is the quickest way to learn SQL Server 2014 and Windows Server 2012 R2

    For the DayZero event at TechEd Australia last week, I presented a session on SQL Server 2014. Lots of people were keen to try SQL Server 2014 but said they didn't have much time, or didn't have the infrastructure to let them do it.

    Either way, this is another example where Azure is really useful. You can spin up an Azure VM that has SQL Server 2014 CTP1 preinstalled on Windows Server. You can even choose a template with Windows Server 2012 R2 if you also want exposure to that operating system. No doubt an image with CTP2 will become available when that preview is ready.

    Creating these VMs is really easy, takes only a few minutes, and is the easiest way to learn to use these products.

    There are free trials available within Azure. And as I mentioned in a previous post, if you have an MSDN subscription, you really should activate the Azure benefits, especially before the end of the month while they're giving away an Aston Martin to someone who does so.

    Regardless, activating the benefits gives you a bunch of credit for use with Azure. That credit goes a long way, because you now aren't charged for VMs that are stopped. (Note that stopped isn't the same as shut down.)

    Details on activation and of the competition are here:

    http://bit.ly/140upRt

     

  • Are you using the Azure benefits in your MSDN subscription? Like an Aston Martin?

    Most developers and DBAs that I come across have some level of access to an MSDN subscription, yet I'm surprised by how many have not yet activated the Azure benefits associated with those subscriptions.

    If you want to stay relevant within your organisation, it's important to get your head around Azure, and the best part is that you can do much of it at no additional cost.

    All you need to do is to activate your benefits. You'll find details here:

    http://bit.ly/140upRt

    And at the moment, there's a chance to win an Aston Martin just for doing so. James Bond fans could be "shaken and stirred" :-)

  • Listeners: the missing link for SQL Server Availability Groups in Azure VMs

    One of the topics that I'm covering in one of my sessions this year at TechEd Australia is around SQL Database and SQL Server in Azure VMs for DBAs. In that session, I'm planning to spend a while on hybrid on-premises / cloud scenarios.

    Availability groups were introduced in SQL Server 2012. While it's unlikely that you'd use a synchronous replica in an Azure VM (unless the primary was also in an Azure VM), hosting an asynchronous replica in an Azure VM makes a lot of sense for disaster recovery scenarios where latency isn't an issue.

    Availability group listeners provide a way for client applications to be transparently redirected during a failover. The challenge with hosting an availability group replica in an Azure VM has been that a listener requires an additional IP address, which wasn't available in Azure. So, previously, you could only configure SQL Server in an Azure VM as a database-mirroring-style replica.

    Amongst a raft of other great enhancements in Scott Guthrie's announcement today is the really welcome news that the missing link for this scenario has now been provided: availability group listeners are fully supported when using SQL Server hosted in an Azure VM. This is great news.
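    For reference, adding a listener to an existing availability group is a single statement. This is just a sketch with hypothetical names; in an Azure VM, the IP address used is the one associated with the cloud service's load-balanced endpoint:

    -- Clients connect to the listener name and are transparently
    -- redirected to the current primary on failover.
    ALTER AVAILABILITY GROUP SalesAG
    ADD LISTENER N'SalesListener' (
        WITH IP ((N'10.0.0.50', N'255.255.255.0')),
        PORT = 1433
    );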
