
The Bit Bucket (Greg Low): IDisposable

Ramblings of Greg Low (SQL Server MVP, MCM and Microsoft RD) - SQL Down Under

  • Source code control - SQL Server and Oracle

    One of the things that I find very poorly done in most SQL Server sites that I visit is source code control for database objects.

    You can connect SQL Server Management Studio with source code control systems, to make it easier to manage scripts, etc. One way of doing this is to configure an SCCI (source code control interface) provider. You can get the one that works with SSMS here:

    http://visualstudiogallery.msdn.microsoft.com/bce06506-be38-47a1-9f29-d3937d3d88d6/view/Reviews/

    Once you've installed that, you'll find an "Add to source control" option appearing when you create new script projects. If it doesn't seem to be enabled, see this article:

    http://technet.microsoft.com/en-us/library/ms173550(v=SQL.105).aspx

    You'll also need a TFS (Team Foundation Server) or an SVN server. A really good option now is the Team Foundation Service online offering, which is free for up to 5 users:

    http://tfs.visualstudio.com/en-us/pricing/tfs-information 

    Another option to consider, particularly if you also work with other database engines, is the third-party tools. I've previously mentioned the Red Gate source code control tools. You'll find info on them here:

    http://www.red-gate.com/products/sql-development/sql-source-control/

    A key advantage of these is that they are pretty easy to use and work with a wider variety of source code control engines. As well as TFS and SVN, they work with Git, Mercurial, Vault, Perforce and others.

    Today they have announced updated support for Oracle:

    http://www.red-gate.com/products/oracle-development/source-control-for-oracle/

    It works with SVN and TFS.

    Either way, there are lots of offerings out there now. It's important that you start to investigate one of them if you haven't already done so.

  • Choosing between tabular and multidimensional models in Analysis Services

    At TechEd Australia last week, I presented a number of sessions.

    The first of these is now available online. It was a session on my thoughts on how you'd choose between staying with multidimensional models in SSAS and moving to the newer tabular models.

    Enjoy!

    http://channel9.msdn.com/Events/TechEd/Australia/2013/DBI212

     

  • Azure is the quickest way to learn SQL Server 2014 and Windows Server 2012 R2

    For the DayZero event at TechEd Australia last week, I presented a session on SQL Server 2014. Lots of people were keen to try SQL Server 2014 but said they didn't have much time, or didn't have the infrastructure to let them do it.

    Either way, this is another example where Azure is really useful. You can spin up an Azure VM that has SQL Server 2014 CTP1 preinstalled on Windows Server. You can even choose a template that has Windows Server 2012 R2 if you want to also get exposure to that operating system. No doubt, an image with CTP2 will become available when that preview is ready.

    Creating these VMs is really easy, takes only a few minutes, and is the easiest way to learn to use these products.

    There are free trials available within Azure. And as I mentioned in a previous post, if you have an MSDN subscription, you really should activate the Azure benefits, especially before the end of the month while they're giving away an Aston Martin to someone that does so.

    Regardless, activating the benefits gives you a bunch of credit for use with Azure. That credit goes a long way because you now aren't charged for VMs that are stopped. (Note that stopped isn't the same as shut down.)

    Details on activation and of the competition are here:

    http://bit.ly/140upRt

     

  • Are you using the Azure benefits in your MSDN subscription? Like an Aston Martin?

    Most developers and DBAs that I come across have some level of access to an MSDN subscription, yet I am surprised about how many have not yet activated the Azure benefits that are associated with those subscriptions.

    If you want to stay relevant within your organisation, it's important to get your head around Azure, and the best part is that you can do much of it at no additional cost.

    All you need to do is to activate your benefits. You'll find details here:

    http://bit.ly/140upRt

    And at the moment, there's a chance to win an Aston Martin just for doing so. James Bond fans could be "shaken and stirred" :-)

  • Listeners: the missing link for SQL Server Availability Groups in Azure VMs

    One of the topics that I'm covering in one of my sessions this year at TechEd Australia is around SQL Database and SQL Server in Azure VMs for DBAs. In that session, I'm planning to spend a while on hybrid on-premises / cloud scenarios.

    Availability groups were introduced in SQL Server 2012 and while it's unlikely that you'd use a synchronous replica in an Azure VM (unless the primary was also in an Azure VM), hosting an asynchronous replica in an Azure VM makes a lot of sense for disaster recovery scenarios where latency isn't an issue.

    Availability group listeners provide a method for client applications to be transparently redirected during failover occurrences. The challenge with hosting an availability group replica in an Azure VM has been that an availability group listener required an additional IP address which was not available. So, previously, you could only configure SQL Server in an Azure VM as a database mirroring type of replica.

    Amongst a raft of other great enhancements in the announcement from Scott Guthrie today is the really welcome news that the missing link for this scenario has now been provided. Availability group listeners are now fully supported when using SQL Server hosted in an Azure VM. This is great news.
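
    For reference, adding a listener to an existing availability group is a single T-SQL statement. This is just a minimal sketch: the group name, DNS name, IP address and port below are placeholders, and the Azure networking configuration for the listener still needs to be set up separately.

    -- Run on the primary replica. All names and addresses are placeholders.
    ALTER AVAILABILITY GROUP [SalesAG]
    ADD LISTENER 'SalesAGListener'
    (
        WITH IP (('10.0.0.25', '255.255.255.0')),
        PORT = 1433
    );

    -- Client applications then connect to the listener name rather than a
    -- specific replica, e.g. in a connection string:
    -- Server=tcp:SalesAGListener,1433;Database=Sales;MultiSubnetFailover=True;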

  • Report Manager: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel

    I had seen this issue before but found another cause for it today.

    In this case, the client was using SCCM (System Center Configuration Manager) and couldn't access reports from the reporting server. After resolving some other issues, I was left with "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel" when I tried to go to http://servername/reports.

    When I've seen this previously, it's been to do with the trust relationship between a pair of servers. However, I hadn't seen it come up before on a single server.

    When I checked the SSL bindings in Reporting Services Configuration Manager, in the Advanced tab, I found that no certificate was associated with the SSL configuration. When I tried to edit it, and choose the cert from the dropdown list, the certificate binding could not be applied. (It would be useful if it told you why at that point). That had me puzzled until I opened Report Manager in IE and viewed the certificate. The certificate had been generated onsite and had expired.

    Hope this helps someone. (Or helps me the next time I see it and can't recall what it was :-))

  • Do you find T-SQL scripts hard to read with all the square brackets?

    The T-SQL scripting tools (such as SQL Server Management Studio) provide two options for scripting names: either to quote them or to omit quotes.

    If you avoid things like spaces in object names, you can mostly get away without quoting i.e.

    Sales.Customers is just fine and doesn't need to be [Sales].[Customers].

    Even then, one problem that can arise is that a name you had used in your code could become a SQL reserved word. In that case, you either need to change every reference to it (painful), or quote it wherever it's used (also painful). So quoting by default is always a safe option. However, it makes the scripts much harder to read due to visual noise.

    What I feel is really needed is some way to determine if a name needs quoting i.e. a function such as:

    ISVALIDASOBJECTNAME()

    So for example:

    ISVALIDASOBJECTNAME('Sales') would return 1 but ISVALIDASOBJECTNAME('Sales Targets') would return 0.

    This functionality could then be used to extend the QUOTENAME function with an optional parameter that says:

    QUOTENAME('Sales Targets', 0) for "always quote" or QUOTENAME('Sales Targets', 1) for "quote only if needed". They could leave 0 as the default so the parameter is optional and the function still works unchanged when no parameter is supplied.

    That would then make it easy for the team to change SSMS to have a single setting for how you'd like scripts generated.

    If you agree with this, please vote here: https://connect.microsoft.com/SQLServer/feedback/details/796172/isvalidasobjectname-and-quotename-enhancement-to-clean-up-script-readability
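
    In the meantime, here's a rough sketch of the idea as a user-defined function. It only approximates the rules for regular identifiers and deliberately ignores reserved words, so treat it as illustrative rather than a complete implementation (the function name mirrors the proposal but is otherwise hypothetical):

    -- Approximate check only: first character must be a letter, underscore,
    -- @ or #, and remaining characters must be letters, digits, @, $, # or _.
    -- Reserved words are NOT checked here.
    CREATE FUNCTION dbo.IsValidAsObjectName (@Name sysname)
    RETURNS bit
    AS
    BEGIN
        RETURN CASE
                   WHEN @Name NOT LIKE '[A-Za-z_@#]%' THEN 0
                   WHEN @Name LIKE '%[^A-Za-z0-9@$#_]%' THEN 0
                   ELSE 1
               END;
    END;
    GO

    SELECT dbo.IsValidAsObjectName(N'Sales');          -- returns 1
    SELECT dbo.IsValidAsObjectName(N'Sales Targets');  -- returns 0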

     

  • Performance tuning of tabular data models in Analysis Services

    More and more practical information around working with tabular data models is starting to appear as more and more sites get deployed.

    At SQL Down Under, we've already helped quite a few customers move to tabular data models in Analysis Services and have started to collect quite a bit of information on what works well (and what doesn't) in terms of performance of these models. We've also been running a lot of training on tabular data models.

    It was great to see a whitepaper on the performance of these models released today.

    Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services was written by John Sirmon, Greg Galloway, Cindy Gross and Karan Gulati. You'll find it here:

     http://msdn.microsoft.com/en-us/library/dn393915.aspx

  • Automated backups for Windows Azure SQL Database

    One of the questions that I've often been asked is about how you can backup databases in Windows Azure SQL Database.

    What we have had access to is the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files, one for each of the tables in the database. The tables are exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.

    The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created.
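
    As a minimal sketch (the database names are placeholders), the copy and a progress check look like this when run while connected to the master database on the target server:

    -- Creates a transactionally-consistent copy of the source database.
    CREATE DATABASE Sales_Copy AS COPY OF Sales;

    -- The copy runs asynchronously; you can track its progress with:
    SELECT * FROM sys.dm_database_copies;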

    Previously, I had to automate this process myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, via a linked server configuration.

    Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.

    It's important to consider the cost impacts of this as well. You are charged for however many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.

    This is a much needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx

  • Should IT professionals learn to type? – Investing in yourself

    Why learn?

    I was listening today to one of Scott Hanselman’s awesome podcasts with Miguel de Icaza and during the show they discussed things that you really should invest in. One list that came up was a good bed, a good chair, and to learn to type. Most of us spend 1/3 of the day in a bed and 1/3 of the day in a chair, so they seem like no-brainers. Typing is an interesting addition to that list.

    Back when I was in school, in those sexist old days, typing wasn’t something that the boys learned. It was something that was taught to the girls, and mostly to those that weren’t heading in an academic direction. Up until about 1990 I was a pretty fast two-finger typist (probably more like a four-finger typist). I could cut code pretty quickly and I wasn’t too concerned about it. Occasionally though, I kept feeling that I could do much better if I learned to type. Eventually I convinced myself that if a sizeable percentage of the population could do it, surely I could learn.

    Is it hard to learn?

    At that time, some pretty decent typing programs for the PC had started to appear. I decided to try to learn to type and chose TypeQuick. It’s interesting (and good for them) to see that they still exist today. So many applications from those days have long since gone. I particularly liked the way that it constantly pushed you just a little bit faster than what you were achieving. I haven’t tried the program since then but I hope it’s just improved over time. But regardless, there are many of these types of programs around.

    But then the real problems began. Most of these types of programs assume that you can spend an hour a day learning to type, and in a few weeks you’ll be able to do so. The implicit assumption though is that’s the only typing you’ll be doing during that time. The hassle with this is that if you type for a living (by cutting code or interacting with IT systems), spending an hour a day doing it the right way and then the rest of the day doing it the wrong way isn’t a formula for success.

    I decided that I needed to break that impasse. Early one fine Sunday morning, I started the course and kept going until I had pretty much finished it, at around midnight. By then I could barely move my hands at all, but I had learned all the basics.

    What I hadn’t figured on though was that the hardest part was to come. During the next four weeks or so, I had tight deadlines that I knew I could meet by typing my old way, and yet I had to force myself to type correctly, even though it was at a fraction of the speed that I could do the wrong way.

    I did force myself to put up with the stress and after about four weeks I was back to about the same typing speed. The brilliant part is that I was then able to keep getting faster and faster. I’d probably now average about three or four times faster than what I could do in my heyday of four-finger typing.

    What difference does it make?

    Clearly if you are writing a bunch of code, you can do so much more productively but there are also many subtle improvements in what you achieve, not just how quickly:

    • You are much more likely to use clear (but often longer) names for objects
    • You are much more likely to write documentation while coding (Miguel noted that people who can’t type tend to always put off tasks like documentation and I have to say that I’ve seen the same)
    • You are much more prepared to throw away code and replace it with better quality code (I’ve noticed that people who don’t type are much more likely to cling onto their first attempts at any piece of code)
    • While it’s possibly an even more minor point, you will simply seem more professional to those that you work with. Professionals invest time in skills that matter in their profession. If you’re reading this, chances are that typing is in that category for you.

    I’m really surprised that typing isn’t considered a core skill for IT people today. Perhaps there will be a time in the future when it won’t matter but it does matter today. Curiously though, it’s often viewed as a less-important skill to master. But those who have done so will all tell you how important it is.

    When I worked at a university, I was pushing for students to learn typing in the first two weeks of their Computer Science degrees. The idea was almost mocked by most lecturers as typing wasn’t something that was taught by universities. Yet the same university struggled to have enough resources for students to use, and the resources they had were tied up by people that took forever to type anything. The real irony is that in most courses, there isn’t much other useful work that the students could do in the first week or two anyway.

    So, if you have made it to this point in the blog, and you can’t yet type, I hope that you’ll consider it.

  • SQL Server and Programming Frameworks

    I have days where I can’t decide if I’m frustrated or sad about how I see SQL Server being used by applications, or if I’m happy that this keeps us in ongoing work.

    Today I’ve been looking at a system that’s having performance issues. There are three key applications on the system. Each comes from a different vendor and when I look at how each one of them interacts with the server, it’s really frustrating. I’ve come to the conclusion that it’s mostly due to application frameworks that are being used. Here’s an example:

    System A that I’m working with (name hidden to protect the guilty) never just issues a query to SQL Server. What it does instead is:

    1. Issues a SET FMTONLY ON, then sends the command, then SET FMTONLY OFF
    2. Creates a static cursor based on the command
    3. Fetches from the cursor
    4. De-allocates the cursor

    This simply isn’t sensible. Apart from just slowing down the data access immensely, here are my issues with this:

    SET FMTONLY ON returns the columns and data types for every code path through the code that you submit to it. That works OK for simple commands but is fragile for anything else. Regardless, I can only presume that it’s performing step #1 as a way of creating an empty dataset/recordset/rowset in the programming language, without having to specify the columns and data types. This is concerning for multiple reasons. The first is that SET FMTONLY ON is now deprecated, but more importantly, it means that the system is being constantly asked to evaluate something that isn’t actually changing. On the system I was monitoring, the application was asking SQL Server to do this over 60,000 times per hour, yet the answer isn’t going to change. And it is probably occurring just to simplify the creation of the rowset in code (i.e. a one-off effort).

    Creating a static cursor involves executing the query and materializing a copy of all of the data that was retrieved into tempdb. For almost all of the queries in question, tempdb has no need to be involved at all. The fetch command then retrieves the data from tempdb. The de-allocation then removes the copy of the data and the cursor structure from tempdb.
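
    To make the contrast concrete, here's roughly what that pattern amounts to in T-SQL, with table and cursor names invented for illustration, followed by the single statement that would have done the same job:

    -- Roughly what the framework sends for every query (names are invented):
    SET FMTONLY ON;
    SELECT CustomerID, CustomerName FROM dbo.Customers;   -- metadata only
    SET FMTONLY OFF;

    DECLARE customer_cur CURSOR STATIC FOR
        SELECT CustomerID, CustomerName FROM dbo.Customers;
    OPEN customer_cur;            -- materializes the full result set in tempdb
    FETCH NEXT FROM customer_cur; -- rows are then fetched back out of tempdb
    -- ... repeated fetches ...
    CLOSE customer_cur;
    DEALLOCATE customer_cur;

    -- versus what was actually needed:
    SELECT CustomerID, CustomerName FROM dbo.Customers;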

    It’s a tribute to the product that it performs as well as it does, given how it’s often used or (more likely), abused.

  • Is there more to using SQL in Azure than redirecting your connection string?

    At SQL Down Under, we've been working quite a lot over the past year with customers that are moving some of their applications to cloud-based systems, and mostly on Windows Azure. One message that I often hear about using Windows Azure SQL Database (WASD) is that all you need to do is point your application’s connection string to the cloud and all will be good. While there are occasional cases where that is true, that generally isn’t going to give you a great outcome. To really get a great outcome, you generally will need to check out how your application has been designed.

    Here are the most common issues that I see:

    1. Latency cannot be ignored. If you have ever run an application locally that’s connected to a database hosted anywhere else, you’ll realize that network delays can impact you. In Australia, we are eagerly awaiting the availability of local Azure datacenters as they will make a big difference regarding latency. But even when local datacenters are available, your customers might not all be in the same region.

    When you are choosing a datacenter to use, it’s important to check the latency that you are experiencing. The easiest way to do that is to connect to a database in each datacenter using SQL Server Management Studio (SSMS), enable client statistics (from the Query menu, choose Include Client Statistics), then execute a query that will have almost no execution time, such as:

    SELECT 1;

    When the query is complete, on the Client Statistics tab, look at the value for “Wait time on server replies”. Make sure that you test this against other datacenters, not just the ones that seem geographically closest to you. The latency depends upon the distance, the route, and the Internet Service Provider (ISP). Last year I performed some testing in Brisbane (Australia), and the lowest latency was from the Singapore datacenter. However, testing from Auckland (New Zealand) showed the lowest latency when using a US-based datacenter, for the ISP that my customer was connected with.

    2. Round Trips must be avoided. When using a remote datacenter rather than a database on your local network, it’s really important that you achieve as much work as possible in each call to the database engine. For example, last year I was working at a site where I noticed that an application was making 90,000 remote procedure calls between when the application started up and when the first window appeared for the user. It’s a tribute to SQL Server that this was pretty quick (around 30 seconds) on the local network. However, if the connection string for that application was pointed to a database server with 100ms latency, the first screen of the application would take nearly 3 hours to appear!

    Years ago, I used to see the same issue with developers building web applications. Often, they would be developing the applications on the same machine that was hosting both the web server and the database engine. The applications appeared to be fast, but when deployed they could not cope with the user load. The best trick I ever found for that was to make the developers connect to the database via a dial-up 64KB/sec modem. They then realized where the delays were occurring in their code. If the application ran OK on a dial-up link, it then ran wonderfully on a “real” network. Based on that logic, it might be interesting to get your developers to work against a datacenter that has really high latency from the location they are working at. Alternately, consider using a tool that allows you to insert latency into network connections during development.

    The most important lesson, however, is to work out how to reduce the number of round-trips to the database engine. For example, rather than making a call to start a transaction, another call to write an invoice header row, five more calls to write the invoice detail lines, and yet another call to commit the transaction, use techniques like table variables to let you send the entire invoice in a single call, and to avoid having your transactions ever span the network calls (see the sketch after this list). Even better, see if you can send multiple invoices at once where that makes sense.

    3. Code Compatibility needs to be considered. While WASD offers good compatibility with SQL Server, there are some differences. You need to check that your code is compatible. For example, WASD requires a clustered index on every table.

    4. Backup and recovery are still important. The team at Microsoft do a great job of looking after your data. I have visited a lot of companies over the years. One of the concerns that I hear expressed about cloud-based systems is about how well the data will be managed. Ironically, from what I’ve seen and experienced of the Azure processes, on their worst day they’ll do a better job than almost any company that I visit. However, that doesn’t mean that you don’t need to be concerned with availability and recovery. You still need to put processes in place to perform periodic backups, and to have recovery plans in place.

    5. Data and schema migration might need to be done differently. I recently took a small on-premises database that was about 14MB in size, scripted out the schema and the data, and wanted to move it to the cloud. If I had just pointed the script at WASD, it would have taken several hours to run. I found similar issues with using SQL Server Integration Services (SSIS). Once again, latency was the killer. However, exporting the database to a BACPAC in Azure Storage and then importing the database using the Azure portal took a total of about 2 minutes!
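
    As a sketch of the round-trip point in #2 above, a table-valued parameter is one way to send a whole invoice in a single call. All of the object names here are invented for illustration, and the header table is assumed to have an IDENTITY key:

    -- One-off setup: a table type for the invoice lines (names are invented).
    CREATE TYPE dbo.InvoiceLineType AS TABLE
    (
        ProductID int NOT NULL,
        Quantity int NOT NULL,
        UnitPrice decimal(18,2) NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.SaveInvoice
        @CustomerID int,
        @InvoiceDate date,
        @Lines dbo.InvoiceLineType READONLY
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The transaction starts and commits inside one call, so it never
        -- spans network round-trips.
        BEGIN TRAN;
            INSERT dbo.InvoiceHeaders (CustomerID, InvoiceDate)
            VALUES (@CustomerID, @InvoiceDate);

            INSERT dbo.InvoiceLines (InvoiceID, ProductID, Quantity, UnitPrice)
            SELECT SCOPE_IDENTITY(), ProductID, Quantity, UnitPrice
            FROM @Lines;
        COMMIT;
    END;
    GO

    -- The client then makes a single call per invoice:
    DECLARE @Lines dbo.InvoiceLineType;
    INSERT @Lines VALUES (12, 3, 19.95), (14, 1, 250.00);
    EXEC dbo.SaveInvoice @CustomerID = 123, @InvoiceDate = '20130901', @Lines = @Lines;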

    While your existing skills are important when working in these environments and in many cases will still work, there are often quicker more effective new ways to get things done. So keep learning! You might also find some interesting insights in the SQL Down Under podcast (show 51) that I did with Conor Cunningham from the SQL product team.

  • SQL Down Under podcast 60 with SQL Server MVP Adam Machanic

    I posted another podcast over the weekend. Late last week, I managed to get a show recorded with Adam Machanic. Adam's always fascinating. In this show, he's talking about what he's found regarding increasing query performance using parallelism. Late in the show, he gives his thoughts on a number of topics related to the upcoming SQL Server 2014.

    Enjoy!

    The show is online now: http://www.sqldownunder.com/Podcasts

     

  • Adding a Network Loopback Adapter to Windows 8

    I have to say that I continue to be frustrated with finding out how to do things in Windows 8. Here's another one, recorded here so it might help someone else. I've also documented what I tried so that if anyone from the product group ever reads this, they'll understand how I searched for it and might try to make it easier.

    I wanted to add a network loopback adapter, to have a fixed IP address to work with when using an "internal" network with Hyper-V. (The fact that I even need to do this is also painful. I don't know why Hyper-V can't make it as easy to work with host system folders, etc. as VirtualPC, VirtualBox, etc. do, but that's a topic for another day.)

    In the end, what I needed was a known IP address on the same network that my guest OS was using, via the internal network (which allows connectivity from the host OS to/from guest OS's).

    I started by looking in the network adapters areas but there is no "add" functionality there. Realising that this was likely to be another unexpected challenge, I resorted to searching for info on doing this. I found KB article 2777200 entitled "Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012". Aha, I thought that's what I'd need. It describes the symptom as "You are trying to install the Microsoft Loopback Adapter, but are unable to find it." and that certainly sounded like me. There's a certain irony in documenting that something's hard to find instead of making it easier to find. Anyway, you'd hope that in that article, they'd then provide a step by step example of how to do it, but what they supply is this:

    The Microsoft Loopback Adapter was renamed in Windows 8 and Windows Server 2012. The new name is "Microsoft KM-TEST Loopback Adapter". When using the Add Hardware Wizard to manually add a network adapter, choose Manufacturer "Microsoft" and choose network adapter "Microsoft KM-TEST Loopback Adapter".

    The trick with this of course is finding the "Add Hardware Wizard". In Control Panel -> Hardware and Sound, there are options to "Add a device" and for "Device Manager". I tried the "Add a device" wizard (seemed logical to me) but after that wizard tries its best, it just tells you that there isn't any hardware that it thinks it needs to install. It offers a link for when you can't find what you're looking for, but that leads to a generic help page that tells you how to do things like turning on your printer.

    In Device Manager, I checked the options in the program menus, and nothing useful was present. I even tried right-clicking "Network adapters", hoping that would lead to an option to add one, also to no avail.

    So back to the search engine I went, to try to find out where the "Add Hardware Wizard" is. It turns out I was in the right place in Device Manager, but I needed to right-click the computer's name and choose "Add Legacy Hardware". No doubt that hasn't changed location lately, but it's been a while since I needed to add one, so I'd forgotten. Regardless, I'm left wondering why it couldn't be in the menu as well.

    Anyway, for a step by step list, you need to do the following:

    1. From Control Panel, select "Device Manager" under the "Devices and Printers" section of the "Hardware and Sound" tab.

    2. Right-click the name of the computer at the top of the tree, and choose "Add Legacy Hardware".

    3. In the "Welcome to the Add Hardware Wizard" window, click Next.

    4. In the "The wizard can help you install other hardware" window, choose "Install the hardware that I manually select from a list" option and click Next.

    5. In the "The wizard did not find any new hardware on your computer" window, click Next.

    6. In the "From the list below, select the type of hardware you are installing" window, select "Network Adapters" from the list, and click Next.

    7. In the "Select Network Adapter" window, from the Manufacturer list, choose Microsoft, then in the Network Adapter window, choose "Microsoft KM-TEST Loopback Adapter", then click Next.

    8. In the "The wizard is ready to install your hardware" window, click Next.

    9. In the "Completing the Add Hardware Wizard" window, click Finish.

    Then you need to continue to set the IP address, etc.

    10. Back in Control Panel, select the "Network and Internet" tab, click "View Network Status and Tasks".

    11. In the "View your basic network information and set up connections" window, click "Change adapter settings".

    12. Right-click the new adapter that has been added (find it in the list by checking the device name of "Microsoft KM-TEST Loopback Adapter"), and click Properties.

     

     

     

  • Two types of SSL available now for Azure Websites

    If you have followed any of the Azure website forums, you'd realize that having SSL available was one of the highest rated requests. That's hardly surprising given the number of clients that want to use these sites for eCommerce work.

    So it's really good to see that Scott Guthrie and the team have been listening and that SSL is now available. The pricing is shown here: http://www.windowsazure.com/en-us/pricing/details/web-sites/ (to see the details, click on the "SSL" button in the "Learn More" area; an FAQ is also provided).

    SSL has been provided in two forms. SNI (Server Name Indication) is the method that allows multiple sites to share a common IP address yet have different certificates. That's the lower-cost option, currently at $6 USD per month.

    Most browsers nowadays work properly with SNI-based SSL, but if you do have to deal with earlier versions that don't (such as IE6 or browsers on Windows XP), you can get an IP-based SSL binding configured instead. That currently costs $26 USD per month.

    This is really good news.
