
Rob Farley

- Owner/Principal with LobsterPot Solutions (an MS Gold Partner consulting firm), Microsoft Certified Master, Microsoft MVP (SQL Server), APS/PDW trainer, and leader of the SQL User Group in Adelaide, Australia. Rob is a former director of PASS, and provides consulting and training courses around the world on SQL Server and BI topics.

  • Interviews and niches

    T-SQL Tuesday turns this month to the topic of job interviews. Kendra Little (@kendra_little) is our host, and I really hope her round-up post is in the style of an interview. I’m reminded of a T-SQL Tuesday about three years ago on a similar topic, but I’m sure there will be plenty of new information this time around – the world has moved on.

    I’m not sure when my last successful job interview was. I know I went through phases when I guess I was fairly good in job interviews (because I was getting job offers), and phases when I was clearly not very good in job interviews (because I would get interviews but not be able to convert them into job offers), and at some point I reached a point where I stopped doing interviews completely. That’s the phase I’m still in.

    I hit that point when I discovered my niche (which sounds like “neesh” in my language, not “nitch”). For me it was because I realised that I had a knack for databases and started exploring that area more – writing, presenting, helping others – until people noticed and started approaching me. That’s when interviewing stops being a thing. It doesn’t necessarily mean starting your own business, or even changing jobs – it just means that people know who you are and come to you. You no longer have to sit in front of a panel and prove your worth, because they’ve already decided they want you.

    So now people approach me for work through LobsterPot Solutions, and although there is sometimes a bidding phase when we need to compete against other companies, there is no ‘interview’ process in the way that there was when I was an employee.

    What’s your niche? And are you spending time developing that?

    There’s career advice that talks about the overlap between something you enjoy doing, something you’re good at, and something that people are prepared to pay for. The thing is that people won’t pay you for it unless they know that you’re the person they need, rather than someone else. So get yourself out there. Prove yourself. Three years ago I asked “When is your interview?” and said that you need to realise that even before your interview they’ve researched you, considered your reputation, and all of that. Today I want to ask you how your niche is going. Have you identified that thing you enjoy, and that people will pay for? And are you developing your skills in that area?

    Your career is up to you. You can respond to job ads and have interviews. Or you can carve your own space.

    Good luck.

    @rob_farley

  • Learning the hard way – referenced objects or actual objects

    This month’s T-SQL Tuesday is about lessons we’ve learned the hard way. Which, of course, is the way you learn best. It’s not the best way to learn, but if you’ve suffered in your learning somewhat, then you’re probably going to remember it better. Big thanks to Raul Gonzalez (@sqldoubleg) for dragging up these memories.

    Oh, I could list all kinds of times I’ve learned things the hard way, in almost every part of my life. But let’s stick to SQL.

    This was a long while back… 15-20 years ago.

    There was a guy who needed to get his timesheets in. It wasn’t me – I just thought I could help… by making a copy of his timesheets in a separate table, so that he could prepare them there instead of having to use the clunky Access form. I’d gone into the shared Access file that people were using, made a copy of it, and then proceeded to clear out all the data that wasn’t about him, so that he could get his data ready. I figured once he was done, I’d just drop his data in amongst everyone else’s – and that would be okay.

    Except that right after I’d cleared out everyone else’s data, everyone else started to complain that their data wasn’t there.

    Heart-rate increased. I checked that I was using the copy, not the original… I closed it, opened the original, and saw that sure enough, only his data was there. Everyone else’s (including my own) data was gone.

    And then it dawned on me – these tables were linked back to SQL in the back end. I’d copied the reference, but it was still pointing at the same place. All that data I’d deleted was gone from the actual table. I walked over to the boss and apologised. Luckily there was a recent backup, but I was still feeling pretty ordinary.

    These kinds of problems can hurt in all kinds of situations, even if you’re not using Access as a front-end. Other applications, views within SQL, Linked Servers, linked reports – plenty of things contain references rather than the actual thing. When you delete something, or change something, or whatever, you had better be sure that you’re working in the right environment.
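
    To make the trap concrete, here’s a tiny contrived repro of the Access-style mistake using a view – all of the names are hypothetical, and the point is simply that the ‘copy’ is only a reference to the same underlying data:

        -- dbo.TimesheetsCopy looks like a separate copy, but it's only a reference
        -- to the same underlying table.
        CREATE TABLE dbo.Timesheets (EmployeeId int, HoursWorked decimal(5,2));
        INSERT dbo.Timesheets VALUES (1, 7.5), (2, 8.0), (3, 6.0);
        GO
        CREATE VIEW dbo.TimesheetsCopy AS SELECT EmployeeId, HoursWorked FROM dbo.Timesheets;
        GO
        -- "Clearing out everyone else's data" in the so-called copy...
        DELETE dbo.TimesheetsCopy WHERE EmployeeId <> 2;

        -- ...removes it from the real table too.
        SELECT * FROM dbo.Timesheets;   -- only EmployeeId 2 remains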

    I don’t even know the best way to have confidence that you’re safe on this. You can help by colouring Prod tabs differently in SSMS with SSMS Tools Pack, but it’s not going to guarantee that you’re okay. You need to be a little paranoid about it. Learn to check and double-check. Because ultimately, data is too valuable to make that kind of mistake.

    @rob_farley

  • DevOps and your database

    I’m a consultant. That means I have to deal with whatever I come across at customer sites. I can recommend change, but when I’m called in to fix something, I generally don’t get to insist on it. I just have to get something fixed. That means dealing with developers (if they exist) and with DBAs, and making sure that anything that I try to fix somehow works for both sides. That means I often have to deal with the realm of DevOps, whether or not the customer knows it.

    DevOps is the idea of having a development story which improves operations.

    Traditionally, developers would develop code without thinking much about operations. They’d get some new code ready, deploy it somehow, and hope it didn’t break much. And the Operations team would brace themselves for a ton of pain, and start pushing back on change, and be seen as a “BOFH”, and everyone would be happy. I still see these kinds of places, although for the most part, people try to get along.

    With DevOps, the idea is that developers work in a way that means that things don’t break.

    I know, right.

    If you’re doing the DevOps things at your organisation, you’re saying “Yup, that’s normal.” If you’re not, you’re probably saying “Ha – like that’s ever going to happen.”

    But let me assure you – it can. For years now, developers have been doing Continuous Integration, Test-Driven Development, Automated Builds, and more. I remember seeing these things demonstrated at TechEd conferences in the middle of the last decade.

    But somehow, these things are still considered ‘new’ in the database world. Database developers look at TDD and say “It’s okay for a stateless environment, but my database changes state with every insert, update, or delete. By its very definition, it’s stateful.”

    The idea that a stored procedure with particular parameters should have a specific impact on a table with particular characteristics (values and statistics – I would assume structure and indexes would be a given) isn’t unreasonable. And it’s this that can lead to the understanding that whilst a database is far from stateless, state can be a controllable thing. Various states can become part of various tests: does the result still apply when there are edge-case rows in the table? Is the execution plan suitable when there are particular statistics in play? Is the amount of blocking reasonable when the number of transactions is at an extreme level?
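
    As a sketch of what one such state-controlled test could look like without any particular framework (the procedure and table names here are hypothetical), the pattern is just arrange-act-assert inside a transaction:

        BEGIN TRAN;   -- roll back at the end, so the test leaves no state behind

        -- Arrange: a known edge-case state
        DELETE dbo.Orders;
        INSERT dbo.Orders (OrderId, CustomerId, Amount) VALUES (1, 42, 0.00);  -- zero-value edge case

        -- Act
        EXEC dbo.AddOrder @CustomerId = 42, @Amount = 99.95;

        -- Assert
        IF (SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42) <> 2
            THROW 50001, 'dbo.AddOrder did not insert exactly one row for customer 42', 1;

        ROLLBACK TRAN;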

    Test-driven development is a lot harder in the database-development world than in the web-development world. But it’s certainly not unreasonable, and to have confidence that changes won’t be breaking changes, it’s certainly worthwhile.

    The investment to implement a full test suite for a database can be significant, depending on how thorough it needs to be. But it can be an incremental thing. Elements such as source control ought to be put in place first, but there is little reason why database development shouldn’t adhere to DevOps principles.

    @rob_farley

    (Thanks to Grant Fritchey (@gfritchey) - for hosting this month’s T-SQL Tuesday event)

  • “Stored procedures don’t need source control…”

    Hearing this is one of those things that really bugs me.

    And it’s not actually about stored procedures, it’s about the mindset that sits there.

    I hear this sentiment in environments where there are multiple developers. Where they’re using source control for all their application code. Because, you know, they want to make sure they have a history of changes, and they want to make sure two developers don’t change the same piece of code, maybe they even want to automate builds, all those good things.

    But checking out code and needing it to pass all those tests is a pain. So if there’s some logic that can be put in a stored procedure, then that logic can be maintained outside the annoying rigmarole of source control. I guess this is appealing because developers are supposed to be creative types, and should fight against the repression, fight against ‘the man’, fight against [source] control.

    When I come across this mindset, I worry a lot.

    I worry that code within stored procedures could be lost if multiple people decide to work on something at the same time.

    I worry that code within stored procedures won’t be part of a test regime, and could potentially be failing to consider edge cases.

    I worry that the history of changes won’t exist and people won’t be able to roll back to a good version.

    I worry that people are considering that this is a way around source control, as if source control is a bad thing that should be circumvented.

    I just worry.

    And this is just talking about code in stored procedures. Let alone database design, constraints, indexes, rows of static data (such as lookup codes), and so on. All of which contribute to a properly working application, but which many developers don’t consider worthy of source control.
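
    For what it’s worth, a database that lives in source control tends to look like a set of idempotent scripts that an automated build can re-run safely. A minimal sketch – the object names are hypothetical, and CREATE OR ALTER needs SQL Server 2016 SP1 or later:

        -- The procedure definition, versioned alongside the application code
        CREATE OR ALTER PROCEDURE dbo.GetOpenOrders
            @CustomerId int
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT OrderId, OrderDate, Amount
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
              AND StatusCode = 'OPEN';
        END;
        GO

        -- Static lookup data, deployed idempotently from the same repository
        MERGE dbo.OrderStatus AS tgt
        USING (VALUES ('OPEN', 'Open'), ('CLOSED', 'Closed')) AS src (StatusCode, Description)
            ON tgt.StatusCode = src.StatusCode
        WHEN NOT MATCHED THEN
            INSERT (StatusCode, Description) VALUES (src.StatusCode, src.Description)
        WHEN MATCHED AND tgt.Description <> src.Description THEN
            UPDATE SET Description = src.Description;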

    Luckily, there are good options available to change this behaviour. Red Gate’s SQL Source Control is tremendously useful, of course, and the inclusion of many of Red Gate’s DevOps tools within VS2017 would suggest that Microsoft wants developers to take this more seriously than ever.

    For more on this kind of stuff, go read the other posts about this month’s T-SQL Tuesday!


    @rob_farley

  • Time waits for no one

    And technology changes as quickly as the numbers on a clock. A digital clock, of course – the numbers never change on an analogue one.

    I think it’s nice to have this month’s T-SQL Tuesday (hosted by Koen Verbeeck (@ko_ver)) on this topic, as I delivered a keynote at the Difinity conference a couple of months ago on the same thing.

    In the keynote, I talked about the fear people have of becoming obsolete as technology changes. Technology is introduced that trivialises their particular piece of skill – the database that removes the need for a filing cabinet, the expert system that diagnoses sick people, and the platform as a service that is managed by someone other than the company DBA. As someone who lives in Adelaide, where a major car factory has closed down, costing thousands of jobs, this topic is very much at the forefront of a lot of people’s thoughts. The car industry has been full of robots for a very long time – jobs have been disappearing to technology for ages. But now we are seeing the same happen in other industries, such as IT.

    Does Automatic Tuning in Azure mean the end of query tuners? Does Self-Service BI in Excel and Power BI mean the end of BI practitioners? Does PaaS mean the end of DBAs?

    I think yes. And no.

    Yes, because there are tasks that will disappear. People who only do one very narrow thing probably have reason to fear. But they’ve had reason to fear for a lot longer than Azure has been around. If all you do is check that backups have worked, you should have expected to be replaced by a script a very long time ago. The same has applied in many industries, from production lines in factories to ploughing lines in fields. If your contribution is narrow, you are at risk.

    But no, because the opportunity here is to use the tools to become a different kind of expert. The person who drove animals to plough fields learned to drive tractors, but could use their skills in ploughing to offer a better service. The person who painted cars in a factory makes an excellent candidate for retouching dent repair, or custom paint jobs. Their expertise sets them apart from those whose careers didn’t have the same background.

    For a BI practitioner today, self-service BI doesn’t present a risk. It’s an opportunity. The opportunity is to lead businesses in their BI strategies. In training and mentoring people to apply BI to their businesses. To help create visualisations that convey the desired meaning in a more effective way than the business people realise. This then turns the BI practitioner into a consultant with industry knowledge. Or a data scientist who can transform data to bring out messages that the business users couldn’t see.

    As the leader of a company of database experts, these are questions I’ve had to consider. I don’t want my employees or me to become obsolete. We don’t simply offer health checks, BI projects, Azure migrations, troubleshooting, et cetera. We lead business through those things. We mentor and train. We consult. Of course, we deliver, but we are not simply technicians. We are consultants.

    @rob_farley


  • SQL WTF for T-SQL Tuesday #88

    The topic for this month’s T-SQL Tuesday is:

    “Be inspired by the IT horror stories from http://thedailywtf.com, and tell your own daily WTF story. The truly original way developers generated SQL in project X. Or what the grumpy "DBA" imposed on people in project Y. Or how the architect did truly weird "database design" on project Z”

    And I’m torn.

    I haven’t missed a T-SQL Tuesday yet. Some months (okay, most months) it’s the only blog post I write. I know I should write more posts, but I simply get distracted by other things. Other things like working for clients, or spending time with the family, or sometimes nothing (you know – those occasions when you find yourself doing almost nothing and time just slips away, lost to some newspaper article or mindless game that looked good in the iTunes store). So I don’t want to miss one.

    But I find the topic painful to write about. Not because of the memories of some of the nasty things I’ve seen at customer sites – that’s a major part of why we get called in. But because I wouldn’t ever want to be a customer who had a bad story that got told. When I see you tweeting things like “I’m dying in scalar-function hell today”, I always wonder who knows which customer you’re visiting today, or, if you’re not a consultant, whether your employer knows what you’re tweeting. Is your boss/customer okay with that tweet’s announcement that their stuff is bad? What if you tweet “Wow – turns out our website is susceptible to SQL Injection attacks!”? Or what if you write “Oh geez, this customer hasn’t had a successful backup in months…”? At what point does that become a problem for them? Is it when customers leave? Is it when they get hacked? Is it when their stock price drops? (I doubt the tweet of a visiting consultant would cause a stock price to fall, but still…)

    So I’m quite reluctant to write this blog post at all. I had to think for some time before I thought of a scenario that I was happy to talk about.

    This place was never a customer, and this happened a long time ago. Plus, it’s not a particularly rare situation – I just hadn’t seen it become this bad. So I’m happy enough to talk about this...

    There was some code that was taking a long time to execute. It was populating a table with a list of IDs of interest, along with a guid that had been generated for this particular run. The main queries ran, doing whatever transforms they needed to do, inserting and updating some other tables, and then the IDs of interest were deleted from that table that was populated in the first part. It all seems relatively innocuous.

    But execution was getting worse over time. It had gone from acceptable, to less than ideal, to painful. And the guy who was asking me the question was a little stumped. He knew there was a Scan on the list of IDs – he was okay with that because it was typically only a handful of rows. Once it had been a temporary table, but someone had switched it to be a regular table – I never found out why. The plans had looked the same, he told me, from when it was a temporary table even to now. But the temporary table solution hadn’t seen this nasty degradation. He was hoping to fix it without making a change to the procedures though, because that would have meant source control changes. I’m hoping that the solution I recommended required a source control change too, but you never know.

    What I found was that the list of IDs was being stored in a table without a clustered index. A heap. Now – I’m not opposed to heaps at all. Heaps are often very good, and shouldn’t be derided. But you need to understand something about heaps – which is that they’re not suited to tables that see a large number of deletes. Every time you insert a row into a heap, it goes into the first available slot on the last page of the heap. If there aren’t any slots available, it creates a new page, and the story continues. It doesn’t keep track of what’s happened earlier. Heaps can be excellent for getting data in – and Lookups are very quick, because every row is addressed by the actual Row ID rather than some key values which then require a Seek operation to find them (that said, it’s often cheap to avoid Lookups by adding extra columns to the Include list of a non-clustered index). But because heaps don’t think about what kind of state the earlier pages might be in, you can end up with heaps full of pages that are completely empty – a chain of pointers from page to page, with header information, but no actual rows therein. If you’re deleting rows from a heap, this is what you’ll get.

    This guy’s heap had only a few rows in it. 8 in fact, when I looked – although I think a few moments later those 8 had disappeared, and were replaced by 13 others.

    But the table was more than 400MB in size. For 8 small rows.

    At 8kB per page, that’s over 50,000 pages. So every time the table was scanned, it was having to look through 50,000 pages.
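
    If you want to check whether a heap of your own is in this state, the row-count versus page-count comparison is easy to get from the DMVs – a quick sketch (the table name is hypothetical):

        SELECT OBJECT_NAME(ps.object_id) AS table_name,
               ps.row_count,
               ps.used_page_count,
               ps.used_page_count * 8 / 1024 AS approx_used_mb
        FROM sys.dm_db_partition_stats AS ps
        WHERE ps.object_id = OBJECT_ID('dbo.InterestingIds')   -- hypothetical table name
          AND ps.index_id IN (0, 1);                           -- 0 = heap, 1 = clustered index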

    When it had been a temporary table, a new table was created every time. The rows would typically have fitted on one or two pages, and then at the end, the temporary table would’ve disappeared. But I think multiple processes were needing to look at the list, so making sure it wasn’t bound to a single session might’ve been useful. I wasn’t going to judge, only to offer a solution. My solution was to put a clustered index in place. I could’ve suggested they rebuild the heap regularly, which would’ve been a quick process run as often as they liked – but a clustered index was going to suit them better. Compared to a single-page heap, things wouldn’t’ve been any faster, but compared to a large empty heap, Selects and Deletes would’ve been much faster. Inserts are what heaps do well – but that wasn’t a large part of the process here.

    You see, a clustered index maintains a b-tree of data. The very structure of an index needs to be able to know what range of rows are on each page. So if all the rows on a page are removed, this is reflected within the index, and the page can be removed. This is something that is done by the Ghost Cleanup process, which takes care of actually deleting rows within indexes to reduce the effort within the transaction itself, but it does still happen. Heaps don’t get cleaned up in the same way, and can keep growing until they get rebuilt.

    Sadly, this is the kind of problem that people can face all the time – the system worked well at first, testing didn’t show any performance problems, the scale of the system hasn’t changed, but over time it just starts getting slower. Defragmenting heaps is definitely worth doing, but better is to find those heaps which fragment quickly, and turn them into clustered indexes.
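
    In case it helps, the two remedies mentioned above look something like this – the table and column names are hypothetical, and ALTER TABLE … REBUILD needs SQL Server 2008 or later:

        -- Option 1: rebuild the heap regularly to release the empty pages
        ALTER TABLE dbo.InterestingIds REBUILD;

        -- Option 2 (the one recommended in the story above): give the table a clustered
        -- index, so emptied pages get deallocated as part of normal index maintenance
        CREATE CLUSTERED INDEX cix_InterestingIds ON dbo.InterestingIds (RunGuid, Id);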


    …but while I hope you never come across heaps that have grown unnecessarily, my biggest hope is that you be very careful about publicly discussing situations you’ve seen at customers.

    @rob_farley

  • This month’s T-SQL Tuesday post

    …is not here. It’s over at https://sqlperformance.com/2017/01/sql-performance/estimated-number-of-rows-to-be-read

    I write about the new EstimatedRowsRead property, and in particular, about how Microsoft responded so well to the Connect Item I created, requesting the feature.

    @rob_farley

  • Backups – are you missing the point?

    It’s a common question “Do you have a backup?” But it’s the wrong question. Very relevant for this month’s T-SQL Tuesday, hosted by Ken Fisher (@sqlstudent144), on the topic of backups.

    I think the question should be “Can you recover if needed?”

    We all know that a backup is only as good as your ability to restore from it – that you must test your backups to prove their worth. But there’s more to it than being able to restore a backup. Do you know what to do in case of a disaster? Can you restore what you want to restore? Does that restore get your applications back up? Does your reporting become available again? Do you have everything you need? Are there dependencies on other databases?

    I often find that organisations don’t quite have the Disaster Recovery story they need, and this is mostly down to not having practised specific scenarios.

    Does your disaster testing include getting applications to point at the new server? Does anything else break while you do that?

    Does your disaster testing include a scenario where a rogue process changed values, but there is newer data that you want to keep?

    Does your disaster testing include losing an extract from a source system which does incremental extracts?

    Does your disaster testing include a situation where a well-meaning person has taken an extra backup, potentially spoiling differential or log backups?

    Does your disaster testing include random scenarios where your team needs to figure out what’s going on and what needs to happen to get everything back?

    The usefulness of standard SQL backups for some of these situations isn’t even clear. Many people take regular VM backups, but is that sufficient? Can you get the tail of the log if your VM disappears? Does a replicated copy of your database provide enough of a safety net here, or in general?
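
    As one example of practising a scenario rather than just collecting backups, here’s a rough sketch of the “rogue process changed data mid-afternoon” drill – the database names, paths, and times are all hypothetical, and your own sequence of full/differential/log restores will differ:

        -- 1. Capture the tail of the log before touching anything
        BACKUP LOG Sales TO DISK = N'\\backups\Sales_tail.trn' WITH NO_TRUNCATE;

        -- 2. Restore side-by-side to a point just before the damage
        RESTORE DATABASE Sales_Recovered FROM DISK = N'\\backups\Sales_full.bak'
            WITH NORECOVERY,
                 MOVE 'Sales'     TO 'D:\Data\Sales_Recovered.mdf',
                 MOVE 'Sales_log' TO 'D:\Data\Sales_Recovered.ldf';
        RESTORE LOG Sales_Recovered FROM DISK = N'\\backups\Sales_tail.trn'
            WITH STOPAT = '2017-06-13T14:14:00', RECOVERY;

        -- And for the "well-meaning extra backup" question above: ad-hoc backups
        -- taken WITH COPY_ONLY don't disturb the differential base or the log chain.
        BACKUP DATABASE Sales TO DISK = N'\\backups\Sales_adhoc.bak' WITH COPY_ONLY;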

    The key issue is not whether you have a backup. It’s not even whether you have a restorable backup. It’s whether you have what you need to survive if things go south – whichever southbound route you’ve been taken down.

    @rob_farley


  • How I prepare for a presentation

    Some people say I talk a lot – but I guess it depends on the context.

    Certainly, for many years, I’ve been fairly comfortable about standing up in front of people and explaining things. Whether it’s teaching a course, leading a workshop, presenting at a conference, or preaching at a church, it all has that same “I’m talking, and people are looking at me” feeling. I totally understand why people get nervous about it, and I still have a certain amount of terror that I suffer from before getting up to present. It doesn’t stop me doing it – I would happily present all the time, despite the fear factor.

    It’s almost a cliché, but the biggest advice I have for new speakers is to realise that the people in the room do actually want to hear what you have to say. They don’t want you to fail.

    …but there’s more to it than that.

    I can present on just about any topic, so long as I have time to prepare. That preparation time is NOT in creating an effective talk (although that’s part of it) – it’s in getting to know the subject matter well.

    Suppose I’m giving a talk about Columnstore indexes, like I just did at the PASS Summit. By all means, I want to craft a story for my presentation, and be able to work out which things I want to communicate through that story. If slides will work, then I’ll need to create them. If demos will work, then I’ll need to plan them too. But most of all, I want to get myself deep into Columnstore. I want to read everything there is on the subject. I want to create them, alter them, explore the DMVs about them, find ways to break them, and generally immerse myself in them. That way, I can speak confidently on the topic, knowing that I’m quite probably the most qualified person in the room to be up the front. I want to be explaining concepts that I know intimately.

    When people ask questions, there’s no guarantee that I’ll know the answer. At the end of my talk at the PASS Summit, someone asked me if I’d tried using columnstore indexes in a particular way, and I had to say no. She went on to tell me what she’d found, and it was interesting and piqued my curiosity for an area I hadn’t explored. Would I have been thrown if she’d asked me during the session, in front of everyone else? No – not at all. Because I felt comfortable with the depth of my knowledge.

    This applies just the same if I’m preaching in a church. If I’m preaching on a section of Galatians, I want to know that section backwards. I want to know the rest of the chapter, the rest of the book, what the rest of the Bible says on the matter, how it has applied in my own life, and what other people say on it too. I want to have a thorough picture of what God is saying to me, and to the rest of the church, through that passage.

    When I get stuck in my words, and stumble in some way, I need to know the topic well. I will have a bunch of sound bites that I’ve rehearsed, and expect to explain things using particular phrases. But those are the things that can disappear from my head when the nerves strike. My safety net is the deep knowledge of the subject, so that I can find a different way of explaining it.

    I don’t like giving word-perfect speeches. The idea of talking from a script that I need to stick to exactly doesn’t work for me – I get too nervous and wouldn’t be able to pull it off (although one day I will give stand-up comedy a try, which means having well-crafted jokes that need to be word-perfect to work). Knowing the material is way better than knowing the words, and for me is way less stressful.

    My advice to anyone is to get into public speaking. It’s a great way of stretching yourself. But do get into your topic as deeply as you can. If you’ve looked at something from a variety of angles, you will be able to explain it to anyone.

    Big thanks to Andy Yun (@sqlbek) for hosting this month’s T-SQL Tuesday.


    @rob_farley

  • PASS Summit 2016 – Keynote 2

    Thursday! Kilt day.

    We start with Grant Fritchey (PASS’ VP of Finance, in a kilt), talking about the various metrics of PASS, which show that the community is growing both numerically and geographically, reaching 87% of countries now. It’s good to know that things are going well. This is all public information, and I’m not going to go into the details here.

    He also announces that PASS will have a BA Day – Jan 11th in Chicago. More information on this will follow.

    Grant hands over to Denise McInerney (PASS’ VP of Marketing). She announces new branding for the PASS organisation – logo and website (website launching early next year) – and the dates for next year’s summit.

    David DeWitt, Adjunct Professor at MIT (previously of Microsoft Research) comes up. He’s going to talk about data warehouse technologies, including cloud and scaling. Amazon Redshift, Snowflake, and SQL DW.

    A great session, which will have helped a lot of people appreciate SQL DW more than ever.

    @rob_farley

  • PASS Summit 2016 – Blogging again – Keynote 1

    So I’m back at the PASS Summit, and the keynote’s on! We’re all getting ready for a bunch of announcements about what’s coming in the world of the Microsoft Data Platform.

    First up – Adam Jorgensen. Some useful stats about PASS, and this year’s PASSion Award winner, Mala Mahadevan (@sqlmal)

    There are tweets going on using #sqlpass and #sqlsummit – you can get a lot of information from there.

    Joseph Sirosh – Corporate Vice President for the Data Group, Microsoft – is on stage now. He’s talking about the 400M children in India (that’s more than all the people in the United States, Mexico, and Canada combined), and the opportunities because of student drop-out. Andhra Pradesh is predicting student drop-out using new ACID – Algorithms, Cloud, IoT, Data. I say “new” because ACID is an acronym database professionals know well.

    He’s moving on to talk about three patterns: Intelligence DB, Intelligent Lake, Deep Intelligence.

    Intelligence DB – taking the intelligence out of the application and moving it into the database. Instead of the application controlling the ‘smarts’, putting them into the database provides models, security, and a number of other useful benefits, and lets any application sit on top of it. It can use SQL Server, particularly with SQL Server R Services, and support applications whether in the cloud, on-prem, or hybrid.
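
    (For a rough idea of what “intelligence inside the database” looks like in practice, here’s a minimal sketch using SQL Server R Services – the table and columns are hypothetical, and the instance needs ‘external scripts enabled’ turned on.)

        -- Score data where it lives, instead of shipping rows out to an application
        EXEC sp_execute_external_script
            @language = N'R',
            @script = N'
                # InputDataSet arrives as a data frame; fit and score inside SQL Server
                model <- lm(Amount ~ Quantity, data = InputDataSet)
                OutputDataSet <- data.frame(PredictedAmount = predict(model, InputDataSet))',
            @input_data_1 = N'SELECT Quantity, Amount FROM dbo.Sales'   -- hypothetical table
        WITH RESULT SETS ((PredictedAmount float));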

    Rohan Kumar – General Manager of Database Systems – is up now. Fully Managed HTAP in Azure SQL DB hits General Availability on Nov 15th. HTAP is Hybrid Transactional / Analytical Processing, which fits really nicely with my session on Friday afternoon. He’s doing a demo showing the predictions per second (using SQL Server R Services), and how it easily reaches 1,000,000 per second. You can see more of this at this post, which is really neat.

    Justin Silver, a Data Scientist from PROS comes onto stage to show how a customer of theirs handles 100 million price requests every day, responding to each one in under 200 milliseconds. Again we hear about SQL Server R Services, which pushes home the impact of this feature in SQL 2016. Justin explains that using R inside SQL Server 2016, they can achieve 100x better performance. It’s very cool stuff.

    Rohan’s back, showing a Polybase demo against MongoDB – I’m sitting next to Kendra Little (@kendra_little), who is pretty sure it’s the first MongoDB demo at PASS – and then moving on to show SQL on Linux. He not only installed SQL on Linux, but then restored a database from a backup that was taken on a Windows box, connected to it from SSMS, and ran queries. Good stuff.

    Back to Joseph, who introduces Kalle Hiitola from Next Games – a Finnish gaming company – who created an iOS game that runs on Azure Media Services and DocumentDB, using BizSpark. 15 million installs, with 120GB of new data every day. 11,500 DocumentDB requests per second, and 43 million “Walkers” (zombies in their ‘Walking Dead’ game) eliminated every day. 1.9 million matches (I don’t think it’s about zombie dating though) per day. Nice numbers.

    Now onto Intelligent Lake. Larger volumes of data than ever before call for a different kind of strategy.

    Scott Smith – VP of Product Development from Integral Analytics – comes in to show how Azure SQL Data Warehouse has allowed them to scale like never before in the electric-energy industry. He’s got some great visuals.

    Julie Koesmarno on stage now. Can’t help but love Julie – she’s come a long way in the short time since leaving LobsterPot Solutions. She’s done Sentiment Analysis on War & Peace. It’s good stuff, and Julie’s demo is very popular.

    Deep Intelligence is using Neural Networks to recognise components in images. eSmart Systems have a drone-based system for looking for faults in power lines. It’s got a familiar feel to it, based on discussions we’ve been having with some customers (but not with power lines).

    Using R Services with ML algorithms, there are some great options available…

    Jen Stirrup on now. She’s talking about Pokemon Go and Azure ML. I don’t understand the Pokemon stuff, but the Machine Learning stuff makes a lot of sense. Why not use ML to find out where to find Pokemon?

    There’s an amazing video about using Cognitive Services to help a blind man interpret his surroundings. For me, this is the best demo of the morning, because it’s where this stuff can be really useful.

    SQL is changing the world.

    @rob_farley

  • Passwords – a secret you have no right to share

    I feel like this topic just keeps going around and around. Every time I’m in a room where someone needs to log into a computer that’s not theirs, there seems to be a thing of “Oh, I know their password…”, which makes me cringe.

    I’ve written about this before, and even for a previous T-SQL Tuesday, about two years ago, but there’s something that I want to stress, which is potentially a different slant on the problem.

    A password is not just YOUR secret. It’s also a secret belonging to the bank / website / program that the password is for.

    Let me transport you in your mind, back to primary school. You had a club. You had a password that meant that you knew who was in the club and who wasn’t (something I’ve seen in movies – I don’t remember actually being in one). At some point you had a single password that was used by everyone, but then you found that other people knew the password and could gain entry, because you only needed someone to be untrusted for the password to get out.

    You felt upset, because that password wasn’t theirs to share. It was your property, as the club owner. Someone got access to your club when you hadn’t actually granted them access.

    Now suppose I’m an online retailer (I’m not, but there are systems that I administer). You’ve got a password to use my site, and I do all the right things to protect that password – one-way hashing before it even reaches the database, never even being able to see it let alone emailing it, and a ton of different mechanisms that make sure that your stuff is safe. You’ve decided to use a password which you’ve generated as a ‘strong password’, and that’s great. Maybe you can remember it, which doesn’t necessarily make it insecure. I don’t even care if you’ve written it down somewhere, so long as you’re treating it as a secret.
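
    (As an aside, the “one-way hashing” idea can be sketched in a few lines. This is only an illustration with hypothetical objects – real systems should do this in the application tier with a deliberately slow KDF such as bcrypt or PBKDF2, not a bare hash in T-SQL.)

        DECLARE @Password nvarchar(128) = N'correct horse battery staple';   -- illustrative only
        DECLARE @Salt varbinary(16) = CRYPT_GEN_RANDOM(16);                  -- per-user random salt
        DECLARE @Hash varbinary(32) =
            HASHBYTES('SHA2_256', @Salt + CAST(@Password AS varbinary(256)));

        -- Only the salt and the hash are ever persisted; the password itself never is.
        INSERT dbo.UserLogin (UserName, Salt, PasswordHash)   -- hypothetical table
        VALUES (N'rob', @Salt, @Hash);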

    Because please understand, it’s MY secret too.

    If the password you use gets out, because maybe someone gets into your LastPass account, or maybe someone steals the PostIt you’ve written it on, or maybe you use that same password at a different site which then gets hacked…

    …then that other person has access to MY site as you.

    If that other person buys stuff from me as you, I might need to refund you for the money / credit / points you didn’t mean to spend. And if I’ve already sent the goods out, then that’s going to hurt me.

    If that other person does malicious things on my site because they’re accessing it as a privileged user, then that’s going to hurt me.

    Someone knowing the secret that I’ve worked hard to keep secret… that’s going to hurt me.

    I have no control over the password that you choose to use. But please understand that it’s not just YOUR password. Use something that is a secret between you and me. I will never know your password, but I want you to make sure that no one else ever does either. Don’t reuse passwords.

    @rob_farley

    Big thanks to Andy Mallon (@amtwo) for hosting this month’s T-SQL Tuesday.


  • You’ve been doing cloud for years...

    This month’s T-SQL Tuesday is hosted by Jeffrey Verheul (@devjef) and is on the topic of Cloud.

    I seem to spend quite a bit of my time these days helping people realise the benefit of the Azure platform, whether it be Machine Learning for doing some predictions around various things (best course of action, or expected value, for example), or keeping a replicated copy of data somewhere outside the organisation’s network, or even a full-blown Internet of Things piece with Stream Analytics pulling messages off a Service Bus Event Hub. But primarily, the thing that I have to combat most of all is this:

    Do I really want that stuff to be ‘out there’?

    People aren’t used to having their data, their company information, their processing, going on somewhere outside the building where they physically are.

    Now, there are plenty of times when organisations’ server rooms aren’t actually providing as much benefit as they expect. Conversations with people quickly help point out that their web site isn’t hosted locally (I remember in the late ‘90s a company I was at making the decision to start hosting their web site at an actual hosting provider rather than having every web request come in through the same modem as all their personal web browsing). Email servers are often the next to go. But for anyone working at home, the server room may as well be ‘the cloud’ anyway, because their data is going off to some ‘unknown’ place, with a decent amount of cabling between where they are and where their data is hosted.

    Everyone’s photos are stored in the ‘cloud’ already, whether it be in Instagram’s repository or in something which is more obviously ‘the cloud’. Messages between people no longer just live on people’s phones, but on the servers of Facebook and Twitter. Their worries and concerns are no longer just between them and their psychiatrist, but stored in Google’s search engine web logs.

    The ‘cloud’ is part of today’s world. You’re further into it than you may appreciate. So don’t be afraid, but try it out. Play with Azure ML, or with other areas of Cortana Intelligence. Put some things together to help yourself in your day-to-day activity. You could be pleasantly surprised about what you can do.

    @rob_farley

  • The Impact of Compression Delay in Real-time Operational Analytics

    I have a session coming up at both the PASS Summit in October and the 24HOP Summit Preview event in September, on Operational Analytics. Actually, my session is covering the benefits of combining both In-Memory and R into the Operational Analytics story, to be able to see even greater benefits…

    …but I thought I’d do some extra reading on Real-Time Operational Analytics, which also suits this month’s T-SQL Tuesday topic, hosted by Jason Brimhall (@sqlrnnr). He’s challenged us all to sharpen our skills in some area, and write about the experience.

    Now, you can’t look at Real-Time Operational Analytics without exploring Sunil Agarwal’s (@S_u_n_e_e_l) excellent blog series. He covers a few things, but the one that I wanted to write about here is Compression Delay.

    I’ve played with Compression Delay a little, but I probably haven’t appreciated the significance of it all that much. Sure, I get how it works, but I have always figured that the benefits associated with Compression Delay would be mostly realised by having Columnstore in general. So I was curious to read Sunil’s post where he looks at the performance numbers associated with Compression Delay. You can read this yourself if you like – it’s here – but I’m going to summarise it, and add some thoughts of my own.

    The thing with Operational Analytics is that the analytical data, reporting data, warehouse-style data, is essentially the same data as the transactional data. Now, it doesn’t look quite the same, because it’s not been turned into a star-schema, or have slowly changing dimension considerations, but for the purposes of seeing what’s going on, it’s data that’s capable of handling aggregations over large amounts of data. It’s columnstore.

    Now, columnstore data isn’t particularly suited to transactional data. Finding an individual row within columnstore data can be tricky, and it’s much more suited to rowstore. So when data is being manipulated quite a lot, it’s not necessarily that good to be using columnstore. Rowstore is simply better for this.

    But with SQL 2016, we get updateable non-clustered columnstore indexes. Data which is a copy of the underlying table (non-clustered data is a copy – clustered data or heap data is the underlying table). This alone presents a useful opportunity, as we can be maintaining a columnstore copy of the data for analytics, while handling individual row updates in the rowstore.

    Except that it’s a bit more complicated than that. Because every change to the underlying rowstore is going to need the same change made in columnstore. We’re not actually benefiting much.

    Enter the filtered index. With a flag to indicate that frequent changes for that row have finished, we can choose to have the columnstore copy of the data only on those rows which are now relatively static. Excellent. Plus, the Query Optimizer does some clever things to help with queries in this situation.
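
    As a sketch of that filtered approach (hypothetical table, columns, and flag; SQL Server 2016 or later):

        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders_Analytics
        ON dbo.Orders (OrderDate, CustomerId, Quantity, Amount)
        WHERE OrderStatus = 'SHIPPED';   -- only the relatively static rows land in columnstore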

    But many systems don’t have a flag like that. What then?

    Well, one nice option is to consider using Compression Delay.

    Compression Delay tells our columnstore index to delay compressing the data for some period of time. That is, to not turn it into proper columnstore data for a while. Remember I said that columnstore data doesn’t enjoy being updated much – this is to prevent that pain, by leaving it as rowstore data for a while.
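
    The index definition for that looks something like this – the same hypothetical table as above, and an alternative to the filtered index rather than a companion to it:

        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders_Analytics
        ON dbo.Orders (OrderDate, CustomerId, Quantity, Amount)
        WITH (COMPRESSION_DELAY = 60 MINUTES);   -- keep new rows in the rowstore delta store for an hour

        -- The delay can also be changed on an existing index:
        ALTER INDEX ncci_Orders_Analytics ON dbo.Orders
        SET (COMPRESSION_DELAY = 120 MINUTES);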

    I haven’t really explored this much myself yet. I have a few simulations to run, to see what kind of performance gains can be had from this. But Sunil’s experiments saw a 15% improvement on the OLTP workload by choosing an appropriate Compression Delay, and that sounds pretty good to me.

    I feel like there’s so much more to be explored with these new technologies. Having that flag to indicate when a row can be pulled into a filtered columnstore index seems really useful. Compression Delay seems great too, and in many ways feels like a nicer solution than ending up with a filtered index that might not catch everything. Compression Delay to me feels like having a filtered columnstore index that uses getdate() (which I think would be a lovely feature), although it’s not quite the same.

    So I’m going to keep playing with this, and will be teaching you plenty of information at both the upcoming events. I could present a lot of it now, but I would prefer to deepen my understanding more before I have to stand in front of you all. For me, the best preparation for presentations is to try to know every tiny detail about the technology – but that’s a path I’m still on, as I continue to sharpen.

    @rob_farley

  • Finally, SSMS will talk to Azure SQL DW

    Don’t get me started on how I keep seeing people jump into Azure SQL DW without thinking about the parallel paradigm. SQL DW is to PDW the way that Azure SQL DB is to SQL Server. If you were happy using SQL Server for your data warehouse, then SQL DB may be just fine. Certainly you should get your head around the MPP (Massively Parallel Processing) concepts before you try implementing something in SQL DW. Otherwise you’re simply not giving it a fair chance, and may find that MPP is a hindrance rather than a help. Mind you, if you have worked out that MPP is for you, then SQL DW is definitely a brilliant piece of software.
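
    To illustrate the kind of MPP thinking involved: in SQL DW you decide, per table, how its rows are spread across the distributions, and picking a poor distribution key is one of the easiest ways to make MPP a hindrance. A sketch with hypothetical names:

        CREATE TABLE dbo.FactSales
        (
            SaleDate    date  NOT NULL,
            CustomerKey int   NOT NULL,
            Amount      money NOT NULL
        )
        WITH
        (
            DISTRIBUTION = HASH(CustomerKey),   -- co-locate rows that get joined/aggregated together
            CLUSTERED COLUMNSTORE INDEX         -- typical storage choice for large fact tables
        );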

    One of the biggest frustrations that people find with SQL DW is that you need (or rather, needed) to use SSDT to connect to it. You couldn’t use SSMS. And let’s face it – while the ‘recommended’ approach may be to use SSDT for all database development, most people I come across tend to use SSMS.

    But now with the July 2016 update of SSMS, you can finally connect to SQL DW using SQL Server Management Studio. Hurrah!

    …except that it’s perhaps not quite that easy. There are a few gotchas to be conscious of, plus a couple of things that caused me more frustration than I’d have liked.

    First I want to point out that at the time of writing, SSMS is still not a supported tool against PDW. You’ve always been able to connect to it to write queries, so long as you can ignore some errors that pop up about NoCount not being supported, but Object Explorer simply doesn’t work, and without Object Explorer, the overall experience has felt somewhat pained.

    Now, when you provision SQL DW through the Azure portal, you get an interface in the portal that includes options for pausing, or changing the scale, as per this image:

    [screenshot]

    And you may notice that there’s an option to “Open in Visual something” there. Following this link gives you a button that will open SSDT, and connect it to SQL DW. And this works! I certainly had a lot more luck doing this than simply opening SSDT and putting in some connection details. Let me explain…

    In that image, notice the “Show database connection strings” link. That’s where you can see a variety of connection strings, and from there, you can extract the information you’ll need to make a connection in either SSDT or SSMS. You know, in case you don’t want to just hit the button to “Open in Visual something”.

    [screenshot]

    When I first used these settings to connect using SSDT (rather than using the “Open in…” button), it didn’t really work for me. I found that when I used the “New Query” button, it would give me a “SQLQuery1.sql” window, rather than a “PDW SQLQuery1.dsql” window, and this wasn’t right. Furthermore, if I right-clicked a table and chose the “View Code” option, I would get an error. I also noticed that when I connected using the “Open in…” button, it would tell me I was connected to version 10.0.8408.8, but when I tried putting the details in myself, it would say version “12.0.2000”. I’ve since found out that this was my own doing, because I hadn’t specified the database to connect to. And this information turned out to be useful for using SSMS too.

    There is no “Open in SSMS” button in Azure. But you can connect using the standard Connect to Database Engine part of SSMS.

    [screenshot]

    And it works! Previous versions would complain about NOCOUNT, and Object Explorer would have a bit of a fit. There’s none of that now – terrific.

    And you get to see everything in the Object Explorer too, complete with an icon for the MPP database. But the version says 12.0.2000.8 if you connect like this.

    [screenshot]

    To solve this, you need to use the “Options >>>” button in that Connect to Server dialog, and specify the database. Then you’ll make the right connection, but you’ll lose the “Security” folder in Object Explorer.

    [screenshot]

    [screenshot]

    Now, it’s not perfect yet.

    When I look at Table Properties, for example, I can see that my table is distributed on a Hash, but it doesn’t tell me which column it is. It also tells me that the server I’m connected to is my own machine, rather than the SQL Azure instance.

    [screenshot]

    I can see what the distribution column is within the Object Explorer, because it’s displayed with a different icon, but still, I would’ve liked to have seen it in the Properties window as well. It’s not going to get confused by having a golden or silver key there, as it might in a non-parallel environment, because those things aren’t supported. If they do become supported, I hope they manage to come up with another way of highlighting the distributed column.

    [screenshot]
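
    If the icon is easy to miss, the catalog views will answer the question directly – a sketch using the PDW/SQL DW metadata (the table name is hypothetical):

        SELECT t.name AS table_name,
               c.name AS distribution_column
        FROM sys.pdw_column_distribution_properties AS d
        JOIN sys.columns AS c ON c.object_id = d.object_id AND c.column_id = d.column_id
        JOIN sys.tables  AS t ON t.object_id = d.object_id
        WHERE d.distribution_ordinal = 1        -- 1 marks the hash distribution column
          AND t.name = 'FactSales';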

    One rather large frustration is the very promising link on the database to “Open in Management Portal”:

    [screenshot]

    This opens a browser within SSMS (not exactly my preferred browser, but it seems like a good use for that feature). I’m okay with this, but following the link to the Query Performance Insight page, I’m immediately disappointed:

    [screenshot]

    I get that SSMS doesn’t host the most ideal browser for this kind of thing, and that I’m probably going to be running a separate browser anyway, but I would like this to be addressed in a future update.

    Probably my biggest frustration is that when I start a new query, I get this set of warnings:

    [screenshot]

    …which suggests that it doesn’t really know about SQL DW. I can tell SSMS to suppress them, so that the dialog doesn’t re-appear, but I don’t like the feeling that the system is attempting them at all.

    It’s certainly a lot less painful than it was in the past though. I love the fact that I can use the Object Explorer window. I love that I can script objects, in a way that feels way more natural to me than in SSDT.

    This is SSDT:

    [screenshot]

    This is SSMS:

    [screenshot]

    Oddly, though, the SSMS script includes the USE statement at the top, which isn’t supported in SQL DW (I’m sure this won’t be the case for much longer).

    [screenshot]

    Overall, I’m really pleased that the team has put things in place to make SSMS talk to SQL DW at all. I was beginning to think that SSMS wasn’t going to come to this particular party. This release, despite having some way to go just yet, suggests that I’ll soon be using SSMS more when I’m using SQL DW.

    And that makes this topic worthy of Chris Yates’ T-SQL Tuesday blog party this month – celebrating the new things that have come along in the SQL world recently.


    @rob_farley
