
Louis Davidson

  • Post-PASS Summit Resolutions 2017

    Another PASS Summit has passed, and I have had a bit of time to consider what I want to do with the time of the week that "normal" people call "free time." While during November and December I do my best to actually take some of that, I find the whole thing kind of weird. Watching TV, going to movies, decorating the house for the 4 holidays (starting with Halloween, ending on New Year's), and otherwise just enjoying the time when I am not being explicitly paid to be at my desk for my day job is equal parts awesome, exhilarating, boring, and, oddly enough, sad.

    I think sad is the one that is most interesting to me. Sad because so many people do this ALL of the time. Drag themselves to work, and then just plop down. Not spending time improving themselves, benevolently helping others, or doing anything interesting (and interesting would include just hanging out with your kids/family.) The opposite is true also, in that some people never stop doing something. I find sometimes that I am so involved in this, that, or the other thing that I never just rest and enjoy anything.

    This has been a driving factor in considering what I want to focus on this year.

      1. Devalue the MVP Award as one of my motivators - Wow, I had a hard time even saying this aloud. It feels a bit like going to your employer and saying, "you know, keeping my job is not as much my motivator." Yet, sometimes that is the problem with the MVP Award. Even though it isn't a job, the rewards (most of them with no monetary value at all) are pretty awesome, which frankly leads to a big issue: you..never..want..to..lose..it.

      And I am NOT saying I don't want to keep it. I am super proud of my MVP status, and will not "give it up" or stop reporting what I have done to them in hopes of adding a layer of award certificates to my tidy little stack. And realistically, I probably would have accomplished the same things I have done anyhow (blogged, authored and tech edited books, spoken at some user groups/conferences, etc.), but at times, I have looked at my list of accomplishments as a tally to grade myself against what I perceived as the "line to gain", not as what I can do to help more people, including the people I work with.

        2. Be of more value, meaning: learn more, build more, and blog more - I have 1000 tool ideas in my head, and another 1000 things I want to learn to make the issues that I (and so many people) deal with daily easier. So in my upcoming year, when I am sitting down to provide #sqlfamily value, my plan is to build and publish genuinely useful tools. During my work day, I struggle with a lot of stuff: designing, writing, testing, and (worst of all) supporting software. I need to make more tools to make my day job easier, and share those concepts in my blogs.

        3. Move away from SQLBlog.com - Yeah, another hard one, as I have been with SQLBlog for a very long time, and it is nice to just let someone else do the hosting with very little pressure about the amount or type of content I produce. Which, for the most part, I am working out a deal to do (though there will be more pressure to do better editing, and more frequent content.) I do plan to host my own blog somewhere, to cover my personal schedule along with the occasional opinionated stuff, but my technical work will likely be going on a different group blogging site, which leaves me more time to add value and not futz around trying to set up software (I am terrible at that! Everyone who asks my opinion on fixing their computer gets my standard answer… Either 1. Rebuild it 2. Buy a new one.)

          4. Somewhat related to the first point, get my work-life balance in check - I work too much. At times I don't mind. I make a few extra bucks now and again with my writing, and a bit less over the years with speaking. And I have kept some of my skills state of the art for 20+ years now. Yet, the problem is that I never let myself stop, unless I am on a vacation to Disney World (and even then I will occasionally take time to attend a meeting…I have carried a tablet around EPCOT to attend technology conference meetings a few times, in fact, but usually I can turn off my brain there.) When I am anywhere else, I usually fade into doing something SQL related (like I had intended to just watch the Titans at the Steelers from the couch tonight, but after I did some support work, I was at my computer already, so…I need to finish this blog.)

          My other big issue is that I am either obsessed with completing a task, or a major procrastinator. Generally, if there is a real deadline, I will work on something non-stop. I am here writing this because I can't wait too long or the topic will be stale. On the other hand, I referenced the 1000 tool ideas previously, and I have even more half-written blog posts that I have piddled on for years. There is a proper balance between "relax today and finish up tomorrow" and "why put off to next week what you can put off until next century."

          Lastly, since early in 2017 I have been going to the gym about an hour a day, and I have no desire to stop this practice at all. So family time first, then gym, proper rest, then #sqlfamily (well, at least most of the time.) 

            5. Either adjust how I deliver sessions or quit - I will write more on this in a later blog, but suffice it to say that even after 16 years of conference speaking, I still have a few major flaws I cannot get around. My last session at the PASS Summit highlighted the issue yet again, and I will blog about that later this year. Part of it is that I tend to take a beginner sounding topic (like Row Level Security) and turn it into a slightly-more-than-intermediate session that covers more than what a lot of attendees are ready to hear. It is the kind of thing that works when writing for a technical audience, but speaking requires a different sort of people skills than I have generally mastered.

            Honestly, I could just as easily go to the same number of SQL Saturdays and enjoy them even more without the pressure of writing and preparing for sessions (which takes a LOT more time than blogging about the same topic, and reaches a lot fewer people too).

            6. Regularly participate in T-SQL Tuesday - I did write one T-SQL Tuesday post this year, but I don't know why I don't do more. Maybe it is the deadline/procrastination thing. Or maybe I just don't pay enough attention; I know that is part of it, as I am not on Facebook, and I only check Twitter infrequently.

            7. Keep involved with PASS - Absolutely with the local Chattanooga User Group. I am a big fan of Larry Ortega, and I want to do more to help out with the UG and the SQL Saturday.

            Secondly with the Program Committee if they want me back again next year. I had a great year working on the committee again this year, even if we didn't do any of the abstract review things we had planned.

            I may consider starting a tiny user group in my area, as a complement to the group in Chattanooga (which is a 40 minute drive from my house), if there are people who can't do the night meeting (or want to meet twice), particularly in Cleveland, TN or further north. I will NOT do anything to the detriment of the main Chattanooga group however.

            8. Plan to make easier resolutions for next year - Like really, you know. These are just too much for my brain to handle. That MVP one is really close to being deleted as I finish this post.

            Well, here's to the 2017-2018 PASS year. Next year, I have no idea where I will post my resolutions, but I will definitely still do them; maybe I will just make them a page on my website. I definitely like doing them like this for two reasons. One, several people whom I respect liked that I did this. Two, putting it out there in public reminds you that you have to follow through. If I just wrote them down and tacked them on my wall, I would ball them up and hit the waste bin next to my chair in a week or two.

            • PASS Summit 2017 Followup

              Ah, back home in my comfy work chair, feet up, surrounded by the warm glow of computer monitors with the football game on (and at the right time, not 3 hours too early!) It was quite a week, and I learned more stuff this year than normal, and have a few nuggets I really should have known before, that I know now. I will say that there was an interesting feeling this year, like maybe there were fewer people than in the past. It also may have been that I just went to Ignite, and that place was packed. Frankly though, I really enjoyed myself, and felt like the conference was pretty awesome.

              Here are a few highlights of the trip for me…

              • The keynotes:

                • Rimma Nehme's Keynote - I love architecture sessions, and this one was excellent. Not so high or low level that I started working on saving the princess (again) in Mario Run on my phone. The insights into Cosmos DB were understandable, and useful for more than just Cosmos DB. I love these keynotes done by academic types, particularly ones who can speak at my level (she called it momsplaining :)) without making me feel dumb. If you don't believe me, here is the link to watch the whole thing: http://www.pass.org/summit/2017/PASStv.aspx?watch=7SRi9vyDtWY
                       
                • Rohan Kumar's Keynote - He introduced SQL Operations Studio, which is an interesting, cross platform management tool for Microsoft's data platforms. There were demos of adaptive query processing, and several other things that I won't try to cover. Two ways to catch up here: search for RohanKSQL on Twitter, where you will see the horde of folks (including myself) who tweeted the keynote; or just watch it for yourself: http://www.pass.org/summit/2017/PASStv.aspx?watch=NGbk9XGWTHI.
                       
              • The Microsoft sessions - Some years, we get a lot of sessions from Microsoft, and at times, the speakers are clearly very knowledgeable technical folks, but not necessarily great teachers. I did not attend anything like that this year. I attended two panels where they answered our questions without hesitation (well, most of them, unless one of us MVPs was asking for stuff for the 100th time just to see if they would answer differently in a non NDA setting… they did not). One of these panels had about 15 folks who know the product through and through (because they wrote/manage at least one part of it). The other had 5 people involved in building the Memory Optimized table feature.

                I went to two specific feature sessions from Microsoft: a session on graph databases in the last time period of the last day by Shreya Verma and Arvind Shyamsundar that was very much worth staying the extra session after I finished mine; and a session by Michael Rys on Modernizing ETL with Azure Data Lake, which may have finally inspired me to dig in and learn U-SQL.

              • The non-Microsoft sessions - Since I went to 4 Microsoft sessions, and had a lot of conversations throughout the week, that didn't leave a ton of time for other sessions. I went to sessions on indexing from Kendra Little, and sessions from RedGate and Jens Vestergaard on DevOps from a few perspectives. All of these sessions were top notch, and I learned something in every one. DevOps, Continuous Deployment and Integration are things that I am very keen to get into my organization.

                I stopped by one timeslot of Speaker Idol and am just glad I have spoken in the past. That process seems fun, but doing a 5 minute session and being judged on it immediately in front of a room full of people? Yikes.

              • The conversations - Sometimes I think that the best part about the PASS Summit for me is reconnecting with/meeting new people. I met a ton of people, some for the first time, some for the first time again. I had a few meetings over lunch with a few of the people I work with at Apress and Redgate, and many other discussions on Program Committee stuff; just about any time I wasn't in a session, I was chatting about something SQL Server related. It is a shame really that we can't just get all of the regulars at SQL Saturdays/User Groups a free pass (and hotel and airfare) so everyone could be there. There were a lot of people who couldn't make it for many reasons (for example, Denny Cherry, who had quite an outpouring of well wishes from folks at the Summit, including myself, many others and even Rohan in his keynote.)

              • The food - Well, in this case, I didn't actually eat a bite of any of the meals at the Summit. But I did eat some great food around the venue, three places in particular:

                • Mod Pizza - across from the Hilton. Buffalo sauce. On a pizza. Marriage made in Heaven. (Actually had this twice because I had to urgently go back and work one night and it was all that was on my way back to my room…that's my excuse and I am sticking to it). 
                • Ruth's Chris - Nice filet, and their Brussels sprouts with honey and bacon were excellent. Their Brussels sprouts are what got me started on my love of Brussels sprouts. (Yes, I know it is a chain and overpriced… Steak was excellent!)
                • Dragonfish Asian Cafe - Had a nice lunch there with their seafood bento box.
                • Honorable mention: The bacon for breakfast at the Hilton Executive lounge was excellent.
                  Dishonorable mention: The lunch at the Cheesecake Factory was pretty weak, though the Nashville Hot Chicken nuggets were not terrible. Nothing compared to the meals I had when I was passing through Nashville at Chef Big Shakes and Prince's Hot Chicken Shack South.

              • My session - It may be the best session I have ever done (the last time I said that, 15 years ago, I was wrong, but this time for sure). I felt comfortable, well enough rehearsed, and finally had what felt like a good balance of slides to demo: 10% slides to intro the basic concept and structure of Row Level Security, then all of the details (like needing to cover multiple tables, or slow functions), in the code. And I even finished out the session cleanly, and people realized it was over. In the last few sessions I have given, I didn't wrap up well enough, so people ended up staring at me until I turned off my computer and started packing up.

              All in all, I had a great time, and learned quite a few things I didn't know. Some years, learning stuff has not necessarily occurred, particularly when I was giving a session (mostly because I spent too much time preparing my session…) Will I be back next year at the Summit? 98% sure the answer is yes.

              Beyond PASS, my week started on Tuesday at a small conference (referred to as the Freecon) put on by Jason Brimhall and Wayne Sheffield. I gave a session on hierarchies, and saw some great sessions by Gail Shaw, Jimmy May, Andy Leonard and Wayne (Jason's session was earlier than I could drag myself out of my room.) If (like me) your budget doesn't fit adding a full precon (which you absolutely should consider if possible; there were a bunch of great ones where you can get a deep, full day session on a number of great topics, and I will go out on a limb and guarantee that the same will be true next year, and the next, and…), getting another day of sessions was a great way to kick off things. It was in Gail's session that I realized a mistake I have made for years: altering the value of a parameter that is passed in, which can cause the plan to be substandard. That made the day worth it, even if I had left Seattle then.
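
              To make that concrete, here is a minimal, hypothetical sketch of the pattern (this is not Gail's demo code; the table, column, and procedure names are made up):

              --the plan for the query below is compiled using the value passed in for
              --@FromDate (often NULL), not the value assigned inside the procedure, so
              --the optimizer's row estimates can be badly wrong
              CREATE OR ALTER PROCEDURE dbo.Order$List
                  @FromDate datetime2(0) = NULL
              AS
              BEGIN
                  --overwriting the parameter does not change what the optimizer sniffed
                  SET @FromDate = COALESCE(@FromDate, DATEADD(DAY, -7, SYSDATETIME()));

                  SELECT OrderId, OrderDate
                  FROM   dbo.[Order]
                  WHERE  OrderDate >= @FromDate;
                  --common workarounds: add OPTION (RECOMPILE) to the query, or branch
                  --to separate queries for the NULL and non-NULL cases
              END;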

              In my session, beyond introducing hierarchies and the new graph features, I introduced the world to Dr Squirrel, my alter ego that comes from my years of being called this name by people who have no idea what SQL is. It was worse when I had DRSQL as a license plate, for sure (Dr Squeal was the other alternative.) I wore the lab coat, and introduced my sidekick, Acornsie, a small squirrel who keeps me on track when I am presenting/working. It will be interesting to see if Acornsie has legs and actually ends up travelling with me.

            • PASS 2016 Resolutions Followup

              Wow, it's been over a year now since I made my second round of foolish promises and plans. Now I take a quick pass at saying how I did at these promises before the major annual gathering of SQL Server loving nerds called the PASS Summit.

              Like all resolutions, you win some, you lose some. But I have stayed busy, so busy that I didn't finish this until the night before the Summit starts, at midnight Pacific Time (which is 3 hours off of normal time!)

              1. Keep involved locally.

              I was definitely able to stay involved locally (even after moving), attending most of the Chattanooga User Group meetings (speaking at the last one of the year), and speaking at the Chattanooga SQL Saturday. Larry has a great group, with some great leadership in place. I don’t plan any change to my Chattanooga involvement, unless needed. I do plan on trying to expand into a new group, possibly in Cleveland TN or south of Knoxville (I am about an hour south of that area), depending on if I find others interested who could not ever make it to Chattanooga. A lot depends on whatever happens that is unknown to me now, and if I can find anyone else in the areas north by northeast of Chattanooga that want to start something up.

              2. Write more.

              Well, based on the title, I was definitely able to meet that one. Most of my free time has been consumed with writing projects… just mostly not my blog, and certainly not the DMV book update I was hoping to do.

              I finished my database design book, and wrote half of an exam ref book for the 70-762 exam for Microsoft (Stacia Varga wrote the other half). Since those projects ended, I have written 5 or 6 articles about SQL Prompt for Red Gate (and am a really big fan of the tool, more so after digging deep into its features) and have been tech editing a couple of books. I have blogging plans to make for next year, but more on that after PASS Summit is over.

              3. Continue to speak, but do so "economically."

              I have spoken quite a few times this year, at several SQL Saturday events, as usual, but at more local user groups than in the past. Nashville, Richmond, and Birmingham, not to mention Chattanooga (SQL and the .Net group also). Early next year I will be speaking at Hampton Roads.

              There were a few new experiences this year: Scenic City Summit in Chattanooga was a great one. And tomorrow (it was next week when I made my first draft!), I will be one of the speakers at Jason Brimhall and Wayne Sheffield’s Freecon.

              I did not put in an abstract at the Summit this year, as I was involved with the program committee in a way that was different from last year, and I just didn’t want to make it look conspicuous in any way. I still have the memory of working on the group and getting selected while a few friends of mine at dinner one night did not. At least one of those people was a better speaker than I will ever be (though I don't feel inadequate, just not "great", which I felt this other person was). I didn't like that feeling, even though I had little to do with the choice in question.

              However, I did end up getting a speaking slot at the Summit when a number of people dropped out and they needed a replacement (and they knew I was already there on their nickel :)). I can't wait to see how this experience affects my feelings on speaking at the Summit/other larger events. They promised me the smallest room, and I have loosened up my speaking style in the past few years by doing more simple, straightforward, demo oriented sessions.

              4. Start recording of my old presentations and put on (possibly) YouTube.

              Nope. Still haven’t done this. However, I do have some plans this year for some video training. Either on my own, or something similar. More on that when I make next year’s ridiculous plans, er, resolutions.

              5. Upgrade my website to advertise my talents better.

              My drsql.org site was upgraded this year, since my Office 365 site was about to be yanked from me. But it still needs a lot of work (which I will probably make a lot of crazy promises about in a few weeks!). Really the best part of the new site is that my downloads are all in Dropbox, which is really easy to maintain and get slides up right after I finish a conference.

              6. Learn something new.

              When I first heard about the new Graph feature in SQL Server .Next (now 2017), I was keen to eat it up and be one of the people who knew it best. But once I saw it, it was a bit underwhelming as a current feature, so I didn't get into it until recently. I believe its future is going to be big, though, and I have started some design work to use graph objects soon.

              I am doing my presentation at the Freecon on hierarchies, and I will focus about 1/3 of my time on graphs, and using relational tables and the graph feature to implement a graph. But while some aspects of the feature are awesome, it is not currently a tremendous improvement over existing ways to implement a graph or tree, except in some use cases.

              7. Continue to stay involved with the PASS Program Committee reviewing and commenting on abstracts for people before they submit them.

              As PASS often does, things changed this year. Rather than enhancing the way we worked on abstracts to comment and edit them for people, that whole process was axed. I did stay active with the committee (link here), and there will be two sessions at the summit from 12-1 in the Community Zone where we will talk with people about abstracts and pretty much anything that is desired about the process.

              8. Get more fit. 

              Of all of my resolutions, this is the one that has seen the most movement. Though I have not lost much weight this year, I am a lot more fit, thanks to the move to Cleveland TN. Two big reasons: first, there is only one restaurant on OpenTable within a 30 mile radius of my house, and it is too expensive except for special occasions. There are also no great soul food places or hot chicken joints around either.

              The second is the YMCA that is only 4 miles from my new house. ~1 hr every day has really kicked my fitness up a notch. I am constantly tired and hurting from my workouts, but at least I will probably be around a few extra years (or at least it will feel like it!)

              I will use a cane most of the time at the Summit, due to some ongoing knee issues and weakness in the hip that kept me from PASS Summit 2 years ago. After 3-5 miles of walking I start looking like a Weeble, but unlike a Weeble, I can indeed fall down.

              9. Keep stretching my comfort zone.

              The kilt will be making the trip to Seattle again, as will my new Halloween Costume… I have been thinking about this one all year, and while my family will find it humorous for the shock value of seeing me wearing a squirrel mask, I had hoped that my SQL friends would find it even more humorous. Alas, the mask didn't make the trip, but the rest did and will be worn. I believe this will be my first costume I have worn (at least the first that is photographed) since I was a little guy shaking down my neighbors for lunch candy 40 years ago.

              10. Start working on following through on my secret resolution from last year.

              My secret resolution is still just an idea, and perhaps it is an idea someone else may beat me to if I share it. The idea is to start a new conference (series?) of just advanced and even expert level sessions (SQL at least, but other disciplines too). I like, nay, love, SQL Saturday events, the PASS Summit, and all the many conferences there are, but oftentimes I don’t learn more than a nugget or two about the stuff I am interested in, and sometimes that is from writing my session, or jawing around in the speaker room.

              I don’t know if I would present or not, but starting a session expecting that everyone already knew the basic-to-intermediate material would be a lot of fun.

              11. If I am not having fun, disregard any of the previous items except #8 (because that one is truly important).

              There’s an old saying that no one ever says on their death bed “I wish I worked more.” That is baloney. For a few reasons:

              1. You might not be on your death bed if you had worked instead of sleeping in every day and had money to afford proper food, shelter, etc.
              2. Work is fun, especially when it benefits others.
              3. We have no idea what everyone in the world has said. That is statistically impossible!

              The thing no one has ever likely said is “I wish I had worked so much that I didn’t do anything other than work”. Sometimes all of the speaking and writing and being away from family is tiring as can be. But I still love it, at least after preparing for it, and still look forward to sitting in this chair typing away, or sleeping the sleep of the tired in a hotel bed between conference days; at least most of the time. Sometimes it is a drag, I won’t lie. But 90% of the time, I still enjoy working on the latest feature in SQL Server, and the other 10% is still valuable to keep me employable if my 20 yrs at CBN doesn’t become at least 37 (full retirement age :)).

              ----

              Well, that's it…So many ideas swirling around my head for projects that I have always wanted to do (for example, modeling and implementing a database for conference registrations and speaker management…) but I have another project I plan to do next year that I will spill the beans on in next PASS year's resolutions, unless I get inspired (or offered) something better to do!

            • Preparing for my Freecon Session

              Deep breath. There are two weeks and two days before the PASS Summit. And like the past few years, I didn't submit any sessions, so those three days will be nonstop learning and chilling with all of the SQL family who make it to Seattle.

              But two weeks and one day from now is the "Seattle SQL Pro Workshop 2017", AKA the Seattle Freecon 2017. It is a one day event, with a slate of speakers that I would be honored to be listed amongst, even if it was just the list of people at the PASS Summit. They are: Andy Leonard, Jimmy May, Gail Shaw, Wayne Sheffield and Jason Brimhall. Nice.

              There are sessions on SSIS, SQL Injection, Parameter Sniffing, Monitoring IO, and then my session, which is very much T-SQL coding oriented.

              The abstract is here:

              ---------------------

              Implementing a Hierarchy in SQL Server

              One of the more common data structures you will come across in the real world is a hierarchy, sometimes a single parent "tree" or more commonly a multi-parent "graph". Many systems will implement the obvious examples, such as a corporate managerial structure or a bill of materials, however, it turns out that almost any many-to-many relationship can be treated as a hierarchy (for example, a customer to the products they purchased, or the relationship of actors to movies they’ve been in). In this session, we’ll discuss several implementations of trees and graphs using interesting algorithms and built in T-SQL features (CTEs and SQL Graph) that you can use to optimize your hierarchy implementations and put into practice immediately (along with ready-made T-SQL example code.)

              ---------------------

              I have been interested in hierarchies since I attended Dr Rozneshtein's T-SQL training class back late last century. His book, The Essence of SQL: A Guide to Learning Most of SQL in the Least Amount of Time, was one of my earliest influences on writing excellent SQL, favoring set-based processing and limiting the number of passes you need to take through a set of data to provide the most performance, particularly in reporting type queries where you are processing a large amount of data. During the class, he gave us the start of a book he was working on, named "Tree and Graph Processing in SQL", and while I didn't do much with it back then, it started to foster my interest in the subject.

              Then a few years back, when I was writing the 2012 edition of my book, I wanted to expand my introduction of hierarchies, so I read a lot of current material on the subject, notably Joe Celko's "Joe Celko's Trees and Hierarchies in SQL for Smarties", some of Paul Nielsen's material in his last involvement in the SQL Server Bible series, and a lot of online articles. From this, I was able to put down some conceptual material on several interesting ways to implement a hierarchy. Not long after, I created a presentation and a lot of sample code to implement and test different methods of implementing tree hierarchies. Most of this presentation will be the same as it was back then (assuming the code works and I can remember how it still works, naturally :)).

              What was a very minor part of the presentation last time was graphs. I implemented a simple graph using relational tables, using a recursive CTE for processing. I will extend that example using the new graph tables feature in SQL Server 2017, to show the features that the graph tables give you.
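
              For a flavor of the two approaches, here is a minimal sketch (the table names are mine, not the session's demo code): a classic adjacency list walked with a recursive CTE, and the equivalent shape using the SQL Server 2017 graph objects.

              CREATE TABLE dbo.Employee
              (
                  EmployeeId int CONSTRAINT PKEmployee PRIMARY KEY,
                  Name       nvarchar(50) NOT NULL,
                  ManagerId  int NULL REFERENCES dbo.Employee(EmployeeId)
              );

              --walk the tree from the root, tracking the level as we go
              WITH Reports AS
              (
                  SELECT EmployeeId, Name, ManagerId, 0 AS Level
                  FROM   dbo.Employee
                  WHERE  ManagerId IS NULL --the root of the tree

                  UNION ALL

                  SELECT Employee.EmployeeId, Employee.Name, Employee.ManagerId, Reports.Level + 1
                  FROM   dbo.Employee
                           JOIN Reports
                              ON Employee.ManagerId = Reports.EmployeeId
              )
              SELECT EmployeeId, Name, Level
              FROM   Reports;

              --the same shape with the 2017 graph objects: nodes and edges are tables,
              --and MATCH expresses the traversal
              CREATE TABLE dbo.Person (PersonId int CONSTRAINT PKPerson PRIMARY KEY, Name nvarchar(50) NOT NULL) AS NODE;
              CREATE TABLE dbo.ReportsTo AS EDGE;

              SELECT Manager.Name
              FROM   dbo.Person AS Employee, dbo.ReportsTo, dbo.Person AS Manager
              WHERE  MATCH(Employee-(ReportsTo)->Manager)
                AND  Employee.Name = N'Louis';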

              Honestly, it is way too much to do in an hour, even if I didn't cover graphs, so I will economize as much as possible, but a load of code will accompany the session, which you can download from my presentation page even if you don't make it to the Freecon: http://www.drsql.org/presentations. Of course, the code will be a lot easier to follow if you attend!

              If you failed to click the link earlier, go to the Seattle Freecon 2017 Eventbrite link now. (And before you cry "I thought you said free!", the training and snackage are free; there is a charge for lunch, which is not required but highly recommended…)

            • How the rowversion datatype works when adding and deleting columns

              For years, I had thought (and was probably taught in SQL.AlongTimeAgoInAPlaceFarFarAway) that the timestamp column (well before rowversion was a thing) was not guaranteed to be an ever-increasing value. But this is not the case; it is guaranteed to increase.

              In BOL (https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql)  it states:      

              "The rowversion data type is just an incrementing number..."

              This makes it useful for determining rows that have changed, because it is automatic and the user cannot override the value in the column. However, there is a major concern when you use rowversions, and that is what happens when you change the structure of the table, and expect the consumer to see that change. The problem is that when you change the structure of the table, the rowversion will not be set (except when adding a new rowversion column.)

              So let's create a new database and a set of tables:     

              CREATE DATABASE TestRowVersion;
              GO
              USE TestRowVersion;
              GO      
                     
              Now, let's check the two functions that we can use to see the status of the rowversion values:

              SELECT @@DBTS AS DBTS, --the last rowversion that has been used
                     --the next rowversion that will be used
                     MIN_ACTIVE_ROWVERSION() AS MIN_ACTIVE_ROWVERSION;

              This returns the following, and always has on every new database I have ever built and checked (which over the years is more often than I care to remember):

                   
              DBTS               MIN_ACTIVE_ROWVERSION
              ------------------ ---------------------
              0x00000000000007D0 0x00000000000007D1

              Next, let us create a few new tables, two with rowversion columns:

              CREATE TABLE TestRowversion1
              (
                  TestRowversion1 int CONSTRAINT PKTestRowversion1 PRIMARY KEY,
                  RowVersion rowversion NOT NULL
              );
              CREATE TABLE TestRowversion2
              (
                  TestRowversion2 int CONSTRAINT PKTestRowversion2 PRIMARY KEY,
                  RowVersion rowversion NOT NULL
              );      
              CREATE TABLE TestNoRowversionYet
              (
                  TestNoRowversionYet int CONSTRAINT PKTestNoRowversionYet PRIMARY KEY
              );     
              GO      
                    
              The rowversion values have not changed (only using the @@DBTS from now on because it is usually what we want to see):      
               
              SELECT @@DBTS AS DBTS;

              You will see no change from the first execution.

              Now, let's add a few rows to the tables with rowversions.

              INSERT INTO dbo.TestRowversion1(TestRowversion1)
              VALUES(1),(2),(3);
              GO
              INSERT INTO dbo.TestRowversion2(TestRowversion2)
              VALUES(10),(20),(30);     
              GO      
               
              And now check the values:      
               
              SELECT *
              FROM    dbo.TestRowversion1;
              SELECT *
              FROM    dbo.TestRowversion2;

              SELECT @@DBTS AS DBTS;

              This will return:
                    
              TestRowversion1 RowVersion
              --------------- ------------------
              1               0x00000000000007D1
              2               0x00000000000007D2
              3               0x00000000000007D3

              TestRowversion2 RowVersion
              --------------- ------------------
              10              0x00000000000007D4
              20              0x00000000000007D5
              30              0x00000000000007D6

              DBTS              
              ------------------
              0x00000000000007D6

              Next step, let's add a new column to both tables to see what the effect is. In the first table, the column will be NOT NULL, with a default, and the second will be null.

              ALTER TABLE dbo.TestRowversion1
                  ADD NewNotNullColumn varchar(20) NOT NULL
                      CONSTRAINT DFLTTestRowversion1_NewNotNullColumn DEFAULT ('Not Null'); 
                  
              ALTER TABLE dbo.TestRowversion2
                  ADD NewNullColumn varchar(20) NULL;

                  
              Now, let's look at the data:

              SELECT *
              FROM    dbo.TestRowversion1;
              SELECT *
              FROM    dbo.TestRowversion2;

              SELECT @@DBTS AS DBTS; 

              You will see the following:      

              TestRowversion1 RowVersion         NewNotNullColumn
              --------------- ------------------ --------------------
              1               0x00000000000007D1 Not Null
              2               0x00000000000007D2 Not Null
              3               0x00000000000007D3 Not Null

              TestRowversion2 RowVersion         NewNullColumn
              --------------- ------------------ --------------------
              10              0x00000000000007D4 NULL
              20              0x00000000000007D5 NULL
              30              0x00000000000007D6 NULL


              DBTS
              ------------------
              0x00000000000007D6


              Note that, from a RowVersion standpoint, the values have not changed.

              So, if you are using this value in some form of ETL, this is something you will need to be aware of, particularly for columns that are declared as NOT NULL. You may need to tweak the rowversion values, using a query such as:      
                    
              UPDATE dbo.TestRowversion1
              SET    NewNotNullColumn = NewNotNullColumn;            
                    
              Note that if your software keeps the last rowversion per table, and not at the full database level, you might just set that value back to the start of time, 0x0000000000000000, which will perform a lot better! In fact, for a very large table, you may need to do a "chunked" update, just updating rows where the rowversion value is still at or below the value @@DBTS returned before you started, as sketched below.
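
              A minimal sketch of what that chunked update might look like (the batch size is arbitrary):

              DECLARE @BatchSize int = 10000,
                      @StartingVersion binary(8) = @@DBTS; --rows at or below this value have not been re-stamped

              WHILE 1 = 1
              BEGIN
                  --each pass stamps the touched rows with a rowversion greater than
                  --@StartingVersion, so they drop out of the WHERE clause next time
                  UPDATE TOP (@BatchSize) dbo.TestRowversion1
                  SET    NewNotNullColumn = NewNotNullColumn
                  WHERE  RowVersion <= @StartingVersion;

                  IF @@ROWCOUNT < @BatchSize BREAK; --the last (possibly partial) batch is done
              END;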

              Checking the data:     

              SELECT *
              FROM    dbo.TestRowversion1;      
                     
              You will see the rowversion values have now been incremented:

              TestRowversion1 RowVersion         NewNotNullColumn
              --------------- ------------------ --------------------
              1               0x00000000000007D7 Not Null
              2               0x00000000000007D8 Not Null
              3               0x00000000000007D9 Not Null

              Now, let's drop the new column, and see if there is any change:     
               
              ALTER TABLE dbo.TestRowversion1
                  DROP DFLTTestRowversion1_NewNotNullColumn;

              ALTER TABLE dbo.TestRowversion1
                   DROP COLUMN NewNotNullColumn;      
                     
              Now, check the table:      
               
              SELECT *
              FROM   dbo.TestRowversion1;       
                     
              And you will see that the rowversion values are unchanged:

              TestRowversion1 RowVersion
              --------------- ------------------
              1               0x00000000000007D7
              2               0x00000000000007D8
              3               0x00000000000007D9

              This is, from a performance standpoint, understandable. You would not want every single row in the table to change when adding or deleting a column; especially for a nullable column, I was not surprised that the rowversion stayed the same. It is, however, just something you need to realize when using rowversions for ETL (it would be the same if you created your own time-based ETL datetime value as well).

              Finally, what about when you add the RowVersion column to the table? This one is a bit more obvious than the previous case, since it needs to grab a value for each row, but it never hurts to check it out.
                    
              SELECT @@DBTS;      
                    
              INSERT INTO dbo.TestNoRowversionYet(TestNoRowversionYet)
              VALUES(1),(2),(3);     

              SELECT @@DBTS;      

              Which returns the same value twice, since there is no change:

                   
              ------------------
              0x00000000000007D9

              ------------------
              0x00000000000007D9

              Now we add the rowversion:     
               
              ALTER TABLE dbo.TestNoRowversionYet
                  ADD RowVersion rowversion NOT NULL; --you can declare NULL, but it still behaves the same      

              And the value has changed for the database, and the rows have rowversion values:

              SELECT @@DBTS;

              SELECT *
              FROM   dbo.TestNoRowversionYet;     

               

              ------------------
              0x00000000000007DC

              TestNoRowversionYet RowVersion
              ------------------- ------------------
              1                   0x00000000000007DA
              2                   0x00000000000007DB
              3                   0x00000000000007DC

              Using rowversion is a very easy way to build a fast, and reliable method of change detection, but it is not without its caveats, which could bite you hard if you do not realize what is occurring.      
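
              For reference, here is a minimal sketch of the typical "high water mark" read that such change detection uses (the persisted value is hypothetical, and this simple version ignores in-flight transactions, which MIN_ACTIVE_ROWVERSION() can be used to guard against):

              DECLARE @LastVersionFetched binary(8) = 0x00000000000007D1; --persisted by the ETL process after each run
              DECLARE @CurrentVersion     binary(8) = @@DBTS;             --capture before reading

              SELECT TestRowversion1, RowVersion
              FROM   dbo.TestRowversion1
              WHERE  RowVersion >  @LastVersionFetched
                AND  RowVersion <= @CurrentVersion;

              --persist @CurrentVersion as the new @LastVersionFetched for the next run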

            • Doing a book signing at the DataCore booth at Microsoft Ignite!

              If you are in the intersection of these two groups:

              • Doesn't currently have a copy of my current database design book, or wants to give one to a friend who architects poor databases
              • Will be attending the Microsoft Ignite Conference

              Then I have a great deal for you. For the middle four days of Microsoft's Ignite conference in Orlando, I will be signing 300 books at the DataCore booth #2213, where they are launching a product called MaxParallel for SQL Server.

              There is no requirement to purchase anything from DataCore to get your free, signed copy of my book (or even talk to them at all, if you don't want to!) Just line up, say hi, and I will put marks in the book that will reduce it from mint to near mint condition. Obviously, they hope you hang around and check out the product, and at least let them scan your badge to send you a bit of information. They are nice people and their product is very interesting, so I hope you do.

              You can read more about MaxParallel here, and I am certain that you are going to be as skeptical as I was. After I had the demo from them that will be similar to what they do on the show floor, I personally tried it in a lab setting (an Azure VM, 4 CPUs, with 8 spinning disks, using HammerDB to do a TPC test), and I saw rather promising results. I know they stand behind the product, because if interested you can try it yourself now on the Azure Marketplace for 30 days without spending a dime, and you will also be able to try it on your on-premises edition before buying. (I know I personally don't like buying an iPhone app for 3 bucks if I can't see if it works as expected!)

              The book they are giving away is my latest: Pro SQL Server Relational Database Design and Implementation and I will be at their booth at least for the following times:

              Monday: 1 PM -3 PM
              Tuesday: 2 PM- 4 PM
              Wednesday: 2 PM- 4 PM
              Thursday: 10 AM to 12 PM

              Why my book? The topic of how to build and implement a database properly goes hand in hand with performance tuning. The better you build your database, the more data you can work with in a smaller amount of time with fewer resources. Once your database is built, properly or not, and code has been written, changing the code is hard, and changing the design is exponentially harder. That is where the iterative nature of SQL Server comes in, getting better almost yearly now. Add to that hardware increases, and a database platform optimization tool like MaxParallel, and you can help bad code run faster, and great code run fastest.

              Hope to see you there!

              Disclaimers

              Some of my expenses are being reimbursed by DataCore (conference entrance and lodging), but I am not being reimbursed for my time or other travel expenses. The books they are giving away were purchased directly from my publisher. Book inscription is not guaranteed to be legible or even make sense until you read the book.

            • Temporal Tables - Part 6 - Start and End Times in UTC Time Zone

              In my first 5 blogs on temporal tables, I failed to note something pretty important about their usage. The start and end times for the row (and, it follows, the historical rows) are likely not in your local time zone. They are stored in the UTC time zone. This is obviously good because of that pesky daylight saving time issue where 1:30 AM occurs twice each year (here in the US, that occurs on the first Sunday of November).

              Unless you live in London, England or someplace where the offset from UTC is 0, it can make working with these tables confusing, because most people rarely work in UTC time, and even rarer is to think in UTC time when most of your data is likely in your local time zone. So you write your query and use local time...and then, most likely, data is returned…but not necessarily the data you actually desired.

              The place you will probably notice the issue is right after you create a table, especially if your server is in a time zone that has a negative offset from UTC. What will happen is that you insert a row, and immediately try to fetch a row using the FOR SYSTEM_TIME AS OF the current time in your query, and nothing will be returned. What you will have done is ask for rows that existed before your table even existed. For example, consider the following structure (basically the same table structure used in part 5):

              CREATE DATABASE TemporalExample
              GO
              USE TemporalExample
              GO
              CREATE SCHEMA Sales;
              GO
              CREATE SCHEMA SalesHistory; --Unlike earlier examples, I put the history in its own schema for granting security purposes.
              GO
              CREATE SEQUENCE Sales.SalesOrderData_SEQUENCE MINVALUE 1 START WITH 1;
              GO
              CREATE TABLE Sales.SalesOrder
              (
                  SalesOrderId int NOT NULL CONSTRAINT PKSalesOrder PRIMARY KEY,
                                            --default to a text hex value, so we can see changes...
                  Data varchar(30) NOT NULL DEFAULT (SUBSTRING(CONVERT(varchar(20),
                                   CAST(NEXT VALUE FOR Sales.SalesOrderData_SEQUENCE as varbinary(5)), 1),3,12)),
                  ValidStartTime datetime2 (0) GENERATED ALWAYS AS ROW START, --(0) to keep the output short. Typically (7)
                  ValidEndTime datetime2 (0) GENERATED ALWAYS AS ROW END,
                  PERIOD FOR SYSTEM_TIME (ValidStartTime, ValidEndTime)
              )
              WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = SalesHistory.SalesOrder));

              Now, I will create a couple of new rows. Nothing too interesting; they just need to have a start and end time, and 2 rows seem cleaner than 1:

              INSERT INTO Sales.SalesOrder(SalesOrderId, Data)
              VALUES (1, DEFAULT), (2, DEFAULT);
              GO

              Check the rows of the table:

              SELECT *
              FROM Sales.SalesOrder;

              This returns data just as expected, except note that I am not writing this blog after midnight in the Eastern Time Zone; it was just after 8:00 PM, and the times shown are UTC:

              SalesOrderId Data                           ValidStartTime              ValidEndTime
              ------------ ------------------------------ --------------------------- ---------------------------
              1            0000000001                     2017-09-18 00:07:25         9999-12-31 23:59:59
              2            0000000002                     2017-09-18 00:07:25         9999-12-31 23:59:59

              So if I write my query to get the data as of now, as I did when I was building my new temporal table, using SYSDATETIME():

              DECLARE @AsOf datetime2(0) = SYSDATETIME(); --This time will be just after 8:00 PM unless I waited 4+ hours.

              SELECT *
              FROM  Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOf;
              GO

              Nothing at all:

              SalesOrderId Data                           ValidStartTime              ValidEndTime
              ------------ ------------------------------ --------------------------- ---------------------------

              Change SYSDATETIME() to SYSUTCDATETIME(), and you will get back the same data as you would without the FOR SYSTEM_TIME AS OF. Of course, it is a VERY unlikely usage of the FOR SYSTEM_TIME clause to only get back the current data in the table when you can just not have the clause, unless you are building a tool and you do something like:

              SET @asOfParameterValue = ISNULL(@asOfParameterValue,SYSUTCDATETIME())
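
              In a procedure, that pattern might look something like this minimal sketch (the procedure name is hypothetical; the table is the one created above):

              CREATE OR ALTER PROCEDURE Sales.SalesOrder$AsOf
                  @AsOfParameterValue datetime2(0) = NULL --NULL means "as of right now"
              AS
              BEGIN
                  SET @AsOfParameterValue = ISNULL(@AsOfParameterValue, SYSUTCDATETIME());

                  SELECT SalesOrderId, Data, ValidStartTime, ValidEndTime
                  FROM   Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOfParameterValue;
              END;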

              As most people are going to want to work in their specific time zone, we can change our queries to convert the time zone on the parameter.  So we can change the data using the AT TIME ZONE function as I covered in my previous blog on that subject:

              DECLARE @TimeLocal datetime2(0) = SYSDATETIME(); --AT TIME ZONE will not work with a literal
              DECLARE @AsOf datetime2(0) = @TimeLocal AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC';

              SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
                     SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS ValidStartTime,
                     SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS ValidEndTime
              FROM  Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOf;
              GO

              This returns:

              SalesOrderId Data                           ValidStartTime                     ValidEndTime
              ------------ ------------------------------ ---------------------------------- ----------------------------------
              1            0000000001                     2017-09-17 20:07:25 -04:00         9999-12-31 18:59:59 -05:00
              2            0000000002                     2017-09-17 20:07:25 -04:00         9999-12-31 18:59:59 -05:00

              Then you can enter any time in place of SYSDATETIME() in Eastern Standard Time, and it will work as desired. Note that the end time actually ends up in a different time zone offset than the start time. It is an impossibly large time in any case.

              If you are building an application that needs to run in any time zone, you could change this to parameterize the time zones:

              DECLARE @TimeZone sysname = 'Eastern Standard Time'; --This could even be a parameter to a stored procedure, or value in a table
              DECLARE @TimeLocal datetime2(0) = SYSDATETIME() --AT TIME ZONE will not work with a literal
              DECLARE @AsOf datetime2(0) = @TimeLocal AT TIME ZONE @TimeZone AT TIME ZONE 'UTC';

              SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
                     SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidStartTime,
                     SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidEndTime
              FROM  Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOf;
              GO

              This works great, and interestingly, it also works if you are using a time zone with a positive offset. The end time is always 9999-12-31 23:59:59 (plus a fractional part corresponding to the precision you set on the datetime2 datatype of the start and end times). Removing the SYSTEM_TIME AS OF, so we get back data (note I also tested this with datetime2(7) datatypes for the start and end times):

              DECLARE @TimeZone sysname = 'Romance Standard Time'; --This could even be a parameter to a stored procedure

              SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
                     SalesOrder.ValidStartTime,
                     SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidStartTimeLocal,
                     SalesOrder.ValidEndTime,
                     SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidEndTimeLocal
              FROM  Sales.SalesOrder;
              GO

              This returns:

              SalesOrderId Data        ValidStartTime       ValidStartTimeLocal         ValidEndTime         ValidEndTimeLocal
              ------------ ----------- -------------------- --------------------------- -------------------- ---------------------------
              1            0000000001  2017-09-18 00:07:25  2017-09-18 02:07:25 +02:00  9999-12-31 23:59:59  9999-12-31 23:59:59 +00:00
              2            0000000002  2017-09-18 00:07:25  2017-09-18 02:07:25 +02:00  9999-12-31 23:59:59  9999-12-31 23:59:59 +00:00

              The converted end time comes back with a zero offset, so it is technically the same point in time as it was before the AT TIME ZONE functions were applied (likely because shifting it forward past the maximum date would overflow), while the start time works fine.
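
              If that sentinel value bothers you, one sketch of a workaround (using the same table as above) is to only convert end times that are not the open-ended maximum:

              DECLARE @TimeZone sysname = 'Eastern Standard Time';

              SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
                     SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidStartTimeLocal,
                     CASE WHEN SalesOrder.ValidEndTime >= '9999-12-31'
                          THEN SalesOrder.ValidEndTime AT TIME ZONE 'UTC' --leave the "still current" sentinel in UTC
                          ELSE SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone
                     END AS ValidEndTimeLocal
              FROM   Sales.SalesOrder;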

            • Using AT TIME ZONE to manipulate and convert time zones

              I have a series of blogs about temporal tables in SQL Server 2016, which you can see here. However, one thing I never had mentioned was that the temporal row start and end times are always stored in UTC time (which is generally inconvenient to work with unless you are from the Great Britain area). So when you are looking at your data and want to see what the data looked like at "yesterday at 12:00" at the computer you are working from, this can be a bit more work.

              In SQL Server 2016, a new feature was added that totally passed me by: AT TIME ZONE. It is used to manipulate the time zone offset of a point in time value, either to append the time zone to a value, or to shift the time zone value of a DATETIMEOFFSET value. The most common practical use that I will show is converting a value in your local time to UTC (and vice versa).

              Appending the time zone offset to a time value

              The first use of AT TIME ZONE doesn't seem overly interesting for normal day to day use on the face of things. It adds time zone information to a variable. For example, consider the following point in time value:

              SELECT  SYSDATETIME() AS CurrentTime;

              CurrentTime
              ---------------------------
              2017-09-01 20:40:58.5931246

              Now, if you want to state that the time zone offset of a time value is for a given time zone, you can use AT TIME ZONE to append the offset (with the output type in this case being DATETIMEOFFSET(7)):

              SELECT  SYSDATETIME() AT TIME ZONE 'Central Standard Time' AS CentralTime;
              SELECT  SYSDATETIME() AT TIME ZONE 'Eastern Standard Time' AS EasternTime;

              CentralTime
              ----------------------------------
              2017-09-01 20:41:28.9633676 -05:00

              EasternTime
              ----------------------------------
              2017-09-01 20:41:28.9893637 -04:00

              The time zone can not only be a literal, but can also be an expression, so the following will work too:

              DECLARE @TimeZone1 NVARCHAR(200) = 'Central Standard Time'
              SELECT  SYSDATETIME() AT TIME ZONE @TimeZone1 ;
               

              This in and of itself is pretty cool. However, most of the time, I don't see many people using columns of DATETIMEOFFSET, just one of the typical point in time types like DATETIME2. Just adding the time zone is cool, but when you cast the values to a DATETIME2, the time zone is lost:

              DECLARE @timeValue datetime2 = SYSDATETIME(); --fetch a point in time

                     --cast the time in the Central Time Zone to datetime2 
              SELECT CAST(@timeValue AT TIME ZONE 'Central Standard Time' AS datetime2) AS CurrentTime,

                     --cast the time in the Pacific Time Zone to datetime2
                     CAST(@timeValue AT TIME ZONE 'Pacific Standard Time' AS datetime2) AS StillCurrentTime,

                     --compare the two
                     CASE WHEN CAST(@timeValue AT TIME ZONE 'Central Standard Time' AS datetime2) =
                             CAST(@timeValue AT TIME ZONE 'Pacific Standard Time' AS datetime2) THEN 'Match'

                     ELSE 'No Match' END AS Match;

              Which returns:

              CurrentTime                 StillCurrentTime            Match
              --------------------------- --------------------------- --------
              2017-09-01 21:16:20.1161830 2017-09-01 21:16:20.1161830 Match

              The offset times before you cast to the datetime2 would not match, if you compared them. For example:

              DECLARE @datetimeoffset1 datetimeoffset = GETDATE() AT TIME ZONE 'Eastern Standard Time';
              DECLARE @datetimeoffset2 datetimeoffset = GETDATE() AT TIME ZONE 'Central Standard Time';

              SELECT CASE WHEN @datetimeoffset1 < @datetimeoffset2 THEN 1 ELSE 0 END;

              will return 1, because the same clock time in the Eastern Time Zone is earlier than the same clock time in the Central Time Zone.

              Converting a time from one time zone to another

              The second, very practical, thing the feature does is to change the time zone of a value that already has a time zone offset. So consider the time:

              DECLARE @datetimeoffsetValue datetimeoffset(7) =
                              '2017-09-01 20:41:28.9633676 -05:00'

              This is one of the values used earlier, and this offset corresponds to the offset in the Central Time Zone.  To convert this to the Eastern Time Zone, you can use AT TIME ZONE again:

              SELECT @datetimeoffsetValue AT TIME ZONE 'Eastern Standard Time'

              ----------------------------------
              2017-09-01 21:41:28.9633676 -04:00

              Which is one clock hour later in the Eastern Time Zone (and the corresponding offset is one hour less, too). Casting the value to datetime2 strips off the time zone offset, and the two values would then look one hour different.
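
              As a small sketch of that (reusing the value from above), compare the clock times once the offsets are stripped:

              DECLARE @datetimeoffsetValue datetimeoffset(7) = '2017-09-01 20:41:28.9633676 -05:00';

              SELECT CAST(@datetimeoffsetValue AS datetime2(7)) AS CentralClockTime, --20:41:28...
                     CAST(@datetimeoffsetValue AT TIME ZONE 'Eastern Standard Time'
                          AS datetime2(7)) AS EasternClockTime;                      --21:41:28...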

              As I was doing some research trying to find examples of AT TIME ZONE, I found the following thread on Stack Overflow: https://stackoverflow.com/questions/36393742/using-at-time-zone-to-get-current-time-in-specified-time-zone

              Since the value in the Eastern Time Zone is a DATETIMEOFFSET value, you can simply use another AT TIME ZONE clause to convert it to a different time zone. For example, at 10:00 AM in the Eastern Time Zone, what time is it in the Pacific? (Note that you cannot use AT TIME ZONE on a string literal…)

              SELECT CAST('2017-01-01 10:00' AS datetime2) AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'Pacific Standard Time';

              It is 3 hours earlier:

              ----------------------------------
              2017-01-01 07:00:00.0000000 -08:00

              What makes this a very practical feature is that many programmers need to translate a time from their local time zone to UTC. For example:

              DECLARE @LocalTime datetime2 = '20170101 12:00';
              SELECT @LocalTime, @LocalTime AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC';

              This returns:

              --------------------------- ----------------------------------
              2017-01-01 12:00:00.0000000 2017-01-01 17:00:00.0000000 +00:00

              Because the time zone can be an expression, consider that you could build the following table, storing the time zone by name (or perhaps deriving the time zone from the geography of the customer):

              CREATE TABLE dbo.CustomerActivity
              (
                  CustomerId    int CONSTRAINT PKCustomerActivity PRIMARY KEY,
                  ActivityUtcTime datetime2,
                  TimeZoneName sysname
              );
              GO
              INSERT INTO dbo.CustomerActivity(CustomerId, ActivityUtcTime,TimeZoneName)
              VALUES(1,SYSUTCDATETIME(),'Eastern Standard Time'),(2,SYSUTCDATETIME(),'Pacific Standard Time'),
                    (3,SYSUTCDATETIME(),'UTC'),(4,SYSUTCDATETIME(),'AUS Eastern Standard Time'),
                    (5,SYSUTCDATETIME(),'Not A Time Zone'); --To force a failure

              Query the data, casting the data first to DATETIMEOFFSET (which is equivalent in this case to AT TIME ZONE 'UTC'):

              SELECT CustomerActivity.CustomerId,
                     CAST(CAST(ActivityUtcTime AS datetimeoffset(7)) AT TIME ZONE TimeZoneName AS datetime2(7)) AS ActivityTimeLocal,
                     CustomerActivity.TimeZoneName
              FROM   dbo.CustomerActivity;

              Then you can get back the times in the local time for the customer when you need it:

              CustomerId  ActivityTimeLocal           TimeZoneName
              ----------- --------------------------- ------------------------------
              1           2017-09-01 22:12:07.1144281 Eastern Standard Time
              2           2017-09-01 19:12:07.1144281 Pacific Standard Time
              3           2017-09-02 02:12:07.1144281 UTC
              4           2017-09-02 12:12:07.1144281 AUS Eastern Standard Time

              This lets you store the times as UTC, which is the typical desired way to store data when dealing with localities, particularly for events during the overlapping hour in the "Fall Back" part of Daylight Saving Time, where local clock times are ambiguous. Alternatively, you might store the data in your own local time zone and convert from that literal local value, rather than from UTC, to each customer's time zone.
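
              As a sketch of that alternative (hypothetically assuming the stored datetime2 values were my local Central time rather than UTC, in which case the column name would no longer quite fit):

              SELECT CustomerId,
                     CAST((ActivityUtcTime AT TIME ZONE 'Central Standard Time')
                                           AT TIME ZONE TimeZoneName AS datetime2(7)) AS ActivityTimeLocal,
                     TimeZoneName
              FROM   dbo.CustomerActivity;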

              Note that my row #5 will fail because of an invalid time zone:

              Msg 9820, Level 16, State 1, Line 52
              The time zone parameter 'Not A Time Zone' provided to AT TIME ZONE clause is invalid.

              You can see the valid time zone names in the following catalog view, which returns 135 rows on my system:

              SELECT * FROM sys.time_zone_info;

              One thing that is really interesting is that almost all of the time zone names are suffixed "Standard Time", and the same name is used even during Daylight Saving Time. There is no separate entry like "Eastern Daylight Time" (and no neutral "Eastern Time"); you use 'Eastern Standard Time' year round, and the daylight adjustment is applied for you.
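
              For example, the 'Eastern Standard Time' name automatically yields the daylight offset for summer dates:

              SELECT CAST('2017-01-15 12:00' AS datetime2) AT TIME ZONE 'Eastern Standard Time' AS WinterTime,
                     CAST('2017-07-15 12:00' AS datetime2) AT TIME ZONE 'Eastern Standard Time' AS SummerTime;

              WinterTime                         SummerTime
              ---------------------------------- ----------------------------------
              2017-01-15 12:00:00.0000000 -05:00 2017-07-15 12:00:00.0000000 -04:00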

            • Row Level Security and Indexed Views

              I was giving my Row Level Security session last weekend for the Richmond User Group (@RichmondSQL http://rva.pass.org/) and a question came up (ok, so I technically asked the question). How does the feature work with indexed views?

              Generally speaking, the answer turned out to be obvious. You can apply an RLS security policy to a view that has an index on it, but not to a table that is referenced by an indexed view.

              Example code:

              Create a table

              CREATE SCHEMA Demo;
              GO
              --the security objects live in a separate schema (created here so the example is self-contained)
              CREATE SCHEMA rowLevelSecurity;
              GO
              CREATE TABLE Demo.SaleItem
              (
                  SaleItemId    int CONSTRAINT PKSaleItem PRIMARY KEY,
                  ManagedByRole nvarchar(15), --more typically would be sysname, but nvarchar(15) is easier to format for testing
                  SaleItemType  varchar(10)
              );
              GO


              And a very simple predicate function

              CREATE OR ALTER FUNCTION rowLevelSecurity.ManagedByRole$SecurityPredicate_RLSView (@ManagedByRole AS sysname)
                  RETURNS TABLE
              WITH SCHEMABINDING --if schemabound, users needn't have rights to the function
              AS
                  RETURN (SELECT 1 AS ManagedByRole$SecurityPredicate); --works no matter what, for simplicity's sake
              GO


              Next, create a view. It must be schemabound, both for RLS and to be indexed.

              CREATE OR ALTER VIEW Demo.SaleItem_RLSView
              WITH SCHEMABINDING
              AS
                  SELECT SaleItemId, ManagedByRole, SaleItemType
                  FROM   Demo.SaleItem
              WITH CHECK OPTION;
              GO

              Then add an index to the view:

              CREATE UNIQUE CLUSTERED INDEX IndexedView ON Demo.SaleItem_RLSView (SaleItemId);

              Now, an attempt to add the policy to the table will fail:

              CREATE SECURITY POLICY rowLevelSecurity.Demo_SaleItem_SecurityPolicy
                  ADD FILTER PREDICATE rowLevelSecurity.ManagedByRole$SecurityPredicate_RLSView(ManagedByRole)
                          ON Demo.SaleItem
                  WITH (STATE = ON); --go ahead and make it apply
              GO

              Msg 33265, Level 16, State 1, Line 42
              The security policy 'rowLevelSecurity.Demo_SaleItem_SecurityPolicy' cannot have a predicate on table 'Demo.SaleItem' because this table is referenced by the indexed view 'Demo.SaleItem_RLSView'.


              But you can still add the policy to the view:

              CREATE SECURITY POLICY rowLevelSecurity.Demo_SaleItem_SecurityPolicy
                  ADD FILTER PREDICATE rowLevelSecurity.ManagedByRole$SecurityPredicate_RLSView(ManagedByRole)
                           ON Demo.SaleItem_RLSView
                  WITH (STATE = ON); --go ahead and make it apply
              GO

              Now the filter predicate will be applied to usage of the view, exactly like it is for a table (you cannot have BLOCK predicates on any kind of view). This and a few more tweaks have been made to the latest version of the presentation code, which can be accessed from the Dropbox folder link you can find on my webpage: http://www.drsql.org/presentations
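
              As a final sanity check, here is a quick sketch (using a hypothetical row; remember the demo predicate returns 1 for every row, so nothing is actually filtered):

              INSERT INTO Demo.SaleItem (SaleItemId, ManagedByRole, SaleItemType)
              VALUES (1, N'SomeRole', 'Test');

              SELECT COUNT(*) FROM Demo.SaleItem;          --the base table is not subject to the policy
              SELECT COUNT(*) FROM Demo.SaleItem_RLSView;  --the filter predicate is applied here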

            • Scenic City Summit is this Friday

              Scenic City Summit is a one-day conference in Chattanooga, TN. It is a new experience for me, although in a familiar, Devlink-esque setting: the Chattanooga Convention Center.

              I am looking forward to attending, and seeing a couple of fellow Microsoft MVPs do keynotes: Cory House and David Neal. I have never seen/met Cory House, and as he is doing the opening keynote, there is only a 30% chance I will actually make it there. David Neal, on the other hand, I have seen a few times at Music City Code and Devlink, and he and I also share a devotion to the culinary masterpiece known as Nashville Hot Chicken (my taste bud just twanged a bit thinking about it).

              But I digress.

              Scenic City Summit isn't a large conference, and is even smaller when it comes to database content. Under the topic of database, there are two sessions. One is by Eric Cobb (he is from Nashville, so I may know him already), called Building Better SQL Server Databases, which I plan to attend.

              Then my session:

              Implementing Row Level Security in SQL Server

              Since the early days of SQL Server, it has been possible to use basic security measures to allow (and disallow) user access to any object in a database, and even limit access to a given column. A more difficult task has been to limit access to the individual rows in a table. This has made application development more complex because whereas most every user may have access to a table like HR.Employee, not every employee will have access to every employee's data. In this session I will cover several techniques to limit access to rows, from methods that have existed since early versions of SQL Server, to the new Row Level Security feature in SQL Server 2016.

              What is interesting about this session for me is the time frame. At some conferences you get an hour and 15 minutes and have to stretch your topic to fit (or, to be fair, not hurry to get to the end because you have too much material to fill the typical hour). This presentation will be the other way: it is scheduled for 45 minutes. And since this will be more of a coder/developer type of audience, I really have no idea whether they will want to see 2016 topics, or how to do it with pre-2016 technologies. So I will have my 2 hours of material (all code in SSMS) ready, and see where things go.

              There are a few other sessions I am keen to attend, both from Gaines Kergosien (great guy who runs Music City Code), the first of which conflicts with Eric's session. But how can you not want to see a session on "7 Habits of Highly Paid Developers"? I work for a non-profit, but I am not dead inside. The one I do plan to make it to is called "From Developer to Data Scientist", as I am interested in the whole data scientist path; if not necessarily to become one, then to build better databases that support them.

            • 50ths… SQL Saturday and Life

              This week I will experience a couple of events associated with the number 50.

              SQL Saturday Atlanta

              The second of the two will be my 50th SQL Saturday event! (49 speaking, 1 attended in casual mode… my 50th speaking Saturday will be in August at SQL Saturday Louisville.) This weekend is SQL Saturday Atlanta. It is one of my favorite events, not just for the great people I have known for over 10 years, not just for the mid-major conference feel, not even because of the awesome theming. Well, maybe it is the theming, because I can say the rest of those things about all of the SQL Saturdays I have attended!

              And awesomely, for my 50th event I will be doing my good old standard database design session:

              Database Design Fundamentals

              Data should be easy to work with in SQL Server if the database has been organized as close as possible to the standards of normalization that have been proven for many years, but are often thought of as old-fashioned. Many common T-SQL programming "difficulties" are the result of struggling against these standards and can be avoided by understanding the requirements, applying normalization, as well as a healthy dose of simple common sense. In this session I will give an overview of how to design a relational database, allowing you to work with the data structures instead of against them. This will let you use SQL naturally, enabling the query engine internals to optimize your output needs without you needing to spend a lot of time thinking about it. This will mean less time trying to figure out why SUBSTRING(column,3,1) = 'A' is killing your performance, and more time for solving the next customer problem.

              This is by far the session I have done the most times through the years at SQL Saturday and PASS Summits, and it is the topic that got me into the whole speaking, writing, and SQL for a hobby/living situation I have lived for the past 20 years.

              Life

              But before I reach 50 SQL Saturdays, I have another milestone to scoot past: 50 years of life as of July 12, 2017. I have whined (er, written) about my travails over the past 20 years that make me feel old: gray hair, grandkids, hip replacements, knee problems, just to name a few. Reaching the decade of life in which my father passed away is mildly scary to me too (even if he died of an illness). Don't get me wrong, I am not afraid to pass away some day, but I am not looking forward to it either. (If I was, I wouldn't be consistently going to the gym for 50 minutes a day, and I would eat a lot more fried chicken from Chef Big Shakes!)

              While I certainly do not feel like I did when I was 20, I don't exactly feel old. Sure, I can't stand up without making that middle-aged groaning/grunting noise, but who can really? Sometimes I feel like just a big kid, and the SQL community is a big part of that, and the reasons are really twofold. First, in having a community of people who share SQL Server with me, my world has stretched a lot since I first attended a conference and didn't leave the hotel where it was held. Second, I have learned that all of the seemingly "kiddie" stuff I like to do is not abnormal. Some of my favorite things to do are:

              • Legos - I can see 7 sets of Legos from my desk without turning my head (and there are several more sprinkled around my home office).
              • Sci-Fi/Fantasy - I love Star Wars (movies, comics, TV shows), Marvel (movies and TV shows), Planet of the Apes (going to see the new trilogy on my birthday!), and Lord of the Rings (movies), to name just a few of my favorites.
              • Kids movies - Way more than 1/2 of the movies I go to see are animated, and the same can be said of those I stream.
              • Theme Parks - My wife and I plan our vacations like two pre-teenagers would. Resort time? Nah, that is taking up ride time.
              • Roller Coasters - I have some challenges with them, but if I can ride it, I will ride the heck out of it. I currently hold the record for number of times to ride the Tennessee Tornado at Dollywood in a day (52 times).

              Every one of these is something I have in common with some of the members of the SQL community, and I often discuss them on Twitter. Then add on the stuff I do in my "hobby" SQL community (learning, writing, blogging, tech editing books), and I don't exactly sleep 15 hours a day anymore.

              If I just had my father, mother, grandparents, and other (what were at the time) old people that I have known as my guide, I have no idea what I would do with my time. I love sitting down and watching copious amounts of TV, but something that makes watching TV better is that it is a break from doing way too much work. Looking back at my childhood, I have a much greater appreciation for just how hard my father had to work for a living compared to the amount of work I have to do as a data architect.

              What is kind of cool is that even the things that technically make me feel old help me feel young sometimes. Grandchildren make you sound old, but kids are generally fun. My oldest likes roller coasters as much as, if not more than, me. And while your own children getting older sounds bad, watching them grow up to be something awesome is worth getting older for. And hey, in 20-30 years she will get to become my parent, hopefully :)

              Future

              What does the future hold? Who knows? I still like working with SQL, and I love the SQL community. Travelling to 5+ events every year? Most likely; if they will still have me, I will continue to put in to speak. When I am tired of that, I will probably still show up. I do at our local user group, even when the topic doesn't seem like one I am initially keen on.

              Age is just a number, though as the saying goes, 50 is a big number. The only people who will tell you otherwise are either too young to know or too old to be trusted…

              Note, this is the second in my trilogy of personal blogs; the next will come later this year. Technical stuff will resume when I am finished with some editing work that is kicking my rear end. I don't want people to think I am just getting old and reflective; just some events kind of lead you to bring them up and discuss them. The SQL community has been a part of my life now for 20 years, and for the past 10 or so, it is not only growing, but becoming more tightly knit due to social media and blogs. I just want to say thanks :)

            • SQL Saturday Chattanooga is THIS WEEK!

              My goodness how time does fly. It seems like just a few months ago I was going to my hometown SQL Saturday in Nashville, and now here I go again to my NEW hometown SQL Saturday in Chattanooga, TN. Much like the Nashville event, I didn't really do anything in terms of organization of the event, but I will be speaking at the event this year (last year, Chattanooga was the event I just attended… I remember liking that feeling in the days leading up to the event, as a matter of fact.)

              I will be doing my Database Design Fundamentals session. Here is the abstract:

              Data should be easy to work with in SQL Server if the database has been organized as close as possible to the standards of normalization that have been proven for many years, but are often thought of as old-fashioned. Many common T-SQL programming "difficulties" are the result of struggling against these standards and can be avoided by understanding the requirements, applying normalization, as well as a healthy dose of simple common sense. In this session I will give an overview of how to design a relational database, allowing you to work with the data structures instead of against them. This will let you use SQL naturally, enabling the query engine internals to optimize your output needs without you needing to spend a lot of time thinking about it. This will mean less time trying to figure out why SUBSTRING(column,3,1) = 'A' is killing your performance, and more time for solving the next customer problem.

              This session is totally my "chestnut" session (the derivation of that expression is not what I thought it was!), as I have been doing a variation of it for 15 years or so, and some of the concepts have been around since the mid-eighties (when the music was REAL!)

              Larry Ortega (@SpoChatt) and team put on a great conference; I have enjoyed it every year, and he is the lead of our local user group as well. I hope to see you there… Convinced? Click here: https://www.sqlsaturday.com/624/RegisterNow.aspx

              Still not convinced?

              How can you resist attending SQL Saturday #624 on 6/24? It will be many 1000s of events before the event number can match the date like that again!

              If my session and the 6/24 thing aren't enough, check out the full schedule for, frankly, a heck of a group of speakers! http://www.sqlsaturday.com/624/Sessions/Schedule.aspx

              Still not convinced? Well then you are just too hard to please for me. Stay home and enjoy a rainy Saturday afternoon without SQL.

            • Father, Son, Daughter, and SQL

              I don't love Father's Day, mostly because I spoke to my father for the last time on Father's Day back in 1996. He wasn't feeling well, we exchanged a few pleasantries, and the words I remember were: "I don't feel well, let me give you to your mother." Better than the actual last words that Marshall got: "that fungus thing is acting up again;" but not as good as the words just before those: "I love you" (don't get the reference? check here.)

              As a son I was by no means the worst, but still I was not the best. My father loved me, much in the same way that a drill instructor loves a new set of recruits. He wanted the best for me, and while his discipline was very (very) strong (yes, he hit me with belts and switches, and I more than deserved it), the care part was there. He taught me to treat everyone equally. We tore down the fence of our neighbor, who was the only non-Caucasian person in the neighborhood, and I still remember the gist of his words, along with the geographic location of that belt utilization: "he has enough trouble just being different." Times have changed, but it is a lesson I still carry today, and not just with respect to a person's color or ethnicity. Love your neighbor.

              He was actually the one who got me into SQL, indirectly. When I was in high school, he had me take a class that I insisted was just for girls (I think there may have been a few football players in there too): typing. I was never going to type, much less for a living. Ha! (I also took home economics, which was a good thing too, though apparently it is now called family and consumer sciences.)

              Then, he helped me get my first computer, a Commodore 64, with the most horrible off-brand TV for a monitor. We were relatively poor at the time, him being between careers (after losing his management position at a car dealer to the owner's son, he quit, took about a year off, and ended up with a great position with the USDA Forest Service, managing their fleet), and it was a great inspiration over the years, after my brain realized all that had occurred. My love of computers started there, though I was still working on an engineering degree, which I failed at. He had always questioned if that was what I should do, and to be realistic, my head was not where it should have been (you can guess where it was at.) It took me time to get back in school, and finally to work on a Computer Science degree. The rest is more than I am going to write about here :)

              Though we were never close in a buddy buddy manner, I feel his influence even today. We were similar teachers. If you don't want to learn, we are terrible. He was a master mechanic, but could never teach me a thing about cars. He was, by all accounts, a great driving instructor in his job, and won many awards for the job he did. I love teaching people about SQL, because everyone who attends my sessions does it because they love the subject.

              The other part of Father's Day I do like is being a father, though it is difficult too. Am/Was I a great father? I don't know. I never felt like a bad father (embarrassing at times, for sure), but never "great". I don't know how some of the SQL community does it. Speaking, writing, consulting on the road; I applaud you if you can balance all that with being a parent. I remember sitting through many volleyball games, basketball games, and school musicals, writing/editing/planning my first book on my Palm Pilot.

              My daughter was 10, 11, and/or 12 then. She is not now, as one of my grandkids is 8! If we measured SQL programmers by whether their children became nerds too, then I did fail big time. I never did a great job teaching her much in the way of technical skills, and she never really cared about technical things. But my son-in-law is currently working in IT, and my 8 year old granddaughter loves math.

              Happy Father's Day, and thanks to all of the dads out there…

            • Utility to temporarily drop FOREIGN KEY constraints on a set of tables

              I was working on a database a few weeks back (maybe longer, I am kind of busy with several projects right now), and I wanted to truncate a set of tables. There was one central table, and 8 tables related by FOREIGN KEY constraints. The central table had 6 million rows, and a few of the other tables 20+ million. TRUNCATE is amazingly fast, and when I tried to just use DELETE, it took almost as long as it took me to load the tables. (Exaggeration? A little bit, but not as much as you might imagine, since I use SSIS to BULK LOAD these tables, and the source data is very simple.)

              I could just get rid of the constraints, but as a relational db lover, I love constraints. When enforced, they protect me. When not enforced (like when SSIS disables them for a BULK LOAD operation), they are still good documentation. So as any good nerd programmer type would, I started coding a tool to deal with the constraints for me. And as a decent blogger, as soon as it started to get interesting, I realized I could blog about it and upload the code to my website. This in the end makes the code better, because I have to test more, and I learn stuff from readers reminding me of things (like that an FK script needs to honor CASCADE and NOT FOR REPLICATION, oops.)

              I am not going to go over, or even paste, the code in this blog. What I will do is show the interface, and demonstrate how the code works. The interface for the tool is two stored procedures. The first is used to drop the foreign key constraints, storing the scripts for the constraints in a table that it creates named Utility.foreign_key$batch_drop_toRestore:

              utility.foreign_key$batch_drop
                  @table_schema sysname = '%', --lets you control what schema to drop FKs from
                  @table_name sysname = '%',  --lets you control the tables to drop FKs from
                  @add_to_history_flag BIT = 0, --by default the procedure creates a table to hold history, this parameter tells it to add to history, if you need to do things incrementally
                  @force_replace_status  VARCHAR(20) = 'AS_WAS' --Using the following values: ENABLED, UNTRUSTED, DISABLED, lets you force the status of the constraints, like to quickly turn on the constraint as UNTRUSTED

              Then, to recreate the foreign key constraints after you have done your business with the tables:

              utility.foreign_key$batch_recreate --if the utility.foreign_key$batch_drop_toRestore table exists, do what is in the table.
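
              As one example of that last parameter (a sketch; the demos later in this post force ENABLED instead), you might recreate the keys as untrusted to skip the data revalidation scan after a bulk load:

              --save the scripts forcing UNTRUSTED, do the work, then recreate
              EXEC utility.foreign_key$batch_drop @table_schema = 'Demo', @force_replace_status = 'UNTRUSTED';
              --...bulk load the Demo tables here...
              EXEC utility.foreign_key$batch_recreate;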

              Note: if your structures become out of sync with the data in utility.foreign_key$batch_drop_toRestore, you may have to manually apply scripts from the table and/or drop the table. The point of the tool is to drop the constraints, do some action, and add the FKs back. The more time that passes without applying the script, the more likely things are to get out of sync.

              To demonstrate their use, here are a few sample usages. The utility uses the code from my previous blog showing how to script a foreign key: http://sqlblog.com/blogs/louis_davidson/archive/2017/05/24/utility-to-script-a-foreign-key-constraint.aspx and you can download the code for this utility here: https://www.dropbox.com/s/3m9lghtfrnhhxgh/Utility-ForeignKey%24BatchDropRecreate.sql?dl=0

              USE TestRebuildConstraints
              GO

              --Recreate the tables, so we have a known state that can be compared to a script 
              DROP TABLE IF EXISTS Demo.ChildTable;
              DROP TABLE IF EXISTS Demo.ParentTable;
              DROP TABLE IF EXISTS Demo.GrandParentTable;

              CREATE TABLE Demo.GrandParentTable
              (  
                  GrandParentTableId INT NOT NULL
                      CONSTRAINT PKGrandParentTable PRIMARY KEY
              );

              CREATE TABLE Demo.ParentTable
              (
                  ParentTableId INT NOT NULL
                      CONSTRAINT PKParentTable PRIMARY KEY,
                  GrandParentTableId INT NULL,
                  CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                      FOREIGN KEY (GrandParentTableId)
                      REFERENCES Demo.GrandParentTable (GrandParentTableId)
              );

              CREATE TABLE Demo.ChildTable
              (
                  ChildTableId INT NOT NULL
                      CONSTRAINT PKChildTable PRIMARY KEY,
                  ParentTableId INT NULL
              );

              --an untrusted constraint
              ALTER TABLE Demo.ChildTable WITH NOCHECK
                ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
                     FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

              --adding a second constraint. Not typical (or a great idea) but good enough for this exercise
              --disabled constraint
              ALTER TABLE Demo.ChildTable WITH NOCHECK
                  ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
                      FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId)
                      ON DELETE CASCADE
                      ON UPDATE SET NULL;

              ALTER TABLE Demo.ChildTable
                  NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];
              GO

              This is the set of data to compare against in order to make sure that nothing has changed that we did not want to change.

              SELECT is_not_trusted, is_disabled, delete_referential_action_desc, update_referential_action_desc, name
              FROM sys.foreign_keys
              WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo'
              ORDER BY name;

              Which returns:

              is_not_trusted is_disabled delete_referential_action_desc update_referential_action_desc  name
              -------------- ----------- ------------------------------ ------------------------------- ------------------------------------------
              1              1           CASCADE                        SET_NULL                        ChildTable$ref$ParentTable_Disabled
              1              0           NO_ACTION                      NO_ACTION                       ChildTable$ref$ParentTable_NotTrusted
              0              0           NO_ACTION                      NO_ACTION                       ParentTable$ref$GrandParentTable_Enabled

              The basic goal of the procedure is something like the following: I want to run this set of statements on the tables I have created:

              TRUNCATE TABLE Demo.ChildTable;
              TRUNCATE TABLE Demo.ParentTable;
              TRUNCATE TABLE Demo.GrandParentTable;

              Which will fail (even if the constraint is disabled!)

              Msg 4712, Level 16, State 1, Line 52
              Cannot truncate table 'Demo.ParentTable' because it is being referenced by a FOREIGN KEY constraint.

              Now, run the following utility to drop the constraints:

              EXEC utility.foreign_key$batch_drop @table_schema = 'Demo';
              GO

              This creates a table in the utility schema that holds the scripts and object names:

              SELECT *
              FROM   Utility.foreign_key$batch_drop_toRestore

              Now, the TRUNCATE statements will execute.

              TRUNCATE TABLE Demo.ChildTable;
              TRUNCATE TABLE Demo.ParentTable;
              TRUNCATE TABLE Demo.GrandParentTable;
              GO

              Command(s) completed successfully.

              Execute the batch recreate procedure and it will restore the constraints.

              EXEC utility.foreign_key$batch_recreate;
              GO

              Check the foreign key metadata; it should match what you got from the first query of sys.foreign_keys. Next, I will demonstrate two other cases. I will use a TRANSACTION so we can do multiple tests without resetting our structures. In this example, I will show removing constraints one table at a time:

              --just remove constraints from ChildTable
              EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ChildTable';

              --attempting to drop FKs from a second table without @add_to_history_flag = 1 raises an error
              EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ParentTable';

              This causes the following error:

              Msg 50000, Level 16, State 1, Procedure foreign_key$batch_drop, Line 32 [Batch Start Line 164]
              Parameter @add_to_history_flag set to only allow initialize case

              Add the @add_to_history_flag = 1 parameter value, and it will work:

              --This works, adding an additional table
              EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ParentTable',
                  @add_to_history_flag = 1;

              --Now put the constraints back
              EXEC utility.foreign_key$batch_recreate;
              GO

              Finally, you can also change the constraints' status using the @force_replace_status parameter:

              --the script that is saved off will be for enabled constraints.
              EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo',@force_replace_status = 'ENABLED';
              GO
              EXEC utility.foreign_key$batch_recreate;
              GO

              Now check the metadata, and you will see the constraints are all trusted:

              SELECT is_not_trusted, is_disabled, delete_referential_action_desc, update_referential_action_desc, name
              FROM sys.foreign_keys
              WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo'
              ORDER BY name;

              Which should now return:

              is_not_trusted is_disabled delete_referential_action_desc update_referential_action_desc  name
              -------------- ----------- ------------------------------ ------------------------------- ------------------------------------------
              0              0           CASCADE                        SET_NULL                        ChildTable$ref$ParentTable_Disabled
              0              0           NO_ACTION                      NO_ACTION                       ChildTable$ref$ParentTable_NotTrusted
              0              0           NO_ACTION                      NO_ACTION                       ParentTable$ref$GrandParentTable_Enabled

            • Utility to script a FOREIGN KEY Constraint

              Note: Amended to include the word include in the first Note (I am a terrible editor of my own writing!) and to fix a bug in the code with a misplaced parenthesis
              Note: Amended to include cascading and NOT FOR REPLICATION.

              As noted in my previous post, I am in the middle of building a utility (for work, and for my next SQLBLOG post) that will help when you need to drop the foreign key constraints on a table, and then replace them without having to save off the script manually. In that post, I showed how to manually create a foreign key constraint in three possible states.

              Next, in order to create the utility to script the FOREIGN KEY constraints before dropping them, I need a way to script a constraint. In this post I present a user-defined function that generates the script for a foreign key constraint. The code is largely based on some code from Aaron Bertrand in this blog (with his permission, naturally!), with a few edits to script constraints as enabled, disabled, or untrusted if the source constraint was in that condition (or you can force the constraints to a certain status if you so desire as well.)

              In this blog entry, I present the code for this function. The code has some comments to illuminate most of what is going on, but I am not going to do too much coverage of the code. Just the code, and some test cases.

              utility.foreign_key$script (download the code here)

              CREATE DATABASE TestRebuildConstraints;
              GO
              USE TestRebuildConstraints
              GO

              IF DB_ID() = DB_ID('TestRebuildConstraints') --helps me not create stuff in master
                  EXEC('CREATE SCHEMA utility');
              GO

              CREATE OR ALTER FUNCTION utility.foreign_key$script(
                  @schema_name sysname,
                  @foreign_key_name sysname,
                  @constraint_status VARCHAR(20) = 'AS_WAS' --ENABLED, UNTRUSTED, DISABLED
                                                             --ANY OTHER VALUES RETURN NULL
              )
              --------------------------------------------------
              -- Use to script a foreign key constraint
              --
              -- 2017  Louis Davidson  drsql.org
              --   Thanks to Aaron Bertrand and John Paul Cook's code
              --------------------------------------------------
              RETURNS NVARCHAR(MAX)
              AS
              BEGIN
                  --based on code to gen list of FK constraints from this article by Aaron Bertrand:
                  --https://www.mssqltips.com/sqlservertip/3347/drop-and-recreate-all-foreign-key-constraints-in-sql-server/

                  --and code from John Paul Cook:
                  --https://social.technet.microsoft.com/wiki/contents/articles/2958.script-to-create-all-foreign-keys.aspx

                  DECLARE @script NVARCHAR(MAX);

                  IF @constraint_status NOT IN ('AS_WAS','ENABLED','UNTRUSTED','DISABLED')
                      RETURN NULL;

                  SELECT @script
                      =  N'ALTER TABLE ' + QUOTENAME(cs.name) + '.' + QUOTENAME(ct.name) + CHAR(13) + CHAR(10) + '   '
                          --code added to set the constraint's status if it is not to be checked (and
                          --in the case of disabled, you create it not trusted and disable it
                        + CASE
                              WHEN(is_not_trusted = 1
                                   OR fk.is_disabled = 1
                                    OR @constraint_status IN ( 'UNTRUSTED', 'DISABLED' ))
                                  --not forcing it to be enabled
                                   AND @constraint_status <> 'ENABLED' THEN
                                   'WITH NOCHECK '
                              ELSE
                                   ''
                          END
                        + 'ADD CONSTRAINT ' + QUOTENAME(fk.name) + CHAR(13) + CHAR(10) +
                        '      FOREIGN KEY ('
                        + STUFF((SELECT   ',' + QUOTENAME(c.name)
                                  -- get all the columns in the constraint table
                                  FROM     sys.columns c
                                          INNER JOIN sys.foreign_key_columns fkc
                                               ON fkc.parent_column_id = c.column_id
                                                  AND fkc.parent_object_id = c.object_id
                                  WHERE    fkc.constraint_object_id = fk.object_id
                                  ORDER BY fkc.constraint_column_id
                      FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'),1,1,N'')
                         + ')' + CHAR(13) + CHAR(10) + '         REFERENCES ' + QUOTENAME(rs.name) + '.' + QUOTENAME(rt.name)
                        + '('
                         + STUFF((SELECT   ',' + QUOTENAME(c.name)
                                  -- get all the referenced columns
                                 FROM     sys.columns c
                                          INNER JOIN sys.foreign_key_columns fkc
                                              ON fkc.referenced_column_id = c.column_id
                                                 AND fkc.referenced_object_id = c.object_id
                                 WHERE    fkc.constraint_object_id = fk.object_id
                                 ORDER BY fkc.constraint_column_id
                      FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'),1,1, N'') + ')'
                       + CASE fk.update_referential_action
                              WHEN 1 THEN CHAR(13) + CHAR(10) + '         ON UPDATE CASCADE '
                              WHEN 2 THEN CHAR(13) + CHAR(10) + '         ON UPDATE SET NULL '
                               WHEN 3 THEN CHAR(13) + CHAR(10) + '         ON UPDATE SET DEFAULT '
                              ELSE '' --could also say "no action" which is the default
                         END
                        + CASE fk.delete_referential_action
                              WHEN 1 THEN CHAR(13) + CHAR(10) + '         ON DELETE CASCADE '
                              WHEN 2 THEN CHAR(13) + CHAR(10) + '         ON DELETE SET NULL '
                               WHEN 3 THEN CHAR(13) + CHAR(10) + '         ON DELETE SET DEFAULT '
                              ELSE '' --could also say "no action" which is the default
                          END
                        + CASE fk.is_not_for_replication
                              WHEN 1 THEN CHAR(13) + CHAR(10) + '         NOT FOR REPLICATION '
                              ELSE ''
                           END
                        + ';'
                        + CASE
                              WHEN(fk.is_disabled = 1 AND @constraint_status IN ( 'DISABLED', 'AS_WAS' ))
                                   OR @constraint_status = 'DISABLED'
                                   THEN CHAR(13) + CHAR(10)+  CHAR(13) + CHAR(10)+   'ALTER TABLE ' + QUOTENAME(cs.name) + '.'
                                        + QUOTENAME(ct.name) + CHAR(13) + CHAR(10)
                                         + '   NOCHECK CONSTRAINT ' + QUOTENAME(fk.name) + ';'
                               ELSE
                                  ''
                          END
                  FROM   sys.foreign_keys fk
                         INNER JOIN sys.tables rt
                              -- referenced table
                             ON fk.referenced_object_id = rt.object_id
                         INNER JOIN sys.schemas rs
                              ON rt.schema_id = rs.schema_id
                         INNER JOIN sys.tables ct
                             -- constraint table
                             ON fk.parent_object_id = ct.object_id
                         INNER JOIN sys.schemas cs
                             ON ct.schema_id = cs.schema_id
                  WHERE  OBJECT_SCHEMA_NAME(fk.object_id) = @schema_name
                         AND fk.name = @foreign_key_name;
                  RETURN @script;
              END;

              Now, to test the code, I will create a few tables:

              --To test, I will create three tables (which will anchor the tests of the
              --drop and recreate utilities as well)
              CREATE SCHEMA Demo;
              GO
              CREATE TABLE Demo.GrandParentTable
              (  
                  GrandParentTableId INT NOT NULL
                      CONSTRAINT PKGrandParentTable PRIMARY KEY
              );

              CREATE TABLE Demo.ParentTable
              (
                  ParentTableId INT NOT NULL
                      CONSTRAINT PKParentTable PRIMARY KEY,
                  GrandParentTableId INT NULL,
                  CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                      FOREIGN KEY (GrandParentTableId)
                      REFERENCES Demo.GrandParentTable (GrandParentTableId)
              );

              CREATE TABLE Demo.ChildTable
              (
                  ChildTableId INT NOT NULL
                      CONSTRAINT PKChildTable PRIMARY KEY,
                  ParentTableId INT NULL
              );

              --an untrusted constraint
              ALTER TABLE Demo.ChildTable WITH NOCHECK
                ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
                    FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

              --adding a second constraint. Not typical (or a great idea) but good enough for this exercise
              --disabled constraint
              ALTER TABLE Demo.ChildTable WITH NOCHECK
                  ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
                      FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

              ALTER TABLE Demo.ChildTable
                  NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];
              GO

              Now, check that the constraints are as expected:

              SELECT is_not_trusted, is_disabled, name
              FROM sys.foreign_keys
              WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

              is_not_trusted is_disabled name
              -------------- ----------- ----------------------------------------------
              0              0           ParentTable$ref$GrandParentTable_Enabled
              1              0           ChildTable$ref$ParentTable_NotTrusted
              1              1           ChildTable$ref$ParentTable_Disabled

              Next, I will test the constraints in several different ways, sometimes using AS_WAS, and others forcing the different configurations:

              SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS Original;

              Original
              ------------------------------------------------------------------------
              ALTER TABLE [Demo].[ParentTable]
                 ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                    FOREIGN KEY ([GrandParentTableId])
                       REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);


              SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_NotTrusted','AS_WAS') AS Untrusted;

              Untrusted
              ------------------------------------------------------------------------
              ALTER TABLE [Demo].[ChildTable]
                 WITH NOCHECK ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
                    FOREIGN KEY ([ParentTableId])
                       REFERENCES [Demo].[ParentTable]([ParentTableId]);

              SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_Disabled','AS_WAS') AS Original;


              Original
              ------------------------------------------------------------------------
              ALTER TABLE [Demo].[ChildTable]
                 WITH NOCHECK ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
                    FOREIGN KEY ([ParentTableId])
                       REFERENCES [Demo].[ParentTable]([ParentTableId]);

              ALTER TABLE [Demo].[ChildTable]
                 NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];

              --note the intentionally misspelled status value, which causes the function to return NULL
              SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_Disabled','AS_WERS') AS Fails;

              Fails
              ------------------------------------------------------------------------
              NULL

              SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','UNTRUSTED') AS Untrusted;

              Untrusted
              ------------------------------------------------------------------------
              ALTER TABLE [Demo].[ParentTable]
                 WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                    FOREIGN KEY ([GrandParentTableId])
                       REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);

              SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','DISABLED') AS Disabled;

              Disabled
              ------------------------------------------------------------------------
              ALTER TABLE [Demo].[ParentTable]
                 WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                    FOREIGN KEY ([GrandParentTableId])
                       REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);

              ALTER TABLE [Demo].[ParentTable]
                 NOCHECK CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled];

              --Now we test cascade and not for replication

              --First UPDATE and DELETE cascading

              ALTER TABLE [Demo].[ParentTable]
                 DROP CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]

              ALTER TABLE [Demo].[ParentTable]
                 ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                    FOREIGN KEY ([GrandParentTableId])
                        REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
                        ON UPDATE CASCADE
                       ON DELETE SET DEFAULT;

              SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS CascadeTest;

              CascadeTest
              --------------------------------------------------------------------------------------
              ALTER TABLE [Demo].[ParentTable]
                 ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                    FOREIGN KEY ([GrandParentTableId])
                       REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
                       ON UPDATE CASCADE
                       ON DELETE SET DEFAULT ;


              --Next Add Not For Replication
              ALTER TABLE [Demo].[ParentTable]
                 DROP CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]

              ALTER TABLE [Demo].[ParentTable]
                  ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                     FOREIGN KEY ([GrandParentTableId])
                       REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
                       ON UPDATE CASCADE
                       ON DELETE SET DEFAULT
                       NOT FOR REPLICATION;

              SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS CascadeNotForRepTest;

              Note that NOT FOR REPLICATION makes the constraint not trusted:

              CascadeNotForRepTest
              ------------------------------------------------------------------------------------------
              ALTER TABLE [Demo].[ParentTable]
                 WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
                    FOREIGN KEY ([GrandParentTableId])
                       REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
                       ON UPDATE CASCADE
                       ON DELETE SET DEFAULT
                       NOT FOR REPLICATION ;
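
              A quick check against sys.foreign_keys confirms it:

              SELECT name, is_not_trusted, is_not_for_replication
              FROM   sys.foreign_keys
              WHERE  name = 'ParentTable$ref$GrandParentTable_Enabled';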

              Finally, to make sure that the code deals with composite key references (which I didn't really doubt, but hey, you really need to test this stuff, right?):

              CREATE TABLE Demo.MultiKey
              (
                  Column1 INT NOT NULL,
                  Column2 INT NOT NULL,
                  PRIMARY KEY (Column1, Column2)
              )
              CREATE TABLE Demo.MultiKeyRef
              (
                  Column1 INT NOT NULL,
                  Column2 INT NOT NULL,
                  FOREIGN KEY (Column1, Column2) REFERENCES Demo.MultiKey (Column1, Column2) --no name for key here
              )
                 
              First, we need to look up the system-generated name:

              SELECT is_not_trusted, is_disabled, name
              FROM sys.foreign_keys
              WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

              Now, execute the three different ways we can script:

              --note: your system-generated constraint name will almost certainly differ
              SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','AS_WAS') AS Original;
              SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','DISABLED') AS Disabled;
              SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','UNTRUSTED') AS Untrusted;

              Original
              ------------------------------------------------------------------
              ALTER TABLE [Demo].[MultiKeyRef]
                 ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
                    FOREIGN KEY ([Column1],[Column2])
                       REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

              Disabled
              ------------------------------------------------------------------
              ALTER TABLE [Demo].[MultiKeyRef]
                 WITH NOCHECK ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
                    FOREIGN KEY ([Column1],[Column2])
                       REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

              ALTER TABLE [Demo].[MultiKeyRef]
                 NOCHECK CONSTRAINT [FK__MultiKeyRef__31EC6D26];

              Untrusted
              ------------------------------------------------------------------
              ALTER TABLE [Demo].[MultiKeyRef]
                 WITH NOCHECK ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
                    FOREIGN KEY ([Column1],[Column2])
                       REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

              Hopefully this script can be of some use to you; I will use it in the next blog, where I build the utility to drop and recreate FOREIGN KEY constraints.

              (Note: this works with memory-optimized tables as well, since the syntax is the same.)
