THE SQL Server Blog Spot on the Web


Kalen Delaney

  • Thank you, SQLSaturday Holland!

    The first event on my Fall tour is behind me now. I had a wonderful time at SQL Saturday Holland. A couple of great questions came up that should lead to blog posts soon!

    Slides and scripts can now be downloaded from my DB Best blog site:

  • I'm Leaving on a Jet Plane

    I'm taking off tomorrow for Holland, UK, Denmark, Germany and Israel:

    You can read the post from DB Best, linked to above, or follow individual links to my public events from here:


    September 30: SQL Saturday Holland


    October 5: Getting the most out of the SQL Server Engine on Prem and in the Cloud

    Reading, UK
    This is a free, full-day workshop on some of the new features of SQL Server 2016. The workshop includes hands-on labs, and breakfast and lunch will be provided.

    October 7: SQL Saturday Denmark


    October 19: Getting the most out of the SQL Server Engine on Prem and in the Cloud 

    Tel Aviv, Israel
    This is a free, full-day workshop on some of the new features of SQL Server 2016. The workshop includes hands-on labs, and breakfast and lunch will be provided.

  • Geek City: Changing How To Change Your Database Properties — ALTER DATABASE

    (This article was originally published at )

    Long ago (in SQL Server years) many metadata changes were implemented with special one-off stored procedures. For example, we had sp_addindex and sp_dropindex, as well as sp_addtype, sp_droptype, sp_addlogin and sp_droplogin. For changing certain database properties, we had sp_changedbowner and sp_dbcmptlevel, to name a few.

    Gradually, Microsoft has started replacing these procedures with the more generic DDL commands ADD, DROP and ALTER. This is both good news and bad news. It’s good because these commands are more efficient and more predictable. It’s bad because the stored procedures were made up of T-SQL code that we could read and actually learn things from! I learned a lot of what I know about SQL Server metadata in the early days by reading the definitions of the system stored procedures.

    Some of the changes sort of snuck up on me. I knew for example that sp_dboption had morphed into ALTER DATABASE, but I just discovered recently that you don’t even need to specify the database name. You can use the word ‘current’ to indicate the current database. This change was introduced in SQL Server 2012, but no one told me.  For example, we can run the following:
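    A sketch of such a command, with an illustrative SET option (not necessarily the one from my original demo):

    ```sql
    -- 'current' stands in for the database name (SQL Server 2012 and later)
    ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;
    ```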


    However, this doesn’t apply to all options. And it’s not clear which ones.

    The general ALTER DATABASE documentation seems to imply that ‘current’ applies to all options, except for ones that aren’t even listed on the page:

    ALTER DATABASE


    Now, if you look at the documentation page for the ALTER DATABASE file options, it doesn’t show using ‘current’ at all.

    However, the commands that I’ve tested, including one to add a new filegroup and then one to add two files to that filegroup, both seem to work using ‘current’.
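    A sketch of what those commands might look like (filegroup, file names and paths are illustrative):

    ```sql
    ALTER DATABASE CURRENT ADD FILEGROUP FG2;

    ALTER DATABASE CURRENT
        ADD FILE (NAME = 'FG2_file1', FILENAME = 'C:\Data\FG2_file1.ndf'),
                 (NAME = 'FG2_file2', FILENAME = 'C:\Data\FG2_file2.ndf')
        TO FILEGROUP FG2;
    ```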

    And if you look at the page for the ALTER DATABASE SET options, there is a note:

    alter CURRENT database

    This message admits that using ‘current’ doesn’t work for every option, but it specifically doesn’t tell us which ones. It basically says “Try it and see!”.

    So what about changing the database owner? That is a database property and is viewable in sys.databases. However, changing the owner uses a completely different command: ALTER AUTHORIZATION.
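    As a sketch (database and login names are illustrative), changing the owner looks like this:

    ```sql
    -- ALTER AUTHORIZATION names the database explicitly
    ALTER AUTHORIZATION ON DATABASE::test TO sa;
    ```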

    Even though you might think that ALTER DATABASE is all you need to change any database property, it just isn’t so.

    And there’s more. Now in SQL Server 2016, you can change configuration options for a specific database. I’ll tell you about that feature next time.


    Have fun!


  • 24 Hours of PASS – Locking and Blocking Q&A

    Thanks to everyone who attended my 24HOP session on Wednesday, September 7. My favorite part is always the Q&A. I wish there were more time to do live Q&A, but because there’s not, I get to turn it into a blog post!

    My session was a snippet of what I will be talking about in my pre-con for the big PASS Summit at the end of next month.

    My full day session is called Locking, Blocking, Versions: Concurrency for Maximum Performance and you can read about it here.

    In my one-hour 24HOP session, I described the basic aspects of locking and then took a look at the main metadata view: sys.dm_tran_locks. This 45-minute session was just the tip of the iceberg of what I could tell you about locking and blocking and row versions. I could actually talk for a whole day on this topic! Which I am going to get to do, at the Summit, on October 25th.


    Here are the questions that came up during the session, and my answers to them.  (I have made a couple of grammar edits to some of the questions.)

    1. What does the M stand for in LCK_M_IS lock?

    When you look in the sys.dm_os_wait_stats view to see what processes are waiting for, the column wait_type will show a string starting with LCK_M if the wait type is a lock of some sort. (There are LOTS of other reasons for waiting that are not because of locks.) We saw LCK_M_SCH_M, LCK_M_X and LCK_M_IS during my demos. All lock waits start with these characters, and I was told that the M just means ‘mode’. So the three I listed are: lock mode schema modification, lock mode exclusive and lock mode intent shared.
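    A query along these lines (my own sketch, not one of the session scripts) shows the lock waits accumulated since the last restart:

    ```sql
    -- Lock-related wait types only; wait_time_ms includes signal wait time
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'LCK_M%'
    ORDER BY wait_time_ms DESC;
    ```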

    2. Are there any downsides to using RCSI when it comes to locking/blocking/concurrency?

    I actually didn’t really talk about RCSI in this short session, but I will discuss it a LOT in the full-day presentation. But since you asked, I’ll mention two downsides. First, any use of snapshot isolation (without the new memory-optimized tables) is going to demand a lot more from your tempdb database, so you need to make sure that tempdb can handle the load. Your tempdb needs to be big enough to handle all the versions that it will need to hold, and it should be on a nice, fast, smart drive system. Second, if you are using RCSI to avoid using the NOLOCK hint (a very noble goal, btw) you need to be aware that if you already are using NOLOCK in your application code, the NOLOCK will take priority over RCSI. NOLOCK allows SQL Server to read dirty (uncommitted) data. RCSI allows SQL Server to read the older committed data. With both NOLOCK and RCSI, SQL Server will continue to read the dirty data instead of reading the older committed data. SQL Server will still use resources to store and manage the old committed data in the version store, but it won’t ever be used.
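    As a sketch (the database name is illustrative), enabling RCSI and then keeping an eye on the version store’s footprint in tempdb might look like this:

    ```sql
    ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

    -- Approximate space reserved for the version store, in KB (pages are 8 KB)
    SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
    FROM tempdb.sys.dm_db_file_space_usage;
    ```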

    3. What was the hint to allow parallelism with bcp?

    The lock hint that allows parallel BCP operations is the -h "TABLOCK" hint. You can get more information in this article: Controlling Locking Behavior for Bulk Import
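    A sketch of a bulk import command using that hint (server, table and file names are illustrative):

    ```
    bcp testdb.dbo.test_table in test_data.bcp -S myserver -T -c -h "TABLOCK"
    ```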

    4. Will you be writing a follow-up book "Microsoft SQL Server 2016 Internals" with a Deep-Dive into Blocking, Locking, and Deadlocking?

    I will not be writing more of the big Internals books anytime soon. Microsoft Press is no longer publishing, and my experience with other publishers for big books like this has been less than optimal, to say the least. It takes almost 2 years to get such a book written, edited and published, and with a new version coming out every 2 years now, it just is not possible. I will be writing more of the shorter books, like the ones Red Gate publishes. I already have one on Locking and Blocking, which was written for SQL Server 2008. I also wrote one on In-memory OLTP which also looks at concurrency issues a lot, and how they are addressed with the new memory-optimized tables.

    5. Where can we get this presentation? Where are the scripts?

    I am attaching them to this blog post; the presentation PDF and the scripts are in a single ZIP file.

  • Geek City: My SQL Server 2016 RTM In-memory OLTP Whitepaper


    Finally, we have a download available.

    You can go to this page and scroll down to the section 'Technical Resources'. (NOT the section on the left called White papers).


    Click on “SQL Server 2016 In-memory OLTP technical white paper” and you can open or save.


    You can use this direct link: 





    P.S. And don’t forget I’ll be talking about In-memory OLTP for a whole day in San Antonio on August 12!

  • Did You Know: Windows Fast Startup is not really a StartUp!


    So you might already know, but I didn’t know, until I learned it, of course.

    My first Windows 8 machine was my Surface Pro 3 and I LOVED the way it started up so FAST. Fast is good, right? I didn’t even bother to wonder WHY or HOW it was so fast. I just thought Moore’s Law was hard at work.

    But then I noticed something very strange after I started doing most of my SQL Server testing on my Surface. Sometimes cache didn’t seem to be cleared. Sometimes temp tables would mysteriously be there right after starting up. Memory consumption was way too high. What was going on?

    Then I found it. The reason Windows 8 (and now 10) can start up so fast is that they’re really not shutting down completely. There’s an explanation here: 

    One of the things it says is:

    Instead of saving everything, Windows 8 saves just the OS kernel in a file on the hard drive and then uses it while booting up, speeding up the whole process considerably.

    And then there is an illustration that indicates that one of the things that gets started in a cold boot that doesn’t get started in this fast boot is services. And SQL Server is a service. So when I think I’m shutting down Windows, which includes shutting down SQL Server, I’m really not. The SQL Server service, with all the temp tables, plan cache, data cache and memory, is saved and then restored.

    Yeah, fast is good, but it’s not always what I want. If I’ve already started up and I really need to restart SQL Server, I can just restart the service from Management Studio. I even created a stored procedure to quickly tell me my approximate start time, so I can know if it’s been just a few minutes, or actually days since my SQL Server was last started:

    USE master;
    GO
    CREATE PROC dbo.sp_starttime AS
    SELECT create_date FROM sys.databases
    WHERE name = 'tempdb';
    GO

    Then I can just execute sp_starttime to see when my last restart was.


    As an alternative to stopping and restarting the SQL Server Service, I could do a Windows Restart. That option, as opposed to Windows Shut down, will actually stop everything, including all services.

    At first, I was so shocked that Windows really wasn’t shutting down when I asked it to, I wanted a way to turn it OFF. So I found this article:

    It says to go to Power Options and Choose What the Power Button Does.  You’ll get a screen like this one:


    The instructions just say to uncheck the box next to Turn on fast startup, but I found I wasn’t able to do that; the checkbox was inaccessible (greyed out). I finally found a really tiny message up near the top, that I have pointed to with a red arrow, that I needed to click on to enable the checkboxes. Then there is a Save changes button at the bottom, which I didn’t capture in the screen above.

    I did that. And lived with no fast startup for a while. But then my Surface broke, and I had to get a new one (yes, it was covered under my service agreement.) But after setting up the new machine, which came with the fast startup as default, I realized I had missed it. So for now I’m leaving it on. I just remember to restart the SQL Server service before I start any testing.


    (Check out my schedule to find out where I’ll be soon!
    I’m giving precons in Sacramento in July and San Antonio in August!)


  • Geek City: The Naming of Things


    As a teacher, and also a student of languages, one issue that’s always fascinating to me is how people choose to shorten a compound, or multi-part, name.  This is very problematic when the word is from another language.  For example, we use the word latte in English as a short form of caffè latte, but we shorten it to just the part that means ‘milk’, not coffee! My daughter was an exchange student in Spain and told me another example. Her host mother kept asking her if she had a ‘body’. The question was in Spanish, except for the word ‘body’ and my daughter just couldn’t figure out what she was talking about. The woman was confused thinking for sure my daughter should understand since it was an English word! Finally, they figured out the woman was referring to a bodysuit, but shortening it to something that really had no meaning on its own. In English, we have the words grape and grapefruit. I had a friend (a native Chinese speaker) who could just never remember which was which and frequently mentioned grapes when she meant grapefruit! And sometimes she did it the other way around and asked her native-English speaking husband to get her a bag of grapefruits, when she just wanted grapes. These are two very different things! I have more examples, but I’m getting a bit off track here.

    In SQL Server, we also have concepts that have compound names that are frequently shortened. For ‘isolation level’, you could just refer to ‘isolation’ and people would probably know what you mean. But if you just said ‘level’, your meaning might not be so obvious.

    And what about pairs of terms with different qualifiers, like clustered indexes and nonclustered indexes? If someone just says index, you usually need to know which kind.

    And there are other terms which have the same base, and only one of them has a qualifier. And that’s really why I decided to write this blog post.  In SQL Server 2005, Microsoft introduced snapshot isolation for read operations. There were two different levels of snapshots. The terms we use are SNAPSHOT and READ COMMITTED SNAPSHOT. As a teacher, I always found this very confusing, as did many of my students. People would say SNAPSHOT, but it was not at all clear if they really meant READ COMMITTED SNAPSHOT. So I started using an additional qualifier, so that both terms would be qualified. I started saying TRANSACTIONAL SNAPSHOT, but that was too cumbersome, so then in class I just started calling it FULL SNAPSHOT. I just wanted both terms to have some qualifier, so if someone just said SNAPSHOT, I could ask which one they meant.

    Then in SQL Server 2014, Microsoft introduced In-Memory OLTP, with two new kinds of indexes. One is called NONCLUSTERED HASH, and the other is just plain old NONCLUSTERED. And again, I found this unbalanced. If someone just says NONCLUSTERED, am I sure they’re just not shortening NONCLUSTERED HASH inappropriately? So I started using the word RANGE as the qualifier for the non-HASH index. Then we could talk about NONCLUSTERED RANGE and NONCLUSTERED HASH, or just shorten the names to RANGE and HASH. And when I’m teaching or writing about these indexes, I am always careful to point out that the word RANGE is not part of the syntax, it’s just a descriptor I use.
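    To make the distinction concrete, here is a sketch (SQL Server 2014 syntax; table, column and index names are illustrative) of a memory-optimized table declaring both kinds of index:

    ```sql
    CREATE TABLE dbo.OrderDemo
    (
        OrderID   int NOT NULL,
        ProductID int NOT NULL,
        Qty       int NOT NULL,
        CONSTRAINT pk_OrderDemo
            PRIMARY KEY NONCLUSTERED HASH (OrderID) WITH (BUCKET_COUNT = 1024),
        INDEX ix_ProductID NONCLUSTERED (ProductID)   -- a "range" index: no HASH keyword
    ) WITH (MEMORY_OPTIMIZED = ON);
    ```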

    However, one of the engineers at Microsoft was not happy with the fact that my writings used the term RANGE index. He said it wasn’t part of the spec, so I shouldn’t use it. It would just confuse people. He claimed Microsoft never used that term and I shouldn’t either. I argued. I explained my pedagogical reasons. He finally just shrugged and decided that I probably wasn’t adding any additional obfuscation and grudgingly admitted I might actually be making things clearer. But he wasn’t happy about it. He just didn’t want me using the term RANGE.

    Now SQL Server 2016 is coming out. I’m doing research on some of the new storage structures and new metadata. And I’m looking at a DMV called sys.dm_db_xtp_memory_consumers with a column called memory_consumer_desc. And one of the possible values for this column is ‘Range index heap’. I just checked, and the view is also available in SQL Server 2014, but I just never got around to exploring it until now. And if I create a memory-optimized table called OrderDetails2 on a SQL Server 2014 instance, with three RANGE indexes, and then run this query:

    SELECT object_name(object_id) as object_name,
           index_id, memory_consumer_desc
    FROM sys.dm_db_xtp_memory_consumers;

    I will get these rows of output (among others):


    So RANGE INDEX is not just a term I chose to use to make things easier for my students and my readers. Someone else, designing DMVs, also thought it was a useful term to use.

    I am just about done upgrading my In-memory OLTP Internals whitepaper for SQL Server 2016 RTM. Once that’s published, you’ll be able to read all the details of the various memory consumers used for memory-optimized tables. Hopefully, you won’t have to wait too long!


  • Geek City: SQL Server 2016 CTP3 In-Memory OLTP Internals Whitepaper


    Just a quick note that my CTP3 paper has finally been published, and now I can start working on the final version!

    Here’s the link:



  • Did You Know: My First New Venture

    As I mentioned in my previous post, I am heading in new directions, and the first goal has now been reached. Rather than working with training companies to offer my SQL Server Internals training, I am now partnering with Denny Cherry and Associates to offer a special opportunity to those wanting to experience a multi-day training event with me.

    Starting next April, we will be offering a three-day event called SQL Server Internals for Tuning, where you can learn about those areas of SQL Server Internals that will help you diagnose and tune problematic queries.

    The first event is scheduled for April 4-6, 2016 in Orange County, CA.

    You can get details, including the outline for the three day course, and a link for registration, here:

    And after that? As we’re working hard to prepare for the April event, we’re also starting to think about where the next event will take place. Boston? Denver? Orlando?  Or some place farther afield like Oslo or Darmstadt?  

    Let us know what you think! And I hope this new venture gives me a chance to meet some of my readers that haven’t yet had a chance to attend any training events with me.


  • Did You Know: The Times They are A-Changin’


    You know, when Microsoft announces ‘deprecation’ of a feature, there is no guarantee that the feature will be gone any time ‘soon’. All that an official notice of deprecation means is that “This feature will be removed in a future version of Microsoft SQL Server.” But it could be a far future version. SQL Trace, for example, was announced as deprecated in SQL Server 2008, when Extended Events first made their appearance, but SQL Trace, and its trusty sidekick SQL Profiler, are still around, even in the upcoming SQL Server 2016.

    So you might not have believed me, in the various forums, or in announcements from my training partners, when I said “This may be the last time I will be teaching my 5-day class in <insert-city-or-country-name-here>.” But it’s true: my 5-day SQL Server Internals class has been deprecated. I will be moving in new directions.

    Those directions are not yet completely set in stone, as I am still working out various possibilities. I will not be stopping all teaching, at least not for another whole bunch of years, but I will be reducing my travelling drastically. And the 5-day classes have a very limited future. Next year will probably be the last year for that class; and I may only teach it two or three times next year.  Here are some of the possibilities I am considering to replace that class:

    • I do have a 3-day reduced version of my class, focusing on data/index storage and query tuning internals, that I have offered as an onsite class for companies that just couldn’t have their database folks take 5 whole days for training. However, in the past that was never offered as a public class. That will change in the coming year.  I’ll announce here on this blog when I actually have the 3-day class scheduled.
    • I will also be starting to offer my shorter one- and two-day classes as on-site classes. If you have a group of DBAs or developers interested in any of my one-day seminars (most of which have been presented as precons for one conference or another), now is your chance. There is a link on my website that allows you to request information about on-site classes.
    • I am considering offering my 5-day internals class as an online class, spread out over many more than 5 days. As some of you know, I used to be a University instructor, at UC Berkeley, and there are things about that model of teaching that I really miss. I would love to just present a single topic at a time, talk about it IN-DEPTH for a couple of hours, and then give some homework exercises relating to that topic. A couple of days later, there would be another class, where we’d go over the homework, and answer any questions that came up (after actually having time to think about what was discussed) and then cover another topic. I think I could cover the same content as in my 5-day class over a period of perhaps 15-20 sessions. There are a lot of details still to be worked out before this can happen, including technical logistics, but I’ve wanted to do something like this for a long time.
    • Other possibilities include working in partnership with other companies, but these discussions are still in the very early stages.


    Of course, I could also win the lottery, and that would open up even more possibilities for what I might do in the future. But I’ve got enough to think about for now, even without that.


    Hopefully, I’ll see lots of you folks next week at the PASS Summit in Seattle!


  • Geek City: Q&A


    I’ve done it again… I’ve neglected my blog so long that my name has been removed from the list of SQLBlog contributors on the right-hand side of the main page.  So I need to fix that.

    I have been accumulating a list of questions that have come up during my 5-day SQL Server Internals classes, that I cannot immediately answer. Sometimes I can figure out the answer myself with just a bit of research or a simple test, and sometimes I can ask one of my Microsoft contacts for help. (Other times, I actually know the answer, but I just space out on it when asked, after being on my feet talking for 5 straight days.)

    I kept hoping that I would eventually get answers to ALL the questions on the list, but I realized I shouldn’t wait; I’ll just post what I have now. And that will get me back up on this blog site.


    1. Can memory-optimized tables use BPE (Buffer Pool Extensions)?

    No, BPE is only for data read from disk and stored in memory buffers. Memory-optimized tables do not use pages or buffers.


    2. What if a plan guide references a table that is specified in a query without a schema name, and there are two different tables, in different schemas, with the same name?

    For a plan guide of type ‘SQL’, all that matters is the TSQL-statement that is included in the plan guide. The same plan guide could be invoked when accessing different tables with the same name.


    3. Why does the Native Compilation Advisor complain about the EXEC command?

    In SQL Server 2014, EXEC is never allowed in Natively Compiled Procedures. In SQL Server 2016, we can EXEC a Natively Compiled Procedure or function from within a Natively Compiled Procedure, but we can’t EXEC a non-native module, and we can never EXEC a string.

    4. Can we replicate a memory-optimized table?

    Memory-optimized tables can be subscribers for transactional replication, but that is the only supported configuration. See this MSDN page for more details:


    5. Does auditing work on memory-optimized tables?


    6.  Are the values in sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats reset when an index is rebuilt?

    YES, just as if you had restarted your SQL Server Service.


    7. When do I need to use the option ‘scan for startup procs’?

    This configuration option is related to the procedure sp_procoption, which allows you to mark a stored procedure as a ‘startup’ procedure, to be executed every time your SQL Server service starts. However, the configuration option to tell SQL Server to scan for these procedures doesn’t seem too useful. When the first procedure is marked for startup, this option is turned on automatically, and when the last one is unmarked, it is turned off. So there is really no need for the configuration option, unless you suspect a problem and want to inhibit the startup procedures.
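    For reference, marking a procedure as a startup procedure looks roughly like this (the procedure name is illustrative):

    ```sql
    -- The procedure must live in master and take no parameters
    EXEC sp_procoption @ProcName    = N'dbo.MyStartupProc',
                       @OptionName  = 'startup',
                       @OptionValue = 'on';
    ```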


    8. When are SQL Trace and the SQL Profiler going away?

    They’re still available in SQL Server 2016 CTP 2.4, so I assume they will still be in SQL Server 2016 RTM. After that, I think no one knows yet.


    That’s all I’ve got for now.


  • Geek City: What rows have been updated under snapshot isolation?

    So here it is, the bits and bytes post I’ve been promising….

    This post is not about Hekaton (in-memory OLTP).

    This is about the snapshot-based isolation levels that were introduced in SQL Server 2005, which were called optimistic concurrency at the time. With the advent of Hekaton’s truly optimistic concurrency in SQL Server 2014, we’re not calling it optimistic concurrency anymore. Or maybe we’re calling it optimistic concurrency for reads. True optimistic concurrency has no locks at all, and with disk-based tables (i.e. not memory-optimized tables), SNAPSHOT isolation level and READ COMMITTED SNAPSHOT use locks for data modification operations.

    So DBSI (disk-based tables with one of the snapshot-based isolation levels) allows older committed data to be available by writing it to a part of tempdb called the version store.  There is a DMV that shows you the contents of the version store, and we’ll take a look at it shortly.

    The code below will create a new database called test (dropping it first if it already exists). Of course, you could just use an existing database of your own, which is why I also test for the existence of the table. The table has all fixed-length columns, mostly character, so it will be easy to recognize the row contents. After creating the table and inserting one row, I get the page number for the row using the undocumented fn_PhysLocFormatter function, and take a look at the row.

    USE master;
    IF db_id('test') IS NOT NULL
       DROP DATABASE test;
    CREATE DATABASE test;
    GO
    USE test;
    ALTER DATABASE test SET read_committed_snapshot OFF;
    IF object_id('test_table', 'U') IS NOT NULL DROP TABLE test_table;
    CREATE TABLE test_table
    ( Col1 char(5) NOT NULL,
      Col2 int     NOT NULL,
      Col3 char(3) NOT NULL,
      Col4 char(6) NOT NULL );
    INSERT test_table VALUES ('ABCDE', 123, 'Old', 'CCCC');
    SELECT sys.fn_PhysLocFormatter(%%physloc%%), * FROM test_table;

    Here is my result:


    I then look at the page indicated, so you may have to replace the 292 with whatever you got as a page number.

    DBCC TRACEON(3604);
    DBCC PAGE(test, 1, 292, 1);

    Here’s my row contents:


    It’s a bit out of scope for today to describe what the bits and bytes in the row actually mean. I’ve done it before, and every single bit is described in my SQL Server Internals book.

    I enable RCSI (READ COMMITTED SNAPSHOT ISOLATION) for the database, then update the row and look at it again. I also take a look at the version store, before it can be cleaned out.

    ALTER DATABASE test SET read_committed_snapshot ON;
    UPDATE test_table SET Col3 = 'New';
    DBCC PAGE(test, 1, 292, 1);
    SELECT * FROM sys.dm_tran_version_store;

    And here’s the new row:


    Notice that the row is now longer. The first 25 bytes are the same, except that the 3 bytes that used to contain ‘Old’ (4F6C64) now hold ‘New’ (4E6577). The new row is actually 14 bytes longer, which is the number of bytes that gets added to every row that gets involved with DBSI. There are 8 bytes for the transaction_sequence_number at which the row was updated, and 6 bytes for a pointer to a location in the version store where the previous version of the row can be found.

    And in the version store I see this:


    I’ve chopped down some of the column names, but if you run your own SELECT from the DMV, you can see the full names. The database ID is 10, so you can use the db_id() function to verify what database the row is from. The rowset_id is another name for the hobt_id, which is another name for partition_id. So once you get in the right database, you can verify the object with this query:

    SELECT object_name(object_id)
    FROM sys.partitions
    WHERE partition_id = 72057594040549376;

    The metadata also shows the minimum row length and the actual row length (25 bytes).  And that used to be as far as I went. In class, I would just say that the last column shown was an encoding of the row data. But I had never looked at it closer. Until recently. And then I realized, as you may have done already because of the queries I’ve just run, that the record_image_first_part contains exactly the same byte string as the original version of the row. And that original version of the row is what was saved in the version store when the row was updated. So it’s really not some mysterious ‘encoding’ of the row data… it IS the row data.  So we can not only see the database and the object that the changes are from, but we can also see the rows that have been updated.

    Be warned that the version store can become quite large, and it’s not always useful to be able to analyze every single row that has been updated. But sometimes, you just gotta know.

    So now you do.


  • Did You Know: LOB Parameters


    So still no juicy bits and bytes post, only this short one to tide me over.

    Another question from my Budapest class wasn’t really a question at all. It was pointing out that something was wrong in my slides! And this makes me think back to a blog post from a while back about feature changes that can be easily overlooked.

    In the section in my class on using LOB datatypes (text, ntext, image) vs the MAX types (varchar(MAX), etc.), I mention that you can’t pass LOB types as a parameter. It was pointed out to me that this ‘fact’ is not true. I was told they use text parameters all the time in their applications. However, when I sat down to test this out, I realized I needed to put the text data into something before passing it as a parameter, and I definitely can’t declare a variable of type text. So I need to use the varchar(max) type to hold the data to pass in, even though the procedure is created to accept a text type parameter. Here it is:


    USE test;  -- use the name of any testing database you have
    GO
    IF object_id('bigdata', 'P') IS NOT NULL DROP PROC bigdata;
    GO
    CREATE PROC bigdata (@a text)
    AS
      SELECT 'The input parameter is ', datalength(@a), ' bytes long';
    GO

    DECLARE @message varchar(max);
    SELECT @message = replicate (convert(varchar(max),'Halleluja '), 100000);
    EXEC bigdata @message;


    So now you (and I) both know!


  • Geek City: Point in Time Recovery for Operations on Memory-Optimized Tables

    In my class in Budapest last week, someone asked if point-in-time recovery was supported for transactions on memory-optimized tables.  I couldn’t think of a reason why not, but I had never tried it. And just because there is not a technical inhibitor to allow certain functionality, it is often the case with a new technology that certain features aren’t included just because it just hasn’t been tested sufficiently.

    But as I thought about it a bit more, I realized there was no way that point-in-time recovery could work for disk-based tables unless it also worked for memory-optimized tables. After all, the transactions are all stored in the same log.

    But, I decided to verify anyway.

    So I created an in-memory database (IMDB) and a memory-optimized table. (If you want to run the code, you’ll need a folder on your C: drive called HKData and one called HKBackups.  Or you can edit the code to change the paths.) I then did an initial database backup of the database with an empty table.

    USE master;
    GO
    CREATE DATABASE IMDB;
    GO

    ----- Enable database for memory-optimized tables
    -- add memory_optimized_data filegroup
    ALTER DATABASE IMDB
        ADD FILEGROUP IMDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

    -- add container to the filegroup
    ALTER DATABASE IMDB
        ADD FILE (NAME='IMDB_mod', FILENAME='c:\HKData\IMDB_mod')
        TO FILEGROUP IMDB_mod;
    GO
    USE IMDB;
    GO
    -- create a memory-optimized table
    CREATE TABLE dbo.t_memopt (
           c1 int NOT NULL,
           c2 char(40) NOT NULL,
           c3 varchar(100) NOT NULL,
           CONSTRAINT [pk_t_memopt_c1] PRIMARY KEY NONCLUSTERED HASH (c1)
            WITH (BUCKET_COUNT = 100000)
    ) WITH (MEMORY_OPTIMIZED = ON);
    GO
    -- initial full backup of the (still empty) database
    BACKUP DATABASE IMDB TO DISK = N'C:\HKBackups\IMDB-FULL.bak'
        WITH NOFORMAT, INIT, NAME = N'IMDB-Full',
             NOUNLOAD, STATS = 10;

    I then inserted a row into the table, waited a minute, grabbed the time, waited another minute, and inserted another row. I found that if I didn’t put the WAITFORs in, it would get a bit confusing about exactly where I wanted to stop restoring.

    INSERT t_memopt VALUES (1, 'a', REPLICATE ('b', 100));
    WAITFOR DELAY '0:1:0';
    SELECT getdate();
    WAITFOR DELAY '0:1:0';
    INSERT t_memopt VALUES (2, 'c', REPLICATE ('d', 100));
    SELECT * FROM t_memopt;

    I then noted the time captured with the getdate(), saw the two rows returned,  and then made a log backup.

    BACKUP LOG IMDB TO  DISK = N'C:\HKBackups\IMDB-log.bak'
        WITH NOFORMAT, INIT,  NAME = N'IMDB-2-inserts-in-log'

    Now I could test the restore.

    USE master;
    GO
    RESTORE DATABASE IMDB
        FROM DISK = 'C:\HKBackups\IMDB-FULL.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG IMDB
        FROM DISK = 'C:\HKBackups\IMDB-log.bak'
        WITH RECOVERY, STOPAT = '<insert getdate() value noted above>';

    SELECT * FROM t_memopt;

    When I saw just the one row returned, I knew the point-in-time recovery had worked!

    So now I can start working on a real juicy bits and bytes type blog post. Hopefully I’ll have it up next week.



  • Geek City: Creating New Tools for Statistics


    I was just starting to work on a post on column statistics, using one of my favorite metadata functions: sys.dm_db_stats_properties(), when I realized something was missing.

    The function requires a stats_id value, which is available from sys.stats. However, sys.stats does not show the column names that each statistics object is attached to. So how can I know which stats_id is the one I want?
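    Once you do have the stats_id, the call itself is simple; a sketch (the table name is illustrative), applied over every statistics object on a table:

    ```sql
    SELECT s.name AS stats_name, sp.last_updated, sp.rows,
           sp.rows_sampled, sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = object_id('dbo.test_table');
    ```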

    So I looked at sp_helpstats, and it shows the column names, but NOT the stats_id!  I spent some time trying to figure out how to translate the stats_name into the column name, so that I could just use sys.stats. For example, if you see a stats_name of _WA_Sys_0000000C_38B96646, the 38B96646 is the HEX representation of the object_id and the 0000000C is the HEX representation of the column_id. I can find the column name with this information, but it’s not quick and easy, especially not to generalize in TSQL code.
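    For a one-off lookup, the decoding described above can be done like this (using the example name from the post; the hex values would differ on your system):

    ```sql
    -- _WA_Sys_0000000C_38B96646: 0x38B96646 = object_id, 0x0000000C = column_id
    SELECT name
    FROM sys.columns
    WHERE object_id = CONVERT(int, 0x38B96646)
      AND column_id = CONVERT(int, 0x0000000C);
    ```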

    What DID turn out to be quick and easy was to rewrite sp_helpstats so that it returns the stats_id. It turned out to be an incredibly simple change. I just added an additional column to the temp table created in the procedure, and then added that column to the INSERT into the temp table and the SELECT from the temp table to return the final results. The cursor already fetched the stats_id column to use in its work; it just didn’t return it.

    Note that my script, which I am attaching, has not been thoroughly tested. It has been tested enough for my use when poking around the statistics, and using the sys.dm_db_stats_properties() function.

    Perhaps some of you will find this useful. And if not, at least I’ll know where to find the script next time I need it!

