SQLblog.com - The SQL Server blog spot on the web

Tibor Karaszi

  • Getting a functional Books Online

    As you probably know, the SQL Server documentation (aka Books Online or BOL) as of SQL Server 2012 takes us to a website. Searching and navigating a website when you want rapid and precise help is a horrible experience.

    • Fortunately, there's still a local BOL.
    • Unfortunately, the local BOL for SQL Server 2014 has been broken for a long time.
    • Fortunately, there is a way to fix it. 

    The trick is to not do it the way you are supposed to. At least not all steps. Here's how to get a functional local BOL:

    1. Download the help files from https://www.microsoft.com/en-us/download/confirmation.aspx?id=42557
    2. Unpack the files somewhere
    3. Open the Manage Help Settings application
    4. "Choose online or local help"
    5.  Select "I want to use local help"
    6. OK
    7. Do not select "Install content from Online". This is where you need to deviate from how you (generally) are supposed to do it.
    8. "Install content from disk"
    9. Point to wherever you unpacked the files in step 2 above
    10. "Add" all books
    11. "Update"

    So it is only a matter of downloading the files ourselves instead of letting the Help Library Manager ("Manage Help Settings") tool do it.

    If you wonder in what way the BOL is broken: it is the Table of Contents (TOC) that lacks a lot of topics (for instance, the T-SQL reference is missing a bunch of entries).

  • Restoreability and SSMS

    I have written about this before: how SSMS generates restore commands that will fail. This post is about showing what it might look like, using screenshots. If you always use T-SQL directly to do restores, then you won't be affected by this. But if you expect to be able to perform restores using the restore dialog, then read on.

    The problem
    The issue is that SSMS bases a restore sequence on backups that are impossible to restore from. There are two cases I have found:

    Copy only backups
    The purpose of the COPY_ONLY option for the backup command is to do an out-of-band backup. I.e., you do an ad-hoc backup as a one-time thing, restore it on your test server and then delete the backup file. The copy only option is your way of saying that you don't want this ad-hoc backup to affect your scheduled backups. We expect SSMS not to base the restore sequence on these backups, since it is likely that the backup file has been deleted.
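    As a minimal illustration of such an out-of-band backup (the database name and path are just placeholders, reusing the names from the test script further down):

    ```sql
    --Ad-hoc full backup that does not affect the differential base
    --or the scheduled backup strategy (name and path are examples)
    BACKUP DATABASE theBackupTestTib
    TO DISK = 'R:\theBackupTestTib_adhoc.bak'
    WITH COPY_ONLY, INIT;
    ```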

    Snapshot backups
    If you perform a snapshot of a virtual machine (or “checkpoint” as Hyper-V calls it) then this snapshot might interact with VSS in Windows so you get a consistent state of your SQL Server databases. I.e., when you restore/apply such a snapshot, your databases should be just fine. This is great, as long as it doesn’t mess up the SQL Server backups that we produce. It won’t. But the restore GUI in SSMS can be messed up by such a snapshot.

    The timeline dialog
    The problem seems to occur in the timeline dialog. In the restore GUI, there's a nifty timeline dialog where you can use a slider to graphically specify what point in time you want to restore the database to. The GUI uses the backup history information in msdb and, based on that, figures out what restore operations to perform. This is actually really nice and friendly, as long as it works. The problem is that when you specify an earlier point in time, it sometimes uses the incorrect full backup: a full backup which isn't possible to restore from.

     

    Examples:

     

    [Image: RestoreabilityCopyOnlyOK.PNG]

    Above, the GUI correctly skips a copy only backup. A full copy only backup was produced after the full backup listed above, but the GUI is smart enough to not base the restore sequence on this copy only backup. This is how it should work.

    [Image: RestoreabilityTimeline]

    [Image: RestoreabilityCopyOnlyNotOK]

    Above, the GUI incorrectly bases the restore on a copy only backup. After using the timeline dialog to point to an earlier point in time, you can see that the GUI has changed so that it bases the restore on this potentially non-existing copy only backup. Not a nice situation to be in if the person doing the restore hasn't practiced using the T-SQL RESTORE commands.

    [Image: RestoreabilitySnapOK]

    Above, the GUI correctly skips a snapshot backup. A snapshot backup using VSS was produced after the full backup listed above, but the GUI is smart enough to not base the restore sequence on this snapshot backup. This is how it should work.

    [Image: RestoreabilitySnapNotOK.PNG]

    [Image: RestoreabilitySnapNotOKTSQL]

    Above, the GUI incorrectly bases the restore on a snapshot backup. After using the timeline dialog to point to an earlier point in time, you can see that the GUI has changed so that it bases the restore on the snapshot backup. This is immensely bad, since the snapshot doesn't exist in SQL Server. It is just a placeholder so that SQL Server is aware that a snapshot was performed at that point in time. Look at the RESTORE command it produces!

    You might wonder how to tell whether something is producing VSS snapshots of your machine. You can see that in the errorlog file. Here are a few lines from the errorlog on a machine where I used Hyper-V to produce a proper VSS snapshot (edited for readability):

    2016-03-16 13:30:23.75      I/O is frozen on database Adventureworks2014.
    2016-03-16 13:30:23.75      I/O is frozen on database theBackupTestTib.
    2016-03-16 13:30:23.75      I/O is frozen on database master.
    2016-03-16 13:30:29.33      I/O was resumed on database master.
    2016-03-16 13:30:29.33      I/O was resumed on database theBackupTestTib.
    2016-03-16 13:30:29.34      I/O was resumed on database Adventureworks2014.
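    If you don't want to scan the errorlog by eye, a quick way to search it from T-SQL is the (undocumented but widely used) xp_readerrorlog procedure; a sketch, assuming the current errorlog file:

    ```sql
    --Search the current errorlog (file 0, log type 1 = SQL Server errorlog)
    --for the freeze messages that VSS snapshot backups produce
    EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O is frozen';
    ```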

    Bottom line
    You already know this. Practice doing restores – as realistically as possible and using different scenarios.

    Disclaimer: Things might change. The tests I did were performed using SSMS 2014 for the copy only backups and SSMS 2016 CTP 3.3 for the snapshot backups. I have seen this behaviour since SQL Server 2012, though. I wish that this will be fixed in a future version of SSMS, but considering that my requests have gone unnoticed before, I don't have much hope. But if you do see something different, let me know and I can re-test. Just make sure to add as many details as possible.

    I used my stored procedure, found here, to perform the backups. I made the T-SQL used for these tests a bit easier to read:

     

    --Show that restore based on backup history isn't possible
    --if the most recent full backup is missing, even if COPY_ONLY was used.

    --Create the BackupDbWithTs procedure first:
    --http://www.karaszi.com/sqlserver/util_backup_script_like_MP.asp

    --We will do full backups, diff backups and log backups.
    --In between, we will also do a COPY_ONLY full backup (think of it as an ad-hoc backup subsequently deleted).

    ---------------------------------------------------------------------------------------------
    --Drop and create the test database
    --NOTE: This will drop a database named theBackupTestTib if such exists!!!!
    ---------------------------------------------------------------------------------------------
    IF DB_ID('theBackupTestTib') IS NOT NULL
    DROP DATABASE theBackupTestTib
    GO

    CREATE DATABASE theBackupTestTib
    GO

    --Set to full recovery
    ALTER DATABASE theBackupTestTib SET RECOVERY FULL
    GO

    ---------------------------------------------------------------------------------------------
    --Create a table so we can make a modification in the database between each backup
    ---------------------------------------------------------------------------------------------
    IF OBJECT_ID('theBackupTestTib.dbo.t') IS NOT NULL DROP TABLE theBackupTestTib.dbo.t
    GO

    CREATE TABLE theBackupTestTib.dbo.t(c1 INT IDENTITY, c2 CHAR(30) DEFAULT 'Hello backup world')
    GO

    ---------------------------------------------------------------------------------------------
    --Declare and set variables.
    --Adjust to your liking.
    DECLARE
     @i INT = 1
    ,@time_in_between VARCHAR(20) = '00:01:00' --Time between each backup. 1 sec is fine to test that the restore GUI works.
                                               --Set higher if you want to use the timeline restore dialog and visually see your backups.
    ,@iterations INT = 2                       --Number of iterations. Adjust as you wish, but do at least two.
    ,@db SYSNAME = 'theBackupTestTib'
    ,@path VARCHAR(200) = 'R:\'                --Where to put your backup files. Delete backup files after you are finished.

    WHILE @i <= @iterations
    BEGIN
      INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES;

      EXEC master..BackupDbWithTs @db, @path, 'FULL', 'N', 'N', 'N'; INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES; WAITFOR DELAY @time_in_between
      EXEC master..BackupDbWithTs @db, @path, 'LOG ', 'N', 'N', 'N'; INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES; WAITFOR DELAY @time_in_between
      EXEC master..BackupDbWithTs @db, @path, 'LOG ', 'N', 'N', 'N'; INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES; WAITFOR DELAY @time_in_between

      --Perform the COPY_ONLY or snapshot backup now:
      EXEC master..BackupDbWithTs @db, @path, 'FULL', 'N', 'N', 'Y'; INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES; WAITFOR DELAY @time_in_between --COPY_ONLY

      EXEC master..BackupDbWithTs @db, @path, 'DIFF', 'N', 'N', 'N'; INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES; WAITFOR DELAY @time_in_between
      EXEC master..BackupDbWithTs @db, @path, 'LOG ', 'N', 'N', 'N'; INSERT INTO theBackupTestTib.dbo.t DEFAULT VALUES; WAITFOR DELAY @time_in_between
      EXEC master..BackupDbWithTs @db, @path, 'LOG ', 'N', 'N', 'N'; WAITFOR DELAY @time_in_between

      SET @i = @i + 1
    END


  • Misunderstandings about the COPY_ONLY backup option

    The COPY_ONLY option for the backup command never ceases to cause confusion. What does it really do? And in what way does it affect your restore sequence? Or not?

    There are two sides to this. Restoreability and how the GUI behaves:

    Restoreability

    If you specify COPY_ONLY for a full backup, it will not affect the following differential backups. I.e., the following differential backups will be based on the last full backup which was not performed with COPY_ONLY. Another way of looking at this is that a full backup using COPY_ONLY will not reset the DIFF (also known as DCM) page, which is page 6, repeated approximately every 4 GB in each database file.

    If you specify COPY_ONLY for a log backup, it will not affect the following log backups. I.e., the log backup chain will be intact, as if the one(s) produced using COPY_ONLY had never happened. Another way of looking at this is that a log backup using COPY_ONLY does not empty (truncate) the log.

    That's it! COPY_ONLY for a full backup does not, in any way, affect the restoreability for log backups. Say you perform

    FULL_A
    LOG_A
    LOG_B
    FULL_B
    LOG_C

    You can restore FULL_A, LOG_A, LOG_B and LOG_C, regardless of whether you used COPY_ONLY for FULL_B or not! I see misunderstanding about this all the time.
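    To make the point concrete, here is a sketch of restoring the sequence above with T-SQL (database and file names are placeholders); note that FULL_B is simply not part of the restore sequence, whether it was COPY_ONLY or not:

    ```sql
    --Restore FULL_A and then every subsequent log backup;
    --FULL_B does not participate (names are examples)
    RESTORE DATABASE theDb FROM DISK = 'R:\FULL_A.bak' WITH NORECOVERY, REPLACE;
    RESTORE LOG theDb FROM DISK = 'R:\LOG_A.trn' WITH NORECOVERY;
    RESTORE LOG theDb FROM DISK = 'R:\LOG_B.trn' WITH NORECOVERY;
    RESTORE LOG theDb FROM DISK = 'R:\LOG_C.trn' WITH RECOVERY;
    ```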

    The restore GUI

    And then we have the restore GUI. I really want to like the GUI, but it... keeps disappointing me. For various reasons. To the extent that it frequently becomes useless. I've blogged about it before. I know that many of you readers don't use the GUI when you perform restores. But there are so many "accidental" DBAs and less experienced DBAs who expect to be able to use the restore GUI. And they should! But they can't. Enough ranting, time for some details.

    Prior to SQL Server 2012, the restore GUI didn't include COPY_ONLY backups. The design reasoning was probably that you did an out-of-band backup and that backup file is no longer there (you deleted it after restoring to the test server, for instance). Sound reasoning, in my opinion.

    But, as of 2012, the restore GUI includes COPY_ONLY backups. So, say you do:

    FULL_A
    LOG_A
    LOG_B
    FULL_B using COPY_ONLY
    LOG_C

    Also, say that the backup file for FULL_B was deleted. You use the GUI and specify that you want to restore up until LOG_C. The restore GUI will now select FULL_B and LOG_C, which is a disaster since FULL_B was deleted (it was produced using COPY_ONLY). And you can't make the restore GUI base the restores on FULL_A. If you aren't practiced in typing your restore commands, your organization (boss?) will not be happy with you now.

    So, why did Microsoft make this change in 2012? My guess is that it is because of availability groups. If you perform a backup on a read-only replica, you need to use COPY_ONLY. Btw, we have the same situation if you did a VSS snapshot of the machine (which I blogged about before).

    Bottom line? You know this already. Learn how things work, and practice your restore scenarios. Nothing new here.

    (I typed everything above off the top of my head. If there is anything you don't agree with, then please be very specific. I don't mind discussions; I appreciate them. But to be fruitful, they need to be concrete. So, be specific, ideally including T-SQL commands and specifics on how you drive the GUI (if relevant). And remember that things can change over time (service packs and new versions of SSMS), so please include those details as well. :-) )

  • We are now recommended to install cumulative updates

    Microsoft just released a couple of CUs for SQL Server 2012. What is exciting is how Microsoft phrases their recommendations on whether we should install them or not. Below is from the KB of one of those recently released CUs:

    • Microsoft recommends ongoing, proactive installation of CUs as they become available:
    • SQL Server CUs are certified to the same levels as service packs and should be installed at the same level of confidence.
    • Historical data shows that a significant number of support cases involve an issue that has already been addressed in a released CU.
    • CUs may contain added value over and above hotfixes. This includes supportability, manageability, and reliability updates.

    Now, that is a pretty significant change from what they used to say. In addition, acquiring the CU is much easier. You just go to MS Download, select whether you want 32 or 64 bit, and then download the bits immediately.

    Check it out yourself, go to for instance https://www.microsoft.com/en-us/download/details.aspx?id=50731.

    Or check out the KB for a new SQL Server CU: https://support.microsoft.com/en-us/kb/3120313 (see the "Notes for the update" section).

  • Managing the errorlog file

    I frequently see recommendations to regularly run sp_cycle_errorlog, so that the errorlog doesn't become huge. My main concern with that is that the errorlog contains valuable information.

    When I do a health check on a SQL Server machine, I want a few months worth of errorlog information available. I typically use my own scripts for this, available here. Almost every time I go through the errorlog, I find valuable information. Some things you address, like find whatever it is that is attempting to login every minute. Other things you might not have control over, but the information is valuable to have.

    So, if you run sp_cycle_errorlog every day or week, you end up with only a week's worth, or a few weeks' worth, of errorlog information.

    Suggestion 1: Increase the number of errorlog files.

    You probably want more than 6 historic errorlog files. For instance, a client of mine patched a server a few days before I was to visit. That patch procedure resulted in enough restarts of SQL Server that we ended up with only 4 days' worth of errorlog files. Yes, this client had the default of 6 historic errorlog files. I typically increase this to 15. You can do this by right-clicking the "SQL Server Logs" folder under "Management" in SSMS. If you want to use T-SQL, you can use xp_instance_regwrite, as in:

    EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE'
    ,N'Software\Microsoft\MSSQLServer\MSSQLServer'
    ,N'NumErrorLogs', REG_DWORD, 15; 

    Suggestion 2: Set a limit for the size of the errorlog file.

    But what about the size? Say that we have crash dumps, for instance. Or other things that start to happen very frequently. The good news is that as of SQL Server 2012, we can set a max size for the errorlog file. There is no GUI for this, so we have to manipulate the registry directly. Again, we can use xp_instance_regwrite. Below will limit the size to 30 MB:

    EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE'
    ,N'Software\Microsoft\MSSQLServer\MSSQLServer'
    ,N'ErrorLogSizeInKb', REG_DWORD, 30720;

    With 15 files, you can patch your SQL Server machine without aging out all old errorlog files. And with a max size of 30 MB, you keep each file manageable in size. And you keep the total size of errorlog files for that instance to 450 MB. Not enough to fill your disks, but enough to have historical information for when you are about to perform a health check on your SQL Server instance.
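    To verify what the instance currently has in the registry, the companion procedure xp_instance_regread can read the same two values back; a sketch (the variable names are my own):

    ```sql
    --Read back the two errorlog settings written above
    DECLARE @NumErrorLogs INT, @ErrorLogSizeInKb INT;

    EXEC master.dbo.xp_instance_regread
         N'HKEY_LOCAL_MACHINE'
        ,N'Software\Microsoft\MSSQLServer\MSSQLServer'
        ,N'NumErrorLogs'
        ,@NumErrorLogs OUTPUT;

    EXEC master.dbo.xp_instance_regread
         N'HKEY_LOCAL_MACHINE'
        ,N'Software\Microsoft\MSSQLServer\MSSQLServer'
        ,N'ErrorLogSizeInKb'
        ,@ErrorLogSizeInKb OUTPUT;

    SELECT @NumErrorLogs AS NumErrorLogs, @ErrorLogSizeInKb AS ErrorLogSizeInKb;
    ```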

    References: this by Jan Kåre and this by Paul Randal.

  • Can you restore from your backups? Are you sure?

    A few days ago, we were doing restore practice drills with a client. I had tested the stuff before this, so the practice was more for the client's DBAs to test various restore scenarios, with me being able to point to the right direction (when needed), supplement the run-book and stuff like that. Always fun, I like these drills!

    Anyhow, this client does regular SQL Server backups to disk (full, and for some databases also log) at 19:00. They also snap the machines every night at 04:00. We don't want to have dependencies on the machine snap, but it is nice to have in case a machine is totaled and we can restore from such a snapshot. The issue is that this machine snapshot is seen as a full backup by SQL Server. We all know that a full backup does not affect the log backup chain, but the restore GUI doesn't care about that!

    So the restore GUI suggests that you restore from the 04:00 full backup (which isn't a restorable backup, as it was a snapshot) and then the subsequent log backups. What we need to do is restore from the earlier 19:00 full backup, and then all log backups, ignoring the 04:00 snapshot backup.
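    A sketch of what that hand-typed restore might look like (database name, times and file names are made up for illustration), going straight past the 04:00 snapshot:

    ```sql
    --Restore the 19:00 full backup, then every log backup across
    --the 04:00 snapshot, up to the desired point in time (names are examples)
    RESTORE DATABASE theDb FROM DISK = 'R:\theDb_full_1900.bak' WITH NORECOVERY, REPLACE;
    RESTORE LOG theDb FROM DISK = 'R:\theDb_log_2000.trn' WITH NORECOVERY;
    --...one RESTORE LOG per log backup, ignoring the snapshot entirely...
    RESTORE LOG theDb FROM DISK = 'R:\theDb_log_0900.trn'
    WITH STOPAT = '2016-03-16T08:45:00', RECOVERY;
    ```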

    Fortunately, my client by themselves (without my intervention) did the restore using T-SQL commands, knowing what backups exist and figuring out what to restore. But I also wanted them to test the GUI, just so they know what that looks like. Of course, you can do a restore from 19:00 to 03:55 and script that to a query window, then from 04:00 to the current time (or whatever) and script that too, and then stitch these together. But just typing (with some copy-paste) the commands is much easier.

    My point? Test your restores. Do not expect anything. A production situation is not the right time to try to figure these things and trying to cope with it.

    About this snapshot software: The next version is expected to have an option to produce the snapshot as a COPY_ONLY backup. Isn't that great? Now we expect the restore GUI to skip this COPY_ONLY backup, right? No, that was not what I saw. Having an option to produce the backup as COPY_ONLY finally allows us to implement differential backups, but (from my tests) it won't help with the restore GUI issues. Btw, here is a related post.

    Here's a query that might be helpful if you want to see what types of backups are produced. (I'd appreciate feedback from anybody who can see whether a snapshot backup sets 1 in the is_snapshot column; I don't have an environment to test at the moment...)


     

    SELECT TOP(100)
     bs.database_name
    ,CASE bs.type
        WHEN 'D' THEN 'Full'
        WHEN 'I' THEN 'Differential'
        WHEN 'L' THEN 'Log'
        WHEN 'F' THEN 'File or filegroup'
        WHEN 'G' THEN 'Differential file'
        WHEN 'P' THEN 'Partial'
        WHEN 'Q' THEN 'Differential partial'
     END AS backup_type
    ,bs.is_copy_only
    ,bs.is_snapshot
    ,DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS backup_time_sec
    ,mf.physical_device_name
    FROM msdb.dbo.backupset AS bs
     INNER JOIN msdb.dbo.backupmediafamily AS mf ON bs.media_set_id = mf.media_set_id
    ORDER BY bs.backup_finish_date DESC;

     

     

  • Ola Hallengrens Maintenance Solution now supports mirrored backup

    You probably know that you can mirror a backup to several destinations, assuming you are on a supported edition (Enterprise or Developer). This is not the same as striping; you can compare striping to RAID 0, and mirroring to RAID 1.

    Ola now supports mirroring in his maintenance solution, found here. A couple of examples:

    EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'C:\Backup',
    @MirrorDirectory = 'D:\Backup',
    @BackupType = 'FULL',
    @CleanupTime = 24,
    @MirrorCleanupTime = 48

    EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'C:\Backup,D:\Backup',
    @MirrorDirectory = 'E:\Backup,F:\Backup',
    @BackupType = 'FULL',
    @CleanupTime = 24,
    @MirrorCleanupTime = 48
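    Under the hood, the @MirrorDirectory parameter maps to the native MIRROR TO clause of the BACKUP command, which requires WITH FORMAT when a new mirrored media set is created; a minimal sketch (database name and paths are placeholders):

    ```sql
    --One backup written to two destinations; if either destination
    --fails, the whole backup fails (see the note below)
    BACKUP DATABASE theDb
    TO DISK = 'C:\Backup\theDb.bak'
    MIRROR TO DISK = 'D:\Backup\theDb.bak'
    WITH FORMAT;
    ```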

    Note that if any of the destinations is unavailable, then the backup fails for all destinations. SQL Server does not produce the backup to the ones that are available. This has nothing to do with Ola's solution; it is just how MS decided to implement backup mirroring.

     

  • Updated sp_indexinfo

    It was time to give sp_indexinfo some love. The procedure is meant to be the "ultimate" index information procedure, providing lots of information about all indexes in a database or all indexes for a certain table. Here is what I did in this update:

    • Changed the second query, which retrieves missing index information, so it generates the index name (based on schema name, table name and column names, limited to 128 characters).
    • Re-arranged and shortened column names to make the output more compact, and moved more commonly used columns to the right.
    • Uncommented some columns that were previously commented. (At least one, filter, has to be commented if you want to run this on 2005.)
    • Added support for columnstore indexes.
    • Decoded the type for columnstore indexes to col-store.

    You find the procedure here. 

  • Does your third-party backup software ruin your SQL Server backup plan?

    Sounds scary, huh? And it surely can be! Here’s an example of one such case:

     

    A client of mine had implemented a third-party backup solution in their VMware environment. This shouldn't affect the SQL Server backups. Or that was the plan. I won't mention what backup software this is, since I'm sure that there are other software products also doing strange things. And in the end, it is up to you to verify that your backup and restore strategy is solid. So let us just call this software "BACK".

     

    BACK performs a snapshot backup of the virtual machine. The client had scheduled this so it happened at about 4 am each day.

     

    Our SQL Server backups were implemented so that at 7 pm we did a full backup and every hour we did a log backup.

     

    Nothing strange here and these should be independent of each other, right? Wrong!

     

    When looking at the backup history, I realized that SQL Server would see the snapshot backup as a full backup. OK, that will possibly affect our plan to go for differential backups, but it shouldn't affect our transaction log backups. We all know that a database backup doesn't break or affect the log backup chain.

     

    But what I did find was that BACK performed a log backup immediately after the snapshot database backup. And the log backup was taken to the file name "nul". Yes, the binary wasteland.

     

    The end result of this was that the log backups we performed were usable only between 7 pm and 4 am. After that, the log backups are useless. No option to restore anything for the work performed during the working day. It would have been more honest if BACK had set the recovery model to simple instead. That would at least have given us a fairer chance to catch the problem (sooner than we did).

     

    Now, we did find an option in BACK to not truncate the log (or whatever they called the checkbox), but by default it did perform this disastrous log backup to nul.

     

    The next step was to consider implementation of the differential backup plan. But time was running out for my assignment, so we only managed a quick check. And from what we found, it seems that BACK doesn't have an option to produce its snapshot backup so that it is seen as a COPY_ONLY backup. This means that it prevents us from implementing SQL Server differential backups and saving some 500 GB of backup storage produced each week (for only one of the servers). Now, let me leave a disclaimer here, since I didn't have time to investigate this much at all, but this is what it looks like at the moment. I will dig deeper into the differential backup strategy next time I'm with this client.

     

    The moral of the story? You already know that. Test your restore strategy. You might just be (unpleasantly) surprised!

  • Using whoami to check for instant file initialization

    Read this if you want to learn more about instant file initialization (IFI). In an earlier blog post, I posted a script that uses trace flags, creates a dummy database and then uses sp_readerrorlog to check for IFI.

    Another option is to use the OS command whoami, as documented here. The script below uses whoami to check for IFI, or more precisely for the SQL Server service account having the "Perform Volume Maintenance Tasks" policy. It uses xp_cmdshell, meaning you have to be sysadmin in order to use it.


    IF OBJECT_ID('tempdb..#res') IS NOT NULL DROP TABLE #res
    SET NOCOUNT ON
    GO

    DECLARE
     @was_xp_cmd_on tinyint
    ,@was_show_adv_on tinyint
    ,@is_ifi_enabled tinyint

    --Exit if we aren't sysadmin
    IF IS_SRVROLEMEMBER('sysadmin') <> 1
    BEGIN
      RAISERROR('You must be sysadmin to execute this script', 16, 1)
      RETURN
    END

    --Save state for show advanced options
    SET @was_show_adv_on =
    (
     SELECT CAST(value_in_use AS tinyint)
     FROM sys.configurations
     WHERE name = 'show advanced options'
    )

    --Turn on show advanced options, if necessary
    IF @was_show_adv_on = 0
    BEGIN
      EXEC sp_configure 'show advanced options', 1
      RECONFIGURE
    END

    --Save state for xp_cmdshell
    SET @was_xp_cmd_on =
    (
     SELECT CAST(value_in_use AS tinyint)
     FROM sys.configurations
     WHERE name = 'xp_cmdshell'
    )

    --Turn on xp_cmdshell, if necessary
    IF @was_xp_cmd_on = 0
    BEGIN
      EXEC sp_configure 'xp_cmdshell', 1
      RECONFIGURE
    END

    --Capture the output of whoami /priv
    CREATE TABLE #res (col VARCHAR(255))

    INSERT INTO #res(col)
    EXEC xp_cmdshell 'whoami /priv /nh'

    SET @is_ifi_enabled =
    (
     SELECT CASE WHEN PATINDEX('%Enabled%', col) > 0 THEN 1 ELSE 0 END
     FROM #res
     WHERE col LIKE '%SeManageVolumePrivilege%'
    )

    IF @is_ifi_enabled = 1
      SELECT 'Instant file initialization is enabled'
    ELSE
      SELECT 'Instant file initialization is NOT enabled'

    --Reset state for xp_cmdshell
    IF @was_xp_cmd_on = 0
    BEGIN
      EXEC sp_configure 'xp_cmdshell', 0
      RECONFIGURE
    END

    --Reset state for show advanced options
    IF @was_show_adv_on = 0
    BEGIN
      EXEC sp_configure 'show advanced options', 0
      RECONFIGURE
    END
  • How often do you rebuild your heaps?

    Never? You are not alone. None of the maintenance solutions I use includes functionality to rebuild a heap, and that includes Maintenance Plans and Ola Hallengren's maintenance solution.

    "So why would you want to rebuild a heap, when it isn't sorted in the first place?", you ask. The answer is to get rid of Forwarding Pointers, and these can really hurt performance by adding lots more logical and physical reads, and random I/O. See for instance this from Kalen Delaney, this from Hugo Kornelis and this from the SQLCAT team.

    SQL Server 2008 added the REBUILD clause to the ALTER TABLE command. And this is what I'm using in my procedure rebuild_heaps, which rebuilds all fragmented heaps on a SQL Server.
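    For a single table, the rebuild itself is a one-liner; you can check for forwarding pointers first using sys.dm_db_index_physical_stats (the table name dbo.t is a placeholder):

    ```sql
    --Count forwarded records for a heap (index_id 0);
    --the column is only populated in DETAILED (or SAMPLED) mode
    SELECT forwarded_record_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.t'), 0, NULL, 'DETAILED');

    --Rebuild the heap, removing the forwarding pointers
    ALTER TABLE dbo.t REBUILD;
    ```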

    You find the procedure here: http://www.karaszi.com/SQLServer/util_rebuild_heaps.asp.

  • Setting max server memory

    If there is one server setting that is close to universally configured, then it is probably the "max server memory" setting. The setting is documented here. There are plenty of articles out there on this subject. The purpose of this article is for me to have somewhere to point when I get the question: "What value should I set this to?". I usually refer to Jonathan Kehayias' blog post when I get this question: http://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/. For starters you want a simple formula to begin with, and then some hints on what to monitor if you want to fine-tune the value. Jonathan's articles provide both. The simple formula for how much to reserve for the OS is:

    1 GB
    Plus 1 GB for every 4 GB in the machine, between 4 and 16 GB
    Plus 1 GB for every 8 GB in the machine, above 16 GB

    For example, a machine with 32 GB of RAM reserves 1 + 3 + 2 = 6 GB for the OS, leaving 26 GB for max server memory.

    And here's a TSQL script if you don't want to do the math yourself. Note that you need to specify how much memory you have in the machine.

     


    --Based on Jonathan Kehayias' blog post:
    --http://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/

    IF OBJECT_ID('tempdb..#mem') IS NOT NULL DROP TABLE #mem
    GO

    DECLARE
    @memInMachine DECIMAL(9,2)
    ,
    @memOsBase DECIMAL(9,2)
    ,
    @memOs4_16GB DECIMAL(9,2)
    ,
    @memOsOver_16GB DECIMAL(9,2)
    ,
    @memOsTot DECIMAL(9,2)
    ,
    @memForSql DECIMAL(9,2)
    ,
    @CurrentMem DECIMAL(9,2)
    ,
    @sql VARCHAR(1000)

    CREATE TABLE #mem(mem DECIMAL(9,2))

    --Get current mem setting----------------------------------------------------------------------------------------------
    SET @CurrentMem = (SELECT CAST(value AS INT)/1024. FROM sys.configurations WHERE name = 'max server memory (MB)')

    --Get memory in machine------------------------------------------------------------------------------------------------
    IF CAST(LEFT(CAST(SERVERPROPERTY('ResourceVersion') AS VARCHAR(20)), 1) AS INT) = 9
      
    SET @sql = 'SELECT physical_memory_in_bytes/(1024*1024*1024.) FROM sys.dm_os_sys_info'
    ELSE
       IF
    CAST(LEFT(CAST(SERVERPROPERTY('ResourceVersion') AS VARCHAR(20)), 2) AS INT) >= 11
        
    SET @sql = 'SELECT physical_memory_kb/(1024*1024.) FROM sys.dm_os_sys_info'
      
    ELSE
         SET
    @sql = 'SELECT physical_memory_in_bytes/(1024*1024*1024.) FROM sys.dm_os_sys_info'

    SET @sql = 'DECLARE @mem decimal(9,2) SET @mem = (' + @sql + ') INSERT INTO #mem(mem) VALUES(@mem)'
    PRINT @sql
    EXEC(@sql)
    SET @memInMachine = (SELECT MAX(mem) FROM #mem)

    --Calculate recommended memory setting---------------------------------------------------------------------------------
    SET @memOsBase = 1

    SET @memOs4_16GB =
      CASE
        WHEN @memInMachine <= 4 THEN 0
        WHEN @memInMachine > 4 AND @memInMachine <= 16 THEN (@memInMachine - 4) / 4
        WHEN @memInMachine > 16 THEN 3
      END

    SET @memOsOver_16GB =
      CASE
        WHEN @memInMachine <= 16 THEN 0
        ELSE (@memInMachine - 16) / 8
      END

    SET @memOsTot = @memOsBase + @memOs4_16GB + @memOsOver_16GB
    SET @memForSql = @memInMachine - @memOsTot
    SET @memForSql = @memInMachine - @memOsTot

    --Output findings------------------------------------------------------------------------------------------------------
    SELECT
         @CurrentMem AS CurrentMemConfig
        ,@memInMachine AS MemInMachine
        ,@memOsTot AS MemForOS
        ,@memForSql AS MemForSql
        ,'EXEC sp_configure ''max server memory'', ' + CAST(CAST(@memForSql * 1024 AS INT) AS VARCHAR(10)) + ' RECONFIGURE' AS CommandToExecute
        ,'Assumes dedicated instance. Only use the value after you verify it is reasonable.' AS Comment
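    The CommandToExecute column builds a statement of the following form. Keep in mind that 'max server memory' is an advanced option, so 'show advanced options' must be enabled before you can set it. The 12288 below is just a placeholder, not a recommendation; use the number the script calculates for your server:

    ```sql
    -- Enable advanced options so 'max server memory' can be configured
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Placeholder value; the unit is MB
    EXEC sp_configure 'max server memory', 12288;
    RECONFIGURE;
    ```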


    Edit 1 2014-03-06: Got the memory in the machine from sys.dm_os_sys_info, suggested by Ola Hallengren.

    Edit 2 2014-03-20: Adjusted script to work on 2008R2 and lower, as suggested by Shanky. Also added current mem config to output. Changed output from PRINT to SELECT (to facilitate multi-server query window).

    Edit 3 2014-03-22: Adjusted script to support 2005, as suggested by Steve Meder. Also changed to only one resultset.

    Edit 4 2014-05-30: Fixed some bugs for 2005, reported by Lee Linares.


  • Do you clean up your Database Mail log tables?

    Database Mail has a couple of log tables in the msdb database. These can become large over time. I've seen MSDB databases over 1 GB in size, where normal size is less than 50 MB (heavy usage of old SSIS deployment model excluded).

    Unfortunately, Maintenance Plans do not have built-in functionality for this, nor does Ola Hallengren's excellent maintenance solution ( http://ola.hallengren.com/ ). All you have to do is schedule an Agent job to be executed, say, every week, with one T-SQL job step containing: 

    DECLARE @DeleteOlder DATETIME

    SET @DeleteOlder = DATEADD(MONTH, -1, CURRENT_TIMESTAMP)

    EXECUTE msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @DeleteOlder

    EXECUTE msdb.dbo.sysmail_delete_log_sp @logged_before = @DeleteOlder

    The above removes mail history older than one month. Adjust the retention to your liking by changing the arguments to the DATEADD function.

    As always, remember to comment your job and to specify an appropriate database for the T-SQL job step (msdb in this case, for documentation purposes). 
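    If you prefer to script the job rather than create it in SSMS, a minimal sketch using the msdb Agent procedures could look like the following. The job, step and schedule names are made-up examples; adjust them and the schedule to your environment:

    ```sql
    USE msdb;

    -- Create the job (name is an example)
    EXEC dbo.sp_add_job @job_name = N'Purge Database Mail log';

    -- One T-SQL job step deleting mail history older than one month
    EXEC dbo.sp_add_jobstep
         @job_name = N'Purge Database Mail log'
        ,@step_name = N'Delete old mail items and log'
        ,@subsystem = N'TSQL'
        ,@database_name = N'msdb'
        ,@command = N'DECLARE @DeleteOlder DATETIME
    SET @DeleteOlder = DATEADD(MONTH, -1, CURRENT_TIMESTAMP)
    EXECUTE msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @DeleteOlder
    EXECUTE msdb.dbo.sysmail_delete_log_sp @logged_before = @DeleteOlder';

    -- Run once a week (Sunday at 02:00)
    EXEC dbo.sp_add_schedule
         @schedule_name = N'Weekly, Sunday 02:00'
        ,@freq_type = 8              -- weekly
        ,@freq_interval = 1          -- Sunday
        ,@freq_recurrence_factor = 1 -- every week
        ,@active_start_time = 020000;

    EXEC dbo.sp_attach_schedule
         @job_name = N'Purge Database Mail log'
        ,@schedule_name = N'Weekly, Sunday 02:00';

    -- Register the job on the local server
    EXEC dbo.sp_add_jobserver @job_name = N'Purge Database Mail log';
    ```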

  • Do you want improved performance?

    Can you survive a few lost transactions if your server does a "hard shutdown"? If so, check out SQL Server 2014 and "Delayed Durability".

    A cornerstone of SQL Server's transaction handling has, up until SQL Server 2014, been durability for committed transactions. Durability is, by the way, the "D" in the ACID acronym: Atomicity, Consistency, Isolation and Durability.

    Durability means that SQL Server has to perform a synchronous write to the LDF file for each committed transaction. This is so that SQL Server can re-construct all committed transactions up until the point of a (potentially hard) shutdown. 

    In SQL Server 2014, Microsoft introduced a database setting called "Delayed Durability". Setting this means that SQL Server can batch writes to the ldf file, which can mean significantly improved performance for applications with many small transactions.
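    For reference, the setting is changed with ALTER DATABASE (YourDb below is a placeholder name). With ALLOWED, individual transactions opt in at commit time:

    ```sql
    -- Database level: DISABLED (the default), ALLOWED or FORCED
    ALTER DATABASE YourDb SET DELAYED_DURABILITY = FORCED;

    -- With ALLOWED set on the database, a single transaction can opt in:
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
    ```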

    I did a quick test, using a benchmark from an earlier blog post of mine (http://sqlblog.com/blogs/tibor_karaszi/archive/2008/08/14/are-inserts-quicker-to-heap-or-clustered-tables.aspx) to see what difference I would get for that workload. Roughly (for 50000 rows, on a PC with a single spinning-disk HD):

    All inserts in one transaction averaged about 0.3 seconds.

    One transaction per row with Delayed Durability set to OFF approx 12 seconds. 

    One transaction per row with delayed durability set to Forced approx 1.2 seconds. 

    As you can see, for this workload we got about a tenfold performance improvement by letting SQL Server batch the write operations to the ldf file. The question is how much improvement you get for your workload, and whether you can tolerate losing some modifications in case of a hard shutdown. 

  • Check for Instant File Initialization

    Instant File Initialization, IFI, is generally a good thing to have. Check out this earlier blog post of mine if you don't know what IFI is and why it is a good thing: blog. The purpose of this blog post is to provide a simple script you can use to check whether you have IFI turned on.

    Note that the script below uses undocumented commands, and might take a while if you have a large errorlog file...

     
    USE master;
    SET NOCOUNT ON;

    -- *** WARNING: Undocumented commands used in this script !!! *** --

    --Exit if a database named DummyTestDB exists
    IF DB_ID('DummyTestDB') IS NOT NULL
    BEGIN
      RAISERROR('A database named DummyTestDB already exists, exiting script', 20, 1) WITH LOG
    END

    --Temp table to hold output from sp_readerrorlog
    IF OBJECT_ID('tempdb..#SqlLogs') IS NOT NULL DROP TABLE #SqlLogs
    GO
    CREATE TABLE #SqlLogs(LogDate datetime2(0), ProcessInfo VARCHAR(20), TEXT VARCHAR(MAX))

    --Turn on trace flags 3004 and 3605
    DBCC TRACEON(3004, 3605, -1) WITH NO_INFOMSGS

    --Create a dummy database so the file-zeroing messages end up in the SQL Server errorlog
    CREATE DATABASE DummyTestDB
    GO

    --Turn off trace flags 3004 and 3605
    DBCC TRACEOFF(3004, 3605, -1) WITH NO_INFOMSGS

    --Remove the dummy database
    DROP DATABASE DummyTestDB;

    --Now check the output in the SQL Server errorlog file
    --This can take a while if you have a large errorlog file
    INSERT INTO #SqlLogs(LogDate, ProcessInfo, TEXT)
    EXEC sp_readerrorlog 0, 1, 'Zeroing'

    IF EXISTS(
              SELECT * FROM #SqlLogs
              WHERE TEXT LIKE 'Zeroing completed%'
                AND TEXT LIKE '%DummyTestDB.mdf%'
                AND LogDate > DATEADD(HOUR, -1, SYSDATETIME())
             )
    BEGIN
      PRINT 'We do NOT have instant file initialization.'
      PRINT 'Grant the SQL Server service account the ''Perform Volume Maintenance Tasks'' security policy.'
    END
    ELSE
    BEGIN
      PRINT 'We have instant file initialization.'
    END
