THE SQL Server Blog Spot on the Web


Allen White

  • PowerShell Precon session at SQL Connections

    Yesterday I had the privilege of presenting the full day training session SPR304-Automate and Manage SQL Server with PowerShell at the SQL Connections conference in Las Vegas.  The session went very well (at least from my perspective) and I think the attendees enjoyed it as well.

    Just the day before the session I got excited about some features of PowerShell I hadn't played with much and decided to add a discussion of them to the presentation, so the printed material the conference provided doesn't include that discussion.  (I also had a minor amount of demo-fail, so I've corrected those scripts as well.)

    I've attached the presentation deck and all the demo scripts to this post, but please, don't ask me for the password. If you were in the room yesterday you know what it is, and as the beer commercial says, "this one's for you."

    Thanks for attending!


  • Speaking About SQL Server

    There's a lot of excitement in the SQL Server world right now, with the RTM (Release to Manufacturing) release of SQL Server 2012, and the availability of SQL Server Data Tools (SSDT). My personal speaking schedule has exploded as well. Just this past Saturday I presented a session called Gather SQL Server Performance Data with PowerShell.

    There are a lot of events coming up, and I hope to see you at one or more of them. Here's a list of what's scheduled so far:

    First, I'll be presenting a session called Automate Policy-Based Management Using PowerShell at the 24 Hours of PASS on March 21.

    The Policy-Based Management feature in SQL Server 2008 provides a great way to ensure your systems are configured consistently and correctly, but it can be tedious to implement on each server in your environment. PowerShell scripts let you automate the implementation of your policies so that you can focus on more important problems. This session walks you through how PBM works, how to define your policies in PowerShell, and how to set up Agent jobs that evaluate those policies regularly and let you know when you need to take action.


    The following week, I'm really excited to be one of the featured presenters at the SQL Connections conference in Las Vegas. This event is also one of the official launch events for SQL Server 2012. At this event I'll be presenting an all day workshop called SPR304: Automate and Manage SQL Server with PowerShell. With PowerShell becoming more and more a key tool in managing all server systems, this workshop will get you up to speed quickly, ready to take control of SQL Server.

    This soup-to-nuts all day workshop will first introduce you to PowerShell, after which you’ll learn the basic SMO object model, how to manipulate data with PowerShell and how to use SMO to manage objects. We’ll then move on to creating Policy-Based Management policies, work with the Central Management Server, manage your system inventory and gather performance data with PowerShell. We’ll wrap up with a look at the new PowerShell cmdlets introduced for SQL Server 2012 and how you can use PowerShell to manage SQL Server 2012 in server environments including the new Always On technology and Windows Server Core. After this one day, you’ll be ready to go to work and able to use PowerShell to make you truly effective.
    In addition, I'll be doing three regular conference sessions.
    SQL226: Scalable Application Design with Service Broker. How your business applications interact with the corporate database directly can have a dramatic impact on how successful you are, and one of the biggest issues is performance with large databases. By designing the application to use asynchronous messaging for non-critical updates you can focus performance concerns to where the timeliness is critical. SQL Server Service Broker handles the asynchronous requests quickly and efficiently, and minimizes the conflicts with your critical updates. This session will walk you through the steps to get Service Broker up and running and providing a consistent delivery of your business data.
    SQL325: Maintain SQL Server System and Performance Data with PowerShell. Maintaining a solid set of information about our servers and their performance is critical when issues arise, and often helps us see a problem before it occurs. Building a baseline of performance metrics allows us to know when something is wrong and helps us to track it down and fix the problem. This session will walk you through a series of PowerShell scripts you can schedule which will capture the most important data and a set of reports to show you how to use that data to keep your server running smoothly.
    SQL427: Manage SQL Server 2012 on Windows Server Core with PowerShell. Windows Server 2008 introduced Server Core, the operating system without the Graphical User Interface. SQL Server 2012 is the first version of SQL Server to support Server Core. To gain the maximum efficiency from your servers you want to make use of Server Core, and with PowerShell you can effectively manage SQL Server in that environment. This session will demonstrate how to manage SQL Server 2012 on Windows Server Core and provide scripts to help you perform the common tasks you need to perform on that platform.


    On April 14, I'll be at SQL Saturday 111 in Atlanta, presenting the Manage SQL Server 2012 on Windows Server Core with PowerShell session from SQL Connections.

    The Akron-Canton Chapter of AITP invited me to speak at their monthly meeting on April 24, where I'll be presenting Scalable Application Design with Service Broker.

    I was excited to be selected for this year's PASS SQL Rally conference in Dallas May 10-11, where I'll be presenting the Manage SQL Server 2012 on Windows Server Core with PowerShell session as well.

    Finally, and I am really excited about this opportunity, SQL Sentry asked me to represent them on this May's SQL Cruise to Alaska. The SQL Cruise is a great opportunity for professionals to get focused training on SQL Server from some of the top professionals in the SQL Server industry. If you look at the cost per training day, including hotel and meals, and compare it to the cruise, you'll find that the cruise works out as a better price than traditional training, and you'll get to see some of the most beautiful landscapes in the country! I'll be presenting a four hour version of my PowerShell workshop, and I'd love to see you there.

    Well, that's what's on the schedule for the spring. The summer will be a bit less busy, but I plan to get to a few events over the summer as well. I hope to see you at some of these great training events!


  • Set and Verify the Retention Value for Change Data Capture

    Last summer I set up Change Data Capture for a client to track changes to their application database to apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change:

    EXEC sys.sp_cdc_change_job @job_type='cleanup', @retention=7200

    The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify that the change was made. Well, it turns out the values are stored in the msdb database, in a table called dbo.cdc_jobs. I put together this quick query to allow them to test any database with CDC enabled, to find the retention value:

    SELECT [retention]
      FROM [msdb].[dbo].[cdc_jobs]
      WHERE [database_id] = DB_ID()
      AND [job_type] = 'cleanup'

    It's simple, but without knowing where to look, it can be confusing.
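For reference, the @retention arithmetic is simply days × 24 × 60. Here's a quick sketch in Python (illustrative only; the parameter itself belongs to sys.sp_cdc_change_job):

```python
def retention_minutes(days):
    """Convert a retention period in days to the minutes value
    expected by the cleanup job's @retention parameter."""
    return days * 24 * 60

print(retention_minutes(5))   # 5 days -> 7200 minutes
print(retention_minutes(3))   # the default 3 days -> 4320 minutes
```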


  • Load Perfmon Log Data into SQL Server with PowerShell

    In yesterday's post I showed you how I set up performance data gathering in an environment where I can't directly update my analysis database. Today I'll show you how I load that data into SQL Server for the analysis.

    First, I create tables to contain the performance counters in my analysis database. Here's what they look like:

    CREATE TABLE [Analysis].[ServerStats] (
        ServerID    int IDENTITY(1,1) NOT NULL,
        ServerNm    varchar(30) NOT NULL,
        PerfDate    datetime NOT NULL,
        PctProc     decimal(10,4) NOT NULL,
        Memory      bigint NOT NULL,
        PgFilUse    decimal(10,4) NOT NULL,
        DskSecRd    decimal(10,4) NOT NULL,
        DskSecWrt   decimal(10,4) NOT NULL,
        ProcQueLn   int NOT NULL,
        -- primary key clause reconstructed (it was garbled in transit); the constraint name is assumed
        CONSTRAINT [PK_ServerStats] PRIMARY KEY CLUSTERED ([ServerID] ASC)
    )
    GO
    CREATE TABLE [Analysis].[InstanceStats] (
        InstanceID  int IDENTITY(1,1) NOT NULL,
        ServerID    int NOT NULL,
        ServerNm    varchar(30) NOT NULL,
        InstanceNm  varchar(30) NOT NULL,
        PerfDate    datetime NOT NULL,
        FwdRecSec   decimal(10,4) NOT NULL,
        PgSpltSec   decimal(10,4) NOT NULL,
        BufCchHit   decimal(10,4) NOT NULL,
        PgLifeExp   int NOT NULL,
        LogGrwths   int NOT NULL,
        BlkProcs    int NOT NULL,
        BatReqSec   decimal(10,4) NOT NULL,
        SQLCompSec  decimal(10,4) NOT NULL,
        SQLRcmpSec  decimal(10,4) NOT NULL,
        -- primary key clause reconstructed; the constraint name is assumed
        CONSTRAINT [PK_InstanceStats] PRIMARY KEY CLUSTERED ([InstanceID] ASC)
    )
    GO
    ALTER TABLE [Analysis].[InstanceStats] WITH CHECK ADD CONSTRAINT [FX_InstanceStats] FOREIGN KEY([ServerID])
    REFERENCES [Analysis].[ServerStats] ([ServerID])
    GO
    ALTER TABLE [Analysis].[InstanceStats] CHECK CONSTRAINT [FX_InstanceStats]
    GO

    I also use stored procedures to load the data, so here's that code (pretty straight CRUD process):

    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Analysis].[insServerStats]') AND type in (N'P', N'PC'))
        DROP PROCEDURE [Analysis].[insServerStats]
    GO
    CREATE PROCEDURE [Analysis].[insServerStats]
               (@ServerID       int OUTPUT
               ,@ServerNm       varchar(30) = NULL
               ,@PerfDate       datetime = NULL
               ,@PctProc        decimal(10,4) = NULL
               ,@Memory         bigint = NULL
               ,@PgFilUse       decimal(10,4) = NULL
               ,@DskSecRd       decimal(10,4) = NULL
               ,@DskSecWrt      decimal(10,4) = NULL
               ,@ProcQueLn      int = NULL)
    AS
    BEGIN
        DECLARE @ServerOut table( ServerID int);
        -- column list and VALUES clause reconstructed from the table definition
        INSERT INTO [Analysis].[ServerStats]
               (ServerNm, PerfDate, PctProc, Memory, PgFilUse, DskSecRd, DskSecWrt, ProcQueLn)
        OUTPUT INSERTED.ServerID INTO @ServerOut
        VALUES (@ServerNm, @PerfDate, @PctProc, @Memory, @PgFilUse, @DskSecRd, @DskSecWrt, @ProcQueLn);
        SELECT @ServerID = ServerID FROM @ServerOut;
    END
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Analysis].[insInstanceStats]') AND type in (N'P', N'PC'))
        DROP PROCEDURE [Analysis].[insInstanceStats]
    GO
    CREATE PROCEDURE [Analysis].[insInstanceStats]
               (@InstanceID     int OUTPUT
               ,@ServerID       int = NULL
               ,@ServerNm       varchar(30) = NULL
               ,@InstanceNm     varchar(30) = NULL
               ,@PerfDate       datetime = NULL
               ,@FwdRecSec      decimal(10,4) = NULL
               ,@PgSpltSec      decimal(10,4) = NULL
               ,@BufCchHit      decimal(10,4) = NULL
               ,@PgLifeExp      int = NULL
               ,@LogGrwths      int = NULL
               ,@BlkProcs       int = NULL
               ,@BatReqSec      decimal(10,4) = NULL
               ,@SQLCompSec     decimal(10,4) = NULL
               ,@SQLRcmpSec     decimal(10,4) = NULL)
    AS
    BEGIN
        DECLARE @InstanceOut table( InstanceID int);
        -- column list and VALUES clause reconstructed from the table definition
        INSERT INTO [Analysis].[InstanceStats]
               (ServerID, ServerNm, InstanceNm, PerfDate, FwdRecSec, PgSpltSec, BufCchHit, PgLifeExp, LogGrwths, BlkProcs, BatReqSec, SQLCompSec, SQLRcmpSec)
        OUTPUT INSERTED.InstanceID INTO @InstanceOut
        VALUES (@ServerID, @ServerNm, @InstanceNm, @PerfDate, @FwdRecSec, @PgSpltSec, @BufCchHit, @PgLifeExp, @LogGrwths, @BlkProcs, @BatReqSec, @SQLCompSec, @SQLRcmpSec);
        SELECT @InstanceID = InstanceID FROM @InstanceOut;
    END
    GO

    Now the PowerShell. The target server may be a named instance, so I need to split the instance name on the backslash character, like I showed you yesterday. In this case, though, I'll use the name "MSSQLSERVER" for the instance name because that's the name of the default instance everywhere in SQL Server Land except for the performance counters. Once I've got my instance name ($instnm) and machine name ($mach) then I'm ready to pull in the log file from perfmon. The file is a binary file, but PowerShell 2.0 introduced a new cmdlet called Import-Counter which reads that file format nicely. I'll use that cmdlet and assign the results to the $logcntr variable.

    $srv = $inst.Split("\")
    if ($srv.Count -eq 1) {
    	$instnm = 'MSSQLSERVER'
    }
    else {
    	$instnm = $srv[1]
    }
    $mach = $srv[0]
    $logcntr = import-counter $logfil

    We have to iterate through the rows in the counter object, and each row has a property called Timestamp, which is the date and time the counter was taken, and a collection called CounterSamples. This contains all the counters gathered at that time. We'll loop through that looking at the Path property to find the counter and match it with the variable we want to use to store the value for the counter. Once we find it (using wildcards, so we don't have to worry about service names) we assign the CookedValue property to the variable we're using for that counter.

    foreach ($cntr in $logcntr) {
    	$dtm = $cntr.Timestamp
    	$clist = $cntr.CounterSamples
    	foreach ($sample in $clist) {
    		switch -wildcard ($sample.Path) {
    			'*% Processor Time*' { $ppt = $sample.CookedValue }
    			'*Available MBytes*' { $mab = $sample.CookedValue }
    			'*Paging File*' { $pfu = $sample.CookedValue }
    			'*Avg. Disk sec/Read*' { $drs = $sample.CookedValue }
    			'*Avg. Disk sec/Write*' { $dws = $sample.CookedValue }
    			'*Processor Queue Length*' { $pql = $sample.CookedValue }
    			'*Forwarded Records/sec*' { $frs = $sample.CookedValue }
    			'*Page Splits/sec*' { $pss = $sample.CookedValue }
    			'*Buffer cache hit ratio*' { $bch = $sample.CookedValue }
    			'*Page life expectancy*' { $ple = $sample.CookedValue }
    			'*Log Growths*' { $log = $sample.CookedValue }
    			'*Processes blocked*' { $blk = $sample.CookedValue }
    			'*Batch Requests/sec*' { $brs = $sample.CookedValue }
    			'*SQL Compilations/sec*' { $cmp = $sample.CookedValue }
    			'*SQL Re-Compilations/sec*' { $rcm = $sample.CookedValue }
    		}
    	}

    Once we've got the counters correctly loaded into variables we can use the stored procedures created above to load the data into the analysis database. Note that I'm using the Invoke-SQLCMD cmdlet so you'll have to make sure that's been loaded. Also note that I'm returning the identity value from the ServerStats insert into the InstanceStats table so the two can be properly joined in your analysis queries. This design allows me to monitor multiple instances on the same machine and have the counters all properly synchronized when necessary. The @ServerID parameter in the insServerStats procedure is an output parameter, and by assigning the output from the first invoke-sqlcmd cmdlet to a variable called $res (for 'results'), I can get the value by examining the ServerID property of the $res variable.

    	#Send the machine counters to our database
    	$q = "declare @ServerID int; exec [Analysis].[insServerStats]"
    	$q = $q + " @ServerID OUTPUT"
    	$q = $q + ", @ServerNm='" + [string]$mach + "'"
    	$q = $q + ", @PerfDate='" + [string]$dtm + "'"
    	$q = $q + ", @PctProc=" + [string]$ppt
    	$q = $q + ", @Memory=" + [string]$mab
    	$q = $q + ", @PgFilUse=" + [string]$pfu
    	$q = $q + ", @DskSecRd=" + [string]$drs
    	$q = $q + ", @DskSecWrt=" + [string]$dws
    	$q = $q + ", @ProcQueLn=" + [string]$pql
    	$q = $q + "; select @ServerID as ServerID"
    	#write-output $q
    	$res = invoke-sqlcmd -ServerInstance $sqlsrv -Database $destdb -Query $q
    	$SrvID = $res.ServerID
    	#Send the instance counters to the database
    	$q = "declare @InstanceID int; exec [Analysis].[insInstanceStats]"
    	$q = $q + " @InstanceID OUTPUT"
    	$q = $q + ", @ServerID=" + [string]$SrvID
    	$q = $q + ", @ServerNm='" + [string]$mach + "'"
    	$q = $q + ", @InstanceNm='" + [string]$instnm + "'"
    	$q = $q + ", @PerfDate='" + [string]$dtm + "'"
    	$q = $q + ", @FwdRecSec=" + [string]$frs
    	$q = $q + ", @PgSpltSec=" + [string]$pss
    	$q = $q + ", @BufCchHit=" + [string]$bch
    	$q = $q + ", @PgLifeExp=" + [string]$ple
    	$q = $q + ", @LogGrwths=" + [string]$log
    	$q = $q + ", @BlkProcs=" + [string]$blk
    	$q = $q + ", @BatReqSec=" + [string]$brs
    	$q = $q + ", @SQLCompSec=" + [string]$cmp
    	$q = $q + ", @SQLRcmpSec=" + [string]$rcm
    	$q = $q + "; select @InstanceID as InstanceID"
    	#write-output $q
    	$res = invoke-sqlcmd -ServerInstance $sqlsrv -Database $destdb -Query $q
    }

    After running this script for each of your perfmon log files you'll be able to query that data, do aggregate analysis, and more easily find where you're having performance problems, as well as get to know what your baseline is.


  • Setup Perfmon with PowerShell and Logman

    In a session yesterday I was reminded of the power of a tool that's on every server, Logman, for managing performance data gathering. It's a pretty amazing tool, and I'm starting to appreciate how much more it can do than I'd realized, but here's a simple example of what I've done so far.

    First things first, I'm going to leverage the SMO Server object to get the default backup directory, so I can put the perfmon output into a directory where I know it's going to be backed up. The Server.Settings.BackupDirectory property has the full path for the default backup location, so let's grab that.

    $inst = 'SQLTBWS\INST01' # I get this from a command-line argument, but hard coded it for completeness
    $s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $inst
    $bkup = $s.Settings.BackupDirectory

    Next, I'm going to check to see if a subdirectory called SQLPerf under that backup directory exists, and if not, create it.

    # Create the evaluation results directory if it doesn't exist
    if (!(Test-Path -path "$bkup\SQLPerf\")) {
    	New-Item "$bkup\SQLPerf\" -type directory | out-null
    }

    Because I get the instance name from a script argument, I don't know if it's a default or named instance. This matters when defining perfmon counters to capture, so we'll "Split" on the backslash character, and if there isn't a second element after the split, the instance is a default instance. In that case the $sqlcntr variable is set to "SQLServer", but for a named instance we need to set $sqlcntr to the instance service name, which is "MSSQL$xxxxx", where xxxxx is the instance name after the backslash. Additionally, I store the machine name in the $boxnm variable so I can create the output files using that name.

    # Get the instance name to determine if it's a default instance
    $srv = $inst.Split("\")
    if ($srv.Count -eq 1) {
    	$sqlcntr = 'SQLServer'
    	$boxnm = $inst
    	else {
    	$sqlcntr = 'MSSQL$' + $srv[1]
    	$boxnm = $srv[0]

    Next, I'll create a 'here-string' containing the configuration properties to send to Logman. Those properties consist of a list of the counters I want to capture. The only thing that's "special" here is the construct "$($sqlcntr)", which allows PowerShell to replace that construct with the counter service name, and allows us to specify valid counters for perfmon. Once we've created the string in the $cfg variable, we can pipe that to the out-file cmdlet, directing it to the machine name config file in the SQLPerf directory under the default backup directory.

    $cfg = @"
    \Processor(_Total)\% Processor Time
    \Memory\Available MBytes
    \Paging File(_Total)\% Usage
    \PhysicalDisk(_Total)\Avg. Disk sec/Read
    \PhysicalDisk(_Total)\Avg. Disk sec/Write
    \System\Processor Queue Length
    \$($sqlcntr):Access Methods\Forwarded Records/sec
    \$($sqlcntr):Access Methods\Page Splits/sec
    \$($sqlcntr):Buffer Manager\Buffer cache hit ratio
    \$($sqlcntr):Buffer Manager\Page life expectancy
    \$($sqlcntr):Databases(_Total)\Log Growths
    \$($sqlcntr):General Statistics\Processes blocked
    \$($sqlcntr):SQL Statistics\Batch Requests/sec
    \$($sqlcntr):SQL Statistics\SQL Compilations/sec
    \$($sqlcntr):SQL Statistics\SQL Re-Compilations/sec
    "@
    $cfg | out-file "$bkup\SQLPerf\$boxnm.config" -encoding ASCII

    Now we've got the config file that logman needs to configure perfmon to our specifications. I call the collection name 'SQLPerf', and you can see the command line options in the TechNet article here. Once the collection is created we run logman again and issue the start command on the collection.

    logman create counter SQLPerf -f bincirc -max 100 -si 60 --v -o "$bkup\SQLPerf\$boxnm" -cf "$bkup\SQLPerf\$boxnm.config"
    logman start SQLPerf

    Now, we don't want this to run forever, so once a day (usually at about 5am) I stop perfmon, move the counter data to a dated subdirectory, then start up perfmon again.

    # Stop Perfmon Counter log
    logman stop SQLPerf
    # Get the current date/time and create the dated subdirectory
    $dt = get-date -format yyyyMMddHHmmss
    New-Item "$bkup\SQLPerf\$boxnm$dt" -type Directory | out-null
    # Move trace files into a dated subdirectory
    Move-Item ("$bkup\SQLPerf\$boxnm*.blg") ("$bkup\SQLPerf\$boxnm$dt")
    # Start Perfmon Counter log
    logman start SQLPerf

    These steps allow me to create a consistent method of gathering performance data where the gathering process can't directly update my analysis database.


  • Getting Job History Correctly

    In my last blog post I walked through a way to grab the duration from the EnumHistory method of the JobServer/Job object. As I worked through getting a solution in place for a client I found that the duration calculation was only part of the problem.

    First, when EnumHistory() is invoked, it runs the sp_help_jobhistory stored procedure. You can create a filter object to supply the filters you want to the results, but I didn't do that. Since it's invoked in my script for each job on a server, I found, surprisingly, that for each job it calls the stored procedure, sends the results into a temporary table, and then selects from that temp table the rows where the job ID is that of the current job. So it returns the results for ALL jobs, every time, for each job. If you've got a lot of jobs on your server, that's a lot of data being returned unnecessarily.

    Another problem I found is when I want to get all history that occurred since a given point in time. The sp_help_jobhistory procedure has parameters to select start date and start time, so it would seem that populating those values would yield the desired results, right? No, it filters them as separate columns, and what you'll get are the history rows where the job was run on or since the specified date, but only those rows where the start time of the job occurred after the specified start time, on any qualifying date. So, if the start date is February 22, 2012 at 22:00, then a job that ran on February 27, 2012 at 17:00 would not appear in your result set.

    (By the way, there are also parameters to the sp_help_jobhistory procedure for JobID and for JobName. Why the EnumHistory method didn't bother to use these is beyond me.)
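To make the filtering quirk concrete, here's the comparison logic sketched in Python for illustration only (the row values are hypothetical, and the column handling is a simplification of what sp_help_jobhistory does):

```python
# Each history row carries a run date (yyyymmdd) and run time (hhmmss).
rows = [
    (20120222, 230000),  # Feb 22 at 23:00 -- after the starting point
    (20120227, 170000),  # Feb 27 at 17:00 -- also after the starting point
]

start_date, start_time = 20120222, 220000  # Feb 22, 2012 at 22:00

# How the proc filters: date and time tested as independent columns
independent = [r for r in rows
               if r[0] >= start_date and r[1] >= start_time]

# What we actually want: a true point-in-time comparison
point_in_time = [r for r in rows
                 if r[0] > start_date
                 or (r[0] == start_date and r[1] >= start_time)]

print(independent)     # drops the Feb 27 17:00 row
print(point_in_time)   # keeps both rows
```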

    In any event, to get the history that occurred since a point in time, in PowerShell invoke the get-date cmdlet, once for the date and once for the time, casting the time as an integer variable, like this (where $gatherdt contains the date/time for your starting point):

    $strtdt = get-date $gatherdt -format yyyyMMdd
    [int]$strttm = get-date $gatherdt -format HHmmss

    Then you can call the sp_help_jobhistory procedure, like this:

    $q = "exec msdb.dbo.sp_help_jobhistory @mode='FULL',@oldest_first=1,@start_run_date='$strtdt'"
    $jobhist = invoke-sqlcmd -ServerInstance $inst -Database 'msdb' -query $q

    You'll get history for all jobs, but only since 00:00 on the starting date. You can then test the results against the start time by ensuring that on the start date, you only process the rows received after the start time like this:

    #Now extract from the job history the results of the jobs run since the last gather date
    [int]$strtdt = get-date $gatherdt -format yyyyMMdd
    $jobhist | foreach {
    	# Filter data occurring before last gather date
    	$ignore = $True
    	$jdt = $_.run_date
    	$jdtt = $_.run_time
    	if (($jdt -gt $strtdt) -or (($jdt -eq $strtdt) -and ($jdtt -gt $strttm))) {
    		$ignore = $False
    	}
    }

    (I recast the start date as an integer because that's how the data is returned from the proc.) For each row I set the $ignore variable to $True, get the incoming date and time, then test whether the date is greater than the start date, or the date equals the start date and the time is greater than the start time. If the row passes those tests I set $ignore to $False, which allows me to process the incoming row.


  • Handle Duration Results from EnumHistory in PowerShell

    Lately I've been very busy with client work, and one of my projects is to capture the history for SQL Agent jobs across all their servers so they can evaluate when to schedule jobs. (Yes, I know SQL Sentry has an excellent product for this, but right now there are budget constraints.) Anyway, I can do this in PowerShell with SMO.

    Once you connect to a server in SMO, the JobServer object has a collection called Jobs. For each job in that collection, you can call the EnumHistory method and it will return both job and job step results, including duration.

    On the Technet page describing the datatable returned when calling EnumHistory, it indicates that the Duration property is the duration of the job or step in seconds. Unfortunately this is blatantly incorrect.

    Let's say a long running job returns a duration value of 93413. Well, converting seconds to hours, minutes and seconds returns 1.01:56:53 (one day, 1 hour, 56 minutes and 53 seconds). But if you look in the View History window for the job in SQL Server Agent you'll see that the job ran in 9 hours, 34 minutes and 13 seconds. Yes, the duration of 93413 translates to 9:34:13.

    I'm not sure who made the decision to record the time this way, because parsing it back out so that it's useful is problematic. I need to get this done, so I wrote this PowerShell function for just that purpose.

    # Function to turn the Duration from EnumHistory() into a usable time value
    Function Convert-Duration {
    param ([int]$sec)

    #Now break it down into its pieces
    if ($sec -gt 9999) {$hh = [int][Math]::Truncate($sec/10000); $sec = $sec - ($hh*10000)}
    else {$hh = 0}
    if ($sec -gt 99) {$mm = [int][Math]::Truncate($sec/100); $sec = $sec - ($mm*100)}
    else {$mm = 0}

    #Format and return the time value
    $dur = ("{0:D2}" -f $hh) + ':' + ("{0:D2}" -f $mm) + ':' + ("{0:D2}" -f $sec)
    $dur
    }

    The function takes an integer value as a parameter, and returns a string formatted to be useful as a time string, so this code:

    $sec = 93413
    $dur = Convert-Duration $sec

    returns a result of 09:34:13, which I then use to pass to a stored procedure parameter defined as a datatype of TIME.
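For illustration only, here's the same HHMMSS decoding sketched in Python, which makes the digit-splitting explicit:

```python
def decode_duration(dur):
    """Split SQL Agent's HHMMSS-encoded duration integer (e.g. 93413)
    into hours, minutes, and seconds using truncating division."""
    hh, rest = divmod(dur, 10000)  # top digits are hours
    mm, ss = divmod(rest, 100)     # then minutes, then seconds
    return "%02d:%02d:%02d" % (hh, mm, ss)

print(decode_duration(93413))  # -> 09:34:13
print(decode_duration(45))     # -> 00:00:45
```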

    I had to add the [Math]::Truncate() directive on the division because PowerShell rounds when doing integer division, and I needed it to just truncate fractions.
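The difference truncation makes can be sketched in Python for illustration (Python's round(), like a PowerShell [int] cast, rounds to the nearest integer rather than truncating):

```python
import math

# Dividing an HHMMSS duration by 10000 should yield the hours digits.
# 96413 / 10000 = 9.6413, which rounds up to 10 instead of truncating to 9.
print(round(96413 / 10000))       # 10 -- rounded, the wrong hours value
print(math.trunc(96413 / 10000))  # 9  -- truncated, the right hours value
```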


    I hope that helps you, because with that function my job is much easier.


  • [Speaking] Manage SQL Server 2012 on Windows Server Core with PowerShell

    This evening I'll be presenting a brand new session on installing and managing SQL Server 2012 with PowerShell. Here's the abstract:

    Windows Server 2008 introduced Server Core, the operating system without the Graphical User Interface. SQL Server 2012 is the first version of SQL Server to support Server Core. To gain the maximum efficiency from your servers you want to make use of Server Core and with PowerShell you can effectively manage SQL Server in that environment. This session will demonstrate how to manage SQL Server 2012 on Windows Server Core and provide scripts to help you perform the common tasks you need to perform on that platform.

    Please join us at the Triangle SQL Server User Group meeting in Raleigh, NC for this first look at making your servers more efficient.

    I look forward to seeing you there.


  • T-SQL Tuesday #25 Followup - Just in Time for the Holidays

    T-SQL Tuesday #25 fell on the same day I brought my mother home from a 3-day emergency trip to the hospital. She's much better, now, but it was a bit crazy there for a little bit. Thanks for everyone's best wishes.

    Rob Farley kicked off the official start of the event with a post titled A T-SQL Tip: Working calculations. He showed us how to use the CROSS APPLY join operator to facilitate date calculations.

    Greg M Lucas followed up with a list of Management Studio shortcuts (not exactly T-SQL, but useful nonetheless) here.

    Pinal Dave (aka the SQL Authority) contributed a few scripts to help you manage your SQL Server configuration with his post SQL SERVER - A Quick Look at Performance - A Quick Look at Configuration.

    We love comic books and super heroes, and Tracy McKibben (RealSQLGuy) fed that love with a couple of links to his performance capture scripts and to Adam Machanic's WhoIsActive script here.

    In her first ever T-SQL Tuesday post, Mala Mahadevan shared with us a number of scripts that provide useful information and automate some processes here.

    Sam Vanga's Tips and Tricks shared with us a way to clear the windows clutter of Management Studio and focus on the T-SQL here.

    On Chris Shaw’s Weblog we learn how to read the SQL Server error logs via T-SQL and manage them across many servers here.

    Using a method called "selective aggregates", Kent Chenery shows us how to efficiently summarize multi-year aggregate data here.

    Ricardo Leca shared a script to create mirrored databases in T-SQL and making use of SQLCMD here.

    A former Clevelander living in Washington, DC, Matt Velic shares a trick to make building referential integrity in databases easier in Easy Integrity.

    Jason Strate showed us a script that you can use to show the various execution plans using a specific index here. In some performance work I'm doing for a client right now this script will really come in handy.

    One of my favorite new people on the SQL Server scene, Jes Schultz Borland published a script to show how much transaction log space is used here.

    Bob Pusateri demonstrates how to concatenate the returned rows from a query into a single string without using a cursor here.

    My good friend John Sterrett has a nifty script to show us what queries are currently running right now in this post.

    John Pertell keeps track of table growth with the scripts in this post.

    Dev Nambi takes advantage of the INTERSECT operator to show how to ensure that query changes made for performance return the same result set here. I'm adding this one to my toolkit right away!

    Some folks, like Richard Lewis (@gogorichie) posted links to earlier posts with their favorite tricks here.

    David Howard points out some of the tricks in using the TOP keyword here.

    One of my friends from the Cleveland group, Colleen Morrow, talked about organizing Registered Servers and how to run queries against multiple servers simultaneously using the groups in the Registered Servers tab in Management Studio here.

    Aside from saying some very nice things about my presentations, Matt Nelson supplied a nifty script to return date range values via a view here.

    Sebastian Meine talks about transaction log reuse waits, and how to determine why the log keeps growing here.

    Nic Cain provides us a script that you can use to find out what Reporting Services reports have failed here.

    Steve Wales also provided links to earlier posts here.

    Jason Brimhall talked about some of the built-in property features in T-SQL here.

    Aaron Bertrand also showed us how to be more productive in Management Studio with his tips here.

    Rob Volk shares ways we can more easily use constraints in T-SQL here.

    Mark Broadbent introduces us to The Best Tool You’ve Never Used. (Not exactly a T-SQL tip, but cool nonetheless.)

    Another fellow Clevelander, Steve Smith, shares a grab bag full of cool tricks here.

    Per the standard rules of T-SQL Tuesday, this ends the event, but there were a few stragglers that came in late:

    Nigel Sammy: here.

    Steve Jones: here.

    Jason Grobe: here.

    Thanks to everyone who participated. I definitely learned some great new ways to work with SQL Server.


  • Speaking about Performance and PowerShell

    I'll be presenting at the South East Michigan SQL Server User Group (SEMSSUG) tonight. The session will be Gather SQL Server Performance Data with PowerShell.

    We all know how important it is to keep a baseline of performance metrics that allows us to know when something is wrong and helps us track it down and fix the problem. We don't always know how to do this easily and consistently. This session will walk you through a series of PowerShell scripts you can schedule to capture the most important data, along with a set of reports that show you how to use that data to keep your server running smoothly.

    You can register for the meeting via this link:

    The meeting will be at the Raymond James Southfield Offices, 25900 Telegraph Rd, Southfield, MI 48033. (Actual entrance is on Civic Center Drive, just 100 meters east of Telegraph Rd.) Directions found by clicking here.

    The meeting starts at 6pm and runs until 8pm, and DBSophic is the sponsor, providing the pizza.

    I hope to see you there.


  • T-SQL Tuesday #025 - Invitation to Share Your Tricks

    It doesn't seem that long ago that having cool little tidbits of information about SQL Server made a huge difference in how effective you could be. Well, that's still true, but let me give you an example.

    SELECT name FROM sysobjects WHERE sysstat & 4 > 0

    In the early days of SQL Server, this was the way to pull a list of the names of all the stored procedures in your database. The 4 bit in the sysstat column represented stored procedures. (1 represented user tables and 2 represented views, as I recall, so changing the WHERE clause to read WHERE sysstat & 7 > 0 returned all tables, views, and stored procedures.)

    As SQL Server has evolved, Microsoft has made it easier to query the metadata to determine which objects exist, adding helpful columns (like 'type' in this case), catalog views, Dynamic Management Objects, and more.
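    To make the contrast concrete, here's a sketch of the modern equivalents using the catalog views (available since SQL Server 2005):

    ```sql
    -- The type column replaces the old sysstat bit mask
    SELECT name FROM sys.objects WHERE type = 'P';

    -- Or use the dedicated catalog view
    SELECT name FROM sys.procedures;

    -- All tables, views, and stored procedures, like the old sysstat & 7 test
    SELECT name, type_desc
    FROM sys.objects
    WHERE type IN ('U', 'V', 'P');
    ```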

    So, the challenge for this month's T-SQL Tuesday is: What T-SQL tricks do you use today to make your job easier? (Notice I didn't say PowerShell - I have a bunch of those now, but this is T-SQL Tuesday, not PowerShell Tuesday.)

    Here are the rules:

    1. Your post must go live between 00:00:00 GMT on Tuesday December 13 and 00:00:00 GMT on Wednesday December 14.
    2. Your post must link back to this post, and the link must be anchored from the logo (found above) which must also appear at the top of your post.
    3. Leave a comment or a trackback here on this blog regarding your post.
    4. 'T-SQL Tuesday #025' should be included in the title of the post.
    5. If you're on Twitter, tweet about your post using the hash tag #TSQL2sDay

    I'm looking forward to reading what goodies you're all going to share. After the event closes I'll post a summary of all your contributions.


  • We Are SQLFamily

    On Monday, Tom LaRock (b|@sqlrockstar) presented his #MemeMonday topic as What #SQLFamily Means To Me. The #sqlfamily hash tag is a relatively new one, but is amazingly appropriate.

    I've been working with relational databases for almost 20 years, and for most of that time I've been the lone DBA. The only one to set things up, explain how things work, fix the problems, make it go faster, etc., etc., yadda, yadda, yadda.  I enjoy being 'the guy', but at the same time it gets hard. What if I'm wrong? Is everything I've told people now taken with suspicion? Sometimes that's happened, and it sucks.

    Since I started attending the PASS Summit I've gotten to know a lot more people who do the same thing I do. That's been wonderful, as I have people I can contact if I'm unsure of something.  When I was awarded the MVP Award in July, 2007 I became a part of a smaller community, and really got to know many of the people whose work was instrumental in my development.

    I met Tom at the PASS Summit in 2008, when I was moderating a panel discussion on automating databases. While the discussion was taking place, in front of a full audience, he and Brent Ozar (b|@BrentO) were typing into their laptops furiously, using this thing called Twitter. I was taken aback, and a bit put off. I've always known that when you're 'on stage', the audience comes first, and these guys were, in my mind, distracted and distracting. What I didn't realize until later was that they were communicating with the audience directly.

    Twitter has changed the way we interact with each other. We still work mostly alone, but we have hundreds, even thousands, of people following whatever we choose to share. I can honestly say that I've actually become closer to the SQL community because of it. And yes, Tom, it's become a second family to me.  We share our successes and our failures, we ask questions, we answer questions.

    Not just SQL stuff, though. On my recent trip to the PASS Summit I drove first to Chicago (from Cleveland, where I live) and caught the train to Portland, my first stop on that amazing trip. I love taking the train, but it arrives in Cleveland, in both directions, at about 3am, and there's no public transportation here at that time. It's actually quicker to drive to Chicago than to take the train. The problem was always 'where do I leave my car'. I asked Wendy Pastrick (b|@wendy_dance) and she was kind enough to provide me driveway space for the 2 weeks I was out west. The return train was 5 hours late getting into Chicago, and Wendy went so far as to drive downtown to pick me up! That's family.

    At the end of the week of the PASS Summit, I stayed in Washington to run the Poulsbo Marathon. Kalen Delaney (b|@sqlqueen) lives in Poulsbo and invited me to spend the weekend with her family. She took pictures of me running during the race and crossing the finish line. It was a wonderful weekend, and I truly felt part of her family.

    Are there people in that family I don't get along with? Certainly. We all have family members who we don't care for, but they're still family, and this SQL family is no different. But I'll bend over backwards to help out any of them because I know they'll do the same for me, and that's what family is about.

    (I won't go into details about Tech Ed 2010 in New Orleans, guys. You were there, you know.)

    So, what does #SQLFamily mean to me? A lot. I've got this wonderful, world-wide group of people who all understand what I'm talking about when I talk tech, and who care about me, as I care about them.

    What more could we ask for?


  • For Servers the Command Line Rules

    When I first learned to program, back in the fall of 1968, the first 'word' the computer I connected to said to me was 'READY'.  That summed it up neatly. The computer was ready to do work, it was just waiting for me to give it instruction.

    As my knowledge of programming grew, and my focus turned to providing businesses useful applications for making decisions, my interface to the computer changed: typewritten prompts on a scroll of paper, green text glowing from a black CRT, and on through the current era of graphical user interfaces. The fact remains that the computer is still waiting for me to give it instruction. It is READY.

    Along that path I moved from developer (we called ourselves 'programmers' then) through network and communications specialist, to Unix and Windows admin, to database administrator. The focus was always on keeping the computer busy at all times, to keep it from waiting for instructions from us.

    Somewhere along the line, server operating systems were given a graphical user interface (GUI). Why? Don't know, I wasn't the guy making that decision. It goes against the need to keep it busy.  GUIs offer choices. Choices take away from keeping that system busy. A lot of server admins who only learned their craft on Windows Servers seem to only understand how to manage the system via the GUI, and to me that's a shame. They've missed the point.

    In 2006, Microsoft introduced PowerShell, a true shell environment for Windows. HOORAY! I've got the command line environment again (without resorting to the brain-dead cmd.exe) with the great shell features I grew to love when working as a Unix admin, like the pipeline.

    In 2009, Microsoft released PowerShell v2, with its built-in Integrated Scripting Environment (ISE). PowerShell v2 made great strides over the original v1, and I applauded their efforts. In v2 they added a cmdlet called Out-GridView.  What? Why are you putting GUI output in my scripting language?

    Windows Server 2008 introduced Server Core, a version of the operating system that doesn't include the GUI. Awesome! We can now focus on getting lots of work done again! An added bonus for me is that SQL Server 2012 will support Server Core, so we don't have to slow down the servers with all that graphical rendering (and it does take considerable resources to accomplish this).

    Now, PowerShell has great use on the desktop as well as on the servers, and from that aspect, having GUI output does make sense. The ISE is also useful for writing code, though I've spent a lot of years writing a lot of code without using a GUI to do so.  I don't use it because, for the most part, I don't need it. I'd rather have the full command window to see my results than slice my workspace into three parts and work within one of them. But that's me.

    What's my point? There is a wealth of tools, and they're designed to make you or your computer more efficient. Use the ones you find most convenient, that work the way you want to work. What works for you may not work for me, and vice versa.  The tools I use to get the work done don't define the value of my work; the work I produce does.

    The important thing is to get the most work done, and for me that means working from the command line, also known as PowerShell.exe.

  • PASS Summit 2011 - The Final Day

    Today's the last day of the Summit, and Rick Heiges introduced Rob Farley and Buck Woody, who sang Rob's "Query Sucks" song. As is everything done by these two, it was way too much fun.  Rick also did a retrospective of Wayne Snyder, PASS Immediate Past President. Wayne recognized Rick, who's rolling off the board this fall as well.  Wayne ended with the wish that "as you slide down the banister of life, may the splinters of success stick in your career."

    SQL Rally Nordic is now sold out! SQL Rally Dallas will take place May 10 and 11. Many SQL Saturdays are already on the schedule, and more are coming.  PASS Summit 2012 will take place November 6-9, 2012, with two precon days on November 5 & 6.  All PASS attendees are getting an ebook copy with four chapters from the SQL Server MVP Deep Dives (Volume I) as a thank you for attending.  Rick then introduced Dr. David DeWitt to talk about Big Data.

    Dr. DeWitt first introduced Rimma Nehme, who's part of his team at the Jim Gray Systems Lab in Madison, Wisconsin.

    Big Data breaks down to massive volumes of records, whether recorded in relational databases or not. By 2020, we'll be managing data in the range of zettabytes, an estimated 35 ZB in total.  Data is being generated by automated sources, producing the incredible quantities we're anticipating.  The dramatic drop in the cost of hardware is the reason behind the increase in the amount of data we keep.

    eBay uses a parallel data system, averaging about 10 PB on 256 nodes, while Facebook and Bing use NoSQL systems, managing 20 PB and 150 PB, respectively.  He told us that NoSQL doesn't mean that SQL is dead; it means Not Only SQL - other systems will manage data as well as relational systems.  It incorporates more data model flexibility, relaxed consistency models such as eventual consistency, low upfront costs, etc.

    The NoSQL approach stores data as it arrives, without data cleansing or staging, and the data is evaluated as it arrives.  It can use a Key/Value Store method, as in MongoDB, CouchBase, etc., where the data model is very flexible and supports single-record retrievals through a key, or other systems like Hadoop, which Microsoft is now supporting.  Relational systems impose a structure, whereas NoSQL uses an "unstructured" model.  Relational systems provide maturity and reliability, and NoSQL systems provide flexibility.

    This is NOT a paradigm shift. SQL is NOT going away. (Codasyl to relational in the 70s was a paradigm shift.) Businesses will end up with data in both systems.

    Big Data started at Google, because they had massive amounts of click stream data that had to be stored and analyzed. It had to scale to petabytes and thousands of nodes. It had to be fault tolerant and simple to program against.  They built a distributed file system called GFS and a programming system called MapReduce.  Hadoop = HDFS + MapReduce.  Hadoop and MapReduce make it easy to scale to large amounts of data, with fault tolerance and low software and hardware costs.

    HDFS is the underpinning of the entire ecosystem. It's scalable to thousands of nodes and assumes that failures are common. It uses a write-once, read-many model, sits on top of a traditional file system, and is highly portable.  A large file is broken up into 64MB blocks, stored separately on the native file system.  The blocks are replicated so that hardware failures are handled: block 1, after being written on its original node, will also be stored on two additional nodes (2 and 4). This allows a high level of fault tolerance.

    Inside Hadoop there's a name node, which has one instance per cluster. There's also a backup node in case the name node has a failure, and there are data nodes.  In HDFS the name node is always checking the state of the data nodes, ensuring that they are alive and balanced.  The application has to send a message to the name node to find out where to put the data it needs to write. The name node reports where to place the data, but then gets out of the way and lets the application manage the data writes.  Data retrieval is similar: the application asks the name node where the data lives, then gets it from the nodes where it's written.

    Failures are handled as an intrinsic part of HDFS. The multiple writes always ensure that the data is stored on nodes on multiple devices, so that even rack or switch failures allow access to the data on another device that's still available.  When additional hardware is added, the data nodes are rebalanced to make use of it. HDFS is highly scalable and doesn't make use of mirroring or RAID, but you have no clue where your data really is.

    MapReduce is a programming framework to analyze the data sets stored in HDFS. Map processes each of the smaller chunks in parallel, then Reduce aggregates the intermediate results until the work is done. There's a JobTracker function which manages the workload, and TaskTracker functions which manage the data analysis against all the blocks.  The JobTracker task lives on top of the Name Node, and the TaskTracker tasks live on the systems with the Data Nodes.

    The actual number of map tasks is larger than the number of nodes; this allows map tasks to take over work for tasks that fail. Failures are detected by master pings.  MapReduce is highly fault tolerant, relatively easy to write, and removes the burden of dealing with failures from the programmer.  The downside is that the schema is embedded in the application code; there is no shared schema, and there's no declarative query language.  Both Facebook's HIVE language and Yahoo's PIG language use Hadoop's MapReduce methodology in their implementations.

    Hive introduces a richer environment than pure MapReduce and approaches standard SQL in functionality. Facebook runs 150K jobs daily, and maybe 500 of those are pure MapReduce applications; the rest are HiveQL.  In a side-by-side comparison, a couple of standard queries ran about 4 times longer than the same queries using Microsoft's PDW (next release, not yet available).

    Sqoop provides a bridge between the world where unstructured data exists and the structured data warehouse world.  Sqoop is a command-line utility to move data between those two worlds.  Some analyses are hard to do in a query language and are more appropriate for a procedural language, so moving data between them makes sense.  The problem with Sqoop is that it's fairly slow.

    The logical answer is to build a data management system that understands both worlds.  Dr. DeWitt terms this kind of system an Enterprise Data Manager.  Relational systems and Hadoop are designed to solve different problems; the technologies complement each other, and each is best used where appropriate.

    It's so wonderful that PASS brings in Dr. DeWitt to help us get back to the fundamentals behind what we do every day.  I love the technical keynotes and really wish Microsoft would learn that marketing presentations aren't why we're here.

    It's time to get on to the next session, but this has been a great PASS Summit.


  • PASS Summit 2011 Day 2 - Women In Technology & SQLKilt Day

    The second full day at the Summit is the day we recognize Women in Technology, and (merely by coincidence) the day we wear kilts to the event. This latter tradition started two years ago when Grant Fritchey, Steve Jones and Bill Fellows wore their kilts to the conference.  Last year close to forty of us wore them, and it's grown beyond that this year.  I'm sitting in the ballroom watching people walk in for the keynote, many in kilts, and it's awesome.

    Bill Graziano came out to start the keynote wearing a kilt. Before the keynote started Bradley Ball took this picture of Bill, Grant Fritchey and me.

    Kilt Day

    Bill went on to acknowledge the volunteers who make PASS possible.  This year's PASSion award went to Lori Edwards, who managed the Program Committee. I worked with her on the Election Review Committee and got to see firsthand how hard she works for PASS.

    Bill then shared some basic information on the financial progress PASS has been making, and the growth of PASS through both the Summit and through other avenues such as SQL Saturday and Virtual Chapters.

    The keynote speaker this morning is Quentin Clark from Microsoft. He first revisited Ted's goals from yesterday's keynote, then identified the "Fantastic 12 of SQL Server 2012":

    • Required 9s & Protection
    • Blazing Fast Performance
    • Rapid Data Exploration
    • Managed Self-Service BI
    • Credible, Consistent Data
    • Organizational Compliance
    • Peace of Mind
    • Scalable Data Warehousing
    • Fast Time to Solution
    • Extend Any Data, Anywhere
    • Optimized Productivity
    • Scale on Demand

    Bob Erickson, Executive Vice President for Mediterranean Shipping Company, came out to talk about how the AlwaysOn technology keeps his mission-critical processing going. Note: when you're doing demos, the audience has to be able to see the details on the screen.  (ZoomIt is your friend.)

    Quentin talked about VertiPaq and using the ColumnStore index to dramatically improve performance for BI applications.  He mentioned Power View and PowerPivot, and the improvements in accessibility of data through those tools.

    Lara Rubbelke came out to show SharePoint and data warehouse reporting performance. She showed the creation and use of the ColumnStore index and how the Data Quality Services enhancements dramatically improved the quality and performance of queries behind critical reports.

    They also showed the Parallel Data Warehouse appliance offerings from both Dell and from HP.  (Most of my experience has been in fairly small shops, so I've never seen a good business case for these offerings, but large companies appear to be excited about it.)

    They've announced ODBC drivers for Linux, and Change Data Capture for both SSIS and Oracle. They've also added objects to the Beyond Relational space, including FileTable, 2D Spatial and Semantic Search. Michael Rys came out to show the Statistical Semantic Search (and he'd tweeted that he was not going to use any Excel in his demos!)  His demo was interesting and I'm looking forward to seeing business use cases for Semantic Search. It really looked like an interesting tool to use.

    Nicholas Dritsas came out to show a good integration between SQL Server and SQL Azure. Yesterday I attended a session with Buck Woody and Kevin Kline on using Azure to scale corporate applications, and it's making more and more sense.  It appears they've made significant progress in adding the capabilities to SQL Azure to match the technical capabilities already existing in the standard relational platform. He was followed by Cihan Biyikoglu to show the elastic scaling capabilities of SQL Azure federations.  OK, the dependency walker (and the new execution plan view) in the management console looks really interesting.

    Keynotes are by design marketing-oriented, and it's impractical to get into any real detail in a forum like this. Reports and graphs seem to be the tools they use to excite people, but I'd like better examples.

    OK, time to head to my session room and prepare for my session.


