
Allen White

T-SQL Tuesday #015: Remaining on Alert

This T-SQL Tuesday is about Automation in SQL Server, and I'll bet you think I'm going to talk about PowerShell.

Well, you're partially right. What I'd like to talk about today, though, is setting up alerts to automate responses to naturally occurring phenomena on your SQL Servers. (Note: I first introduced this technique in an article on Simple Talk here.)

Without going back to the original article, I'm talking about a safety-valve process for when your transaction logs start to fill at higher rates than normal, which causes the log files to expand (slowing down your systems) and makes your transaction log backups take significantly longer.

Specifically, I like to create a Performance Condition Alert that monitors the transaction log and, when it exceeds 50 percent of its capacity, fires a job to back up the transaction log for that database. SQL Server has had this capability since at least SQL Server 7, and perhaps even earlier.
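Before settling on a threshold, it's worth seeing where your logs sit today. Here's a minimal sketch using DBCC SQLPERF(LOGSPACE) through SMO; the instance name is just a placeholder:

# Report current log usage per database, to judge whether 50 percent
# is the right threshold for your systems ('MYSERVER' is illustrative)
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'MYSERVER'
$ds = $srv.Databases['master'].ExecuteWithResults('DBCC SQLPERF(LOGSPACE)')
$ds.Tables[0] | Format-Table -AutoSize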

So, the full script is attached to this post, but here's the "meat" of it.

First, connect to the server using SMO, grab the current backup directory, and drop the stored procedure db_log_bkup in msdb if it already exists.

# Load the SMO assembly first (assumes the SQL Server client tools are installed)
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null

# Connect to the specified instance ($inst is the instance name passed to the script)
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $inst

# Get the directory where backups are stored
$bkdir = $s.Settings.BackupDirectory

# First create the stored procedure to handle the transaction log backups
$db = $s.Databases['msdb']
$proc = $db.StoredProcedures['db_log_bkup']
if ($proc) {
  $proc.Drop()
  }

Next, recreate the db_log_bkup stored procedure in msdb to back up the transaction log for the database passed in as a parameter.

$proc = new-object ("Microsoft.SqlServer.Management.Smo.StoredProcedure") ($db, 'db_log_bkup')
$proc.TextMode = $False
$proc.AnsiNullsStatus = $False
$proc.QuotedIdentifierStatus = $False
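# Note: database names are sysname (nvarchar(128)); 50 covers typical names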
$param = new-object ("Microsoft.SqlServer.Management.Smo.StoredProcedureParameter") ($proc, '@database')
$param.DataType = [Microsoft.SqlServer.Management.Smo.Datatype]::NVarChar(50)
$proc.Parameters.Add($param)

$q = @"
declare @strbackup varchar(500), @strDate varchar(30)
set @strDate = CONVERT(varchar, getdate() , 112)
set @strDate = @strDate + Left(CONVERT(varchar, getdate() , 108),2)
set @strDate = @strDate + SubString(CONVERT(varchar,getdate(),108),4,2)
set @strDate = @strDate + SubString(CONVERT(varchar,getdate(),108),7,2)

set @strbackup = '$bkdir' + '\' + @database + '_tlog_' + @strDate + '.TRN'
BACKUP log @database to disk = @strbackup
"@
$proc.TextBody = $q
$proc.Create()
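One subtlety: the here-string is double-quoted, so PowerShell expands $bkdir when the string is built, baking the backup directory into the procedure's text. Once the procedure exists, you can smoke-test it by hand; a quick sketch, with the database name purely illustrative:

# Assumption: 'AdventureWorks' stands in for one of your databases;
# $db still points at msdb here, so the call runs in the right database
$db.ExecuteNonQuery("exec dbo.db_log_bkup 'AdventureWorks'")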

Now, we need to drop any jobs with 'log_bkup' and any alerts with 'threshold' in their names.

# Drop the jobs with names that contain 'log_bkup'
$jobs = $s.JobServer.Jobs
$dropjobs = @()
foreach ($job in $jobs) {
  # Check the name
  if ($job.Name -match 'log_bkup') {
    $dropjobs += $job.Name
    }
  }

if ($dropjobs.Count -gt 0) {
  foreach ($job in $dropjobs) {
    # Get the job, then drop it
    $dropit = $s.JobServer.Jobs[$job]
    $dropit.Drop()
    }
  }

# Drop the alerts with names that contain 'threshold'
$alerts = $s.JobServer.Alerts
$dropalerts = @()
foreach ($alert in $alerts) {
  # Check the name
  if ($alert.Name -match 'threshold') {
    $dropalerts += $alert.Name
    }
  }

if ($dropalerts.Count -gt 0) {
  foreach ($alert in $dropalerts) {
    # Get the alert, then drop it
    $dropit = $s.JobServer.Alerts[$alert]
    $dropit.Drop()
    }
  }
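As an aside, the same snapshot-then-drop pattern can be written more tersely with the pipeline; the behavior is identical, it's purely a style choice:

# @(...) materializes the matches before any Drop() runs, so we never
# modify a collection while enumerating it
foreach ($job in @($s.JobServer.Jobs | Where-Object { $_.Name -match 'log_bkup' })) {
  $job.Drop()
  }
foreach ($alert in @($s.JobServer.Alerts | Where-Object { $_.Name -match 'threshold' })) {
  $alert.Drop()
  }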

Notice that I created an empty collection using the @() construct, added the names of matching jobs and alerts to it, and then went back and dropped them by name; the pipeline version above is just a terser way to write the same thing. We can't drop a job or alert while walking the Jobs or Alerts collection, because you can't modify a collection you're enumerating through.

Next we'll iterate through the available databases, skipping system databases and any database using the Simple recovery model. For each database we'll create a backup job, create a threshold alert, and tie the alert to the backup job.

# Get the databases for the instance, and iterate through them
$dbs = $s.Databases
foreach ($db in $dbs) {
  # Skip system databases, inaccessible databases, and databases using
  # the Simple recovery model (their transaction logs can't be backed up)
  if ($db.IsSystemObject -ne $True -and $db.IsAccessible -eq $True -and $db.RecoveryModel -ne 'Simple') {
    # Store the database name
    $dbname = $db.Name

    # Create the log backup job for the database
    $jname = $dbname + '_log_bkup'
    $j = new-object ('Microsoft.SqlServer.Management.Smo.Agent.Job') ($s.JobServer, $jname)
    $j.Description = 'Threshold Backup for Database ' + $dbname
    $j.Category = '[Uncategorized (Local)]'
    $j.OwnerLoginName = 'sa'
    $j.Create()
    $jid = $j.JobID

    $js = new-object ('Microsoft.SqlServer.Management.Smo.Agent.JobStep') ($j, 'Step 01')
    $js.SubSystem = 'TransactSql'
    $js.DatabaseName = 'msdb'   # the db_log_bkup procedure lives in msdb
    $js.Command = "exec db_log_bkup '$dbname'"
    $js.OnSuccessAction = 'QuitWithSuccess'
    $js.OnFailAction = 'QuitWithFailure'
    $js.Create()
    $jsid = $js.ID

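    # A job needs at least one target server before Agent can run it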
    $j.ApplyToTargetServer($s.Name)
    $j.StartStepID = $jsid
    $j.Alter()

    # Build the performance counter object name: a default instance uses
    # 'SQLServer', a named instance uses 'MSSQL$' plus the instance name
    $nm = $inst.Split("\")
    if ($nm.Length -eq 1 -or $nm[1] -eq 'MSSQLSERVER') {
      $iname = 'SQLServer'
      } else {
      $iname = 'MSSQL$' + $nm[1]
      }

    # Create the performance condition, using Agent's
    # Object|Counter|Instance|Comparator|Value format
    $perfcond = $iname + ':Databases|Percent Log Used|' + $dbname + '|>|50'

    # Create the alert
    $alertname = $dbname + '_threshold'
    $alert = new-object ('Microsoft.SqlServer.Management.Smo.Agent.Alert') ($s.JobServer, $alertname)
    $alert.CategoryName = "[Uncategorized]"
    $alert.PerformanceCondition = $perfcond
    $alert.JobID = $jid
    $alert.Create()

    }

  }
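To verify the results, you can list each alert alongside its condition and the job it fires; a quick check that isn't part of the attached script:

# List the threshold alerts, their target jobs, and their conditions
foreach ($a in $s.JobServer.Alerts | Where-Object { $_.Name -match 'threshold' }) {
  "{0} -> {1} [{2}]" -f $a.Name, $a.JobName, $a.PerformanceCondition
  }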

And that's it - the script creates the jobs and alerts necessary to watch your transaction log usage and automatically back up the transaction log if it gets too full.

The point of this post is to show that you can actually automate your automation. I like building scripts like this (dropping the existing objects first) because I can run them as many times as I'd like; the script cleans up what's there before it starts its work. I could schedule this script to run once a day, so that any new databases added during the day automatically get this attention.
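If you do schedule it, one option is to make the script a daily SQL Agent job itself. Here's a sketch using the CmdExec subsystem, with the job name, script path, and start time all illustrative:

# Hypothetical wrapper job that reruns alertlog.ps1 every morning at 2:00
$wrap = new-object ('Microsoft.SqlServer.Management.Smo.Agent.Job') ($s.JobServer, 'refresh_threshold_alerts')
$wrap.OwnerLoginName = 'sa'
$wrap.Create()
$step = new-object ('Microsoft.SqlServer.Management.Smo.Agent.JobStep') ($wrap, 'run alertlog')
$step.SubSystem = 'CmdExec'
$step.Command = 'powershell.exe -File "C:\Scripts\alertlog.ps1" "' + $inst + '"'
$step.Create()
$sched = new-object ('Microsoft.SqlServer.Management.Smo.Agent.JobSchedule') ($wrap, 'daily')
$sched.FrequencyTypes = 'Daily'
$sched.FrequencyInterval = 1
$sched.ActiveStartTimeOfDay = '02:00:00'
$sched.Create()
$wrap.ApplyToTargetServer($s.Name)
$wrap.Alter()

Good luck.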

Allen

Published Tuesday, February 08, 2011 8:05 PM by AllenMWhite

Attachment(s): alertlog.zip


About AllenMWhite

Allen White is a consultant and mentor for Upsearch Technology Services in Northeast Ohio. He has worked as a Database Administrator, Architect and Developer for over 30 years, supporting both the Sybase and Microsoft SQL Server platforms over that period.
