This week I have been pretty heads down working with a Microsoft Premier Field Engineer, learning how best to customize System Center Operations Manager 2007 R2 to monitor things not included in the SQL Server Management Pack. Based on that training, I spent tonight working on a series of monitors to check that backups have been performed within SLAs for the databases that I manage. While working on the T-SQL to pull the information back from SQL Server, I happened to "luck up" and pick an instance that had serious performance problems returning the information I wanted. It turns out that I had over 1GB of backup history data in MSDB.
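If you want to check whether you have the same problem, one quick sketch is to run sp_spaceused against the standard backup history tables in MSDB (the table names below are the built-in msdb history tables, not anything specific to my environment):

```sql
USE msdb;
GO

-- Report the row count and space used by the main backup history tables
EXEC sp_spaceused N'dbo.backupset';
EXEC sp_spaceused N'dbo.backupfile';
EXEC sp_spaceused N'dbo.backupmediafamily';
EXEC sp_spaceused N'dbo.restorehistory';
```

If the reserved space across these tables adds up to hundreds of megabytes, grooming is overdue.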
So how exactly did this happen? Well, it's easier than you might think, and you probably have the same problem in your own environment. We use a third-party tool for SQL Server backups, and while this tool is really good at handling the backups of our databases, it doesn't clean up the history of the backups it has performed. Any time a database backup completes, it is recorded inside the MSDB database. You can easily query the dbo.backupset table to get the last backup information for every database on a server. However, after 3 years this table can grow to be quite large.
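As a sketch of the kind of query I was writing, this pulls the most recent full backup date per database from dbo.backupset (type 'D' marks a full database backup):

```sql
USE msdb;
GO

-- Most recent full database backup for each database on this server
SELECT  bs.database_name,
        MAX(bs.backup_finish_date) AS last_full_backup
FROM    dbo.backupset AS bs
WHERE   bs.type = 'D'
GROUP BY bs.database_name
ORDER BY last_full_backup;
```

On an instance with a bloated history table, even a simple aggregate like this can run noticeably slowly.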
To manage the size of this table, a simple grooming process can be put in place via a SQL Agent job that executes a short script:
DECLARE @oldestdate datetime;

-- Keep 21 days of history; everything older gets purged
SET @oldestdate = GETDATE() - 21;
EXEC dbo.sp_delete_backuphistory @oldestdate;
This will purge the backup history from MSDB for any backup older than the date specified in the @oldestdate variable, in this case 21 days. On this instance, 21 days of data was 42MB instead of the 1GB that originally existed. My queries executed immediately against data of this size, and I have since created a nightly scheduled job to groom the backup history information on all of my servers.
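The nightly job can be scripted with the standard SQL Agent stored procedures rather than clicked together in Management Studio. This is a sketch under assumptions: the job and schedule names are hypothetical, and the 2:00 AM start time is just a placeholder you would adjust to land outside your backup window:

```sql
USE msdb;
GO

-- Hypothetical job name; adjust to your own naming convention
EXEC dbo.sp_add_job
     @job_name = N'Groom Backup History';

EXEC dbo.sp_add_jobstep
     @job_name      = N'Groom Backup History',
     @step_name     = N'Delete history older than 21 days',
     @subsystem     = N'TSQL',
     @database_name = N'msdb',
     @command       = N'DECLARE @oldestdate datetime;
                        SET @oldestdate = GETDATE() - 21;
                        EXEC msdb.dbo.sp_delete_backuphistory @oldestdate;';

-- Run nightly at 2:00 AM (freq_type 4 = daily, freq_interval 1 = every day)
EXEC dbo.sp_add_jobschedule
     @job_name          = N'Groom Backup History',
     @name              = N'Nightly',
     @freq_type         = 4,
     @freq_interval     = 1,
     @active_start_time = 020000;

-- Target the local server so the job actually runs
EXEC dbo.sp_add_jobserver
     @job_name = N'Groom Backup History';
```

Once this is deployed to each server, the history table stays at a steady size instead of growing forever.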