Back in August I posted the first version of a rules-driven solution for backups, index and statistics maintenance, and integrity checks. The system has generally worked well and has saved my team a huge amount of time and effort; it is coming up on its first anniversary in production.
Today I offer an update that contains a few enhancements, performance improvements and a bug fix.
To recap, this is a system or framework to manage many small databases across many instances on many servers in a policy-based/automated way. The details about how to deploy and how to use the system are in the original post, but at a high level:
- First, deploy the code across a collection of servers using the included PowerShell script DeployMaintenance.ps1. The script will create an administrative database on every instance (or use one that you provide) and populate it with the required tables, procedures, and so on. It will also create a handful of SQL Agent jobs that handle all the maintenance on all the databases.
- Visit the instances. If maintenance is already in place, disable or remove it so the work is not performed twice. If necessary, change or override this solution's default preferences in the admin database for each type of maintenance: the maintenance window times, whether to use features like differential backups or Litespeed, whether certain databases should be treated as exceptions with their own settings, and so on.
- Enable the new policy jobs in SQL Agent.
- Monitor to make sure things are running smoothly.
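As a sketch of the override step above: the actual table and column names are defined by the deployment script, so the ones here (MaintenancePreference, DatabaseException, AdminDB) are hypothetical stand-ins. The pattern is simply "update the instance-wide default, or insert a per-database exception row":

```sql
-- Hypothetical schema for illustration only; check the tables the
-- deployment script actually created in your admin database.
USE AdminDB;

-- Shift the start of the full-backup maintenance window for this instance
UPDATE dbo.MaintenancePreference
SET    PreferenceValue = '22:00'
WHERE  PreferenceName  = 'FullBackupWindowStart';

-- Treat one database as an exception with its own setting
INSERT INTO dbo.DatabaseException (DatabaseName, PreferenceName, PreferenceValue)
VALUES (N'BigImportantDB', 'UseDifferentialBackups', '1');
```

Because the jobs read these tables at run time, a change like this takes effect on the next scheduled run without touching the jobs themselves.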
The theory behind this project is to save work configuring servers (efficiency), to ensure that the maintenance is in fact deployed and working for ALL databases in the environment (quality), to ensure that it works in a truly consistent way everywhere (consistency), and to make it so that the servers don’t have to be touched by a DBA when changes happen, such as adding or dropping databases (resiliency).
The updates in this 1.1 version include:
- Better reporting of errors. The jobs now write log files to the default error log location for SQL Server.
- Better recovery from errors. The jobs now continue past some errors; for example, one failed backup will no longer stop the backup job and leave the other databases without backups.
- Performance enhancements for examining backup history and system counters (mainly % log used).
- One bug fix for the log backup logic on a server that uses both log shipping and Litespeed.
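For context on the "% log used" counter mentioned above: on SQL Server 2005 and later that counter is exposed through the sys.dm_os_performance_counters DMV, which can be read with an ordinary query instead of shelling out to DBCC SQLPERF(LOGSPACE). A minimal example:

```sql
-- Percent of transaction log currently in use, per database.
-- object_name is matched with LIKE because named instances report it as
-- MSSQL$InstanceName:Databases rather than SQLServer:Databases.
SELECT instance_name AS database_name,
       cntr_value    AS percent_log_used
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%:Databases%'
  AND  counter_name = 'Percent Log Used'
  AND  instance_name <> '_Total';
```

Polling a DMV like this is cheap, which is one reason examining these counters in a tight maintenance loop can be made fast.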
The code for the system is posted here, and you are welcome to download it and try it out. Please note that this is not a commercial solution; you may test or deploy it, but you do so at your sole risk. Edits to the code will almost certainly be required for your environment. Please read, understand, and test the code thoroughly before you even imagine using it in production. I do not want to cause you or your employer any pain. The solution is licensed under GPL 3.0: http://opensource.org/licenses/GPL-3.0. You may use and modify this code, but not sell it.
This has been run on SQL Server versions from 2005 to 2008 R2. It probably works on 2012, but I have not tested it thoroughly.
I run only case-insensitive collation; if you are on a case-sensitive system, then you have my sympathy, and you might have to edit a few things to make this work.
There is coverage for Quest Litespeed backups in the solution; other third-party backup products could be incorporated easily, but have not been. It should be possible to follow the pattern used to handle Litespeed and extend it to any other third-party stored procedures.
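The extension pattern amounts to branching between the native BACKUP statement and the vendor's stored procedure. A rough sketch, assuming Litespeed's xp_backup_database procedure (verify the parameter names against your installed Litespeed version; the database and file names are placeholders):

```sql
-- Branch between native and third-party backup based on a preference flag.
DECLARE @db sysname, @file nvarchar(260), @use_litespeed bit;
SELECT @db = N'SomeDatabase',
       @file = N'X:\Backup\SomeDatabase.bak',
       @use_litespeed = 1;

IF @use_litespeed = 1
    -- Litespeed ships this extended procedure in master
    EXEC master.dbo.xp_backup_database
         @database = @db,
         @filename = @file;
ELSE
    BACKUP DATABASE @db TO DISK = @file;
```

Supporting a different product would mean adding another branch that calls that product's stored procedure with its own parameters.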
If you haven’t done index maintenance for a while and you switch this solution on, you might blow up your transaction log. Be sure that you have plenty of log space and plenty of log backup space, and watch the process for the first few runs. This is especially true if you run Standard Edition of SQL Server with the preference for online index maintenance, because in that case the process falls back to reorganize (defrag), which generates a lot of log.
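Before enabling the jobs, you can estimate how much work (and therefore log) the first run will generate by surveying fragmentation with the standard sys.dm_db_index_physical_stats DMV. The 30% and 1,000-page thresholds below are the common rules of thumb, not values taken from this solution:

```sql
-- Indexes in the current database likely to be touched by maintenance.
-- LIMITED mode scans only the b-tree above the leaf level, so it is cheap.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id
      AND i.index_id  = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 30
  AND  ips.page_count > 1000
ORDER  BY ips.page_count DESC;
```

A long list of large, heavily fragmented indexes is your cue to provision extra log and log-backup space before the first run.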
Lastly, this is a solution for environments with many small databases, not for those with a few big, performance-intensive databases. Large databases that require detailed attention may not be suitable for a one-size-fits-all solution like this.