
The Rambling DBA: Jonathan Kehayias

The random ramblings and rantings of a frazzled SQL Server DBA

Using DDL Triggers to Dynamically Create DML Triggers

This one comes from an MSDN Forums post titled DDL trigger limitations.  In a nutshell, an application is harvesting information from the Internet and storing it in SQL Server on what appears to be a timed schedule.  Before storing the data, it checks whether the target table exists and performs a DROP/CREATE of the table.  The question posed was how to use a DDL trigger to get the data into a "historical data" table so that existing data isn't lost when the table is dropped.  Complete details are available in the post.
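For context, the application's behavior on each scheduled run looks roughly like this.  This is a hypothetical sketch; the actual table and column names used by the application are assumptions for illustration:

```sql
-- Hypothetical sketch of the application's scheduled behavior.
-- Table and column names are assumptions, not the poster's actual schema.
IF OBJECT_ID('dbo.testtable1', 'U') IS NOT NULL
    DROP TABLE dbo.testtable1
GO

CREATE TABLE dbo.testtable1
(rowid INT IDENTITY PRIMARY KEY, somecolumn VARCHAR(100), someothercolumn VARCHAR(100))
GO

-- Freshly harvested data replaces whatever was there before;
-- without intervention, the previous rows are lost at DROP TABLE.
INSERT INTO dbo.testtable1 (somecolumn, someothercolumn)
VALUES ('harvested value 1', 'harvested value 2')
GO
```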

A quick-fire solution that I came up with, while focused narrowly on the question as asked, was to use a DDL trigger for CREATE_TABLE that executes a dynamic SQL statement to create a DML trigger FOR INSERT on the newly created table.  The DML trigger then copies the rows from the inserted table into the "historical data" table for permanent storage.  It's really not that complex, and it solves the immediate problem.

CREATE TABLE dbo.persistedtable
(rowid INT, somecolumn VARCHAR(100), someothercolumn VARCHAR(100))
GO

CREATE TRIGGER new_table_add_trigger
ON DATABASE
FOR CREATE_TABLE
AS
SET NOCOUNT ON

DECLARE @data XML
SET @data = EVENTDATA()

DECLARE @objname SYSNAME
SET @objname = @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)')

DECLARE @sql VARCHAR(MAX)
SET @sql = 'create trigger ' + @objname + '_inserted
on ' + @objname + '
after insert
as
begin

insert into persistedtable (rowid, somecolumn, someothercolumn)
select rowid, somecolumn, someothercolumn
from inserted

end'

EXECUTE (@sql)
GO
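One thing worth noting: the trigger above extracts only ObjectName, so the dynamic CREATE TRIGGER assumes the new table lands in the default schema.  The EVENTDATA() XML for a CREATE_TABLE event also carries a SchemaName element, so if the application might create tables outside dbo, a slightly safer extraction would look something like this (a sketch; EVENTDATA() only returns data inside a trigger):

```sql
-- Sketch: extract both schema and object name from EVENTDATA()
-- and quote them, so the generated trigger targets the right table
-- even when it is created outside dbo.
DECLARE @data XML
SET @data = EVENTDATA()

DECLARE @objname NVARCHAR(300)
SET @objname = QUOTENAME(@data.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)'))
    + '.' + QUOTENAME(@data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'))
```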

CREATE TABLE testtable1
(rowid INT IDENTITY PRIMARY KEY, somecolumn VARCHAR(100), someothercolumn VARCHAR(100))
GO

INSERT INTO testtable1 (somecolumn, someothercolumn) VALUES ('test1', 'test1')
GO

DROP TABLE testtable1
GO

CREATE TABLE testtable1
(rowid INT IDENTITY PRIMARY KEY, somecolumn VARCHAR(100), someothercolumn VARCHAR(100))

INSERT INTO testtable1 (somecolumn, someothercolumn) VALUES ('test2', 'test2')
GO

SELECT * FROM persistedtable

However, what I didn't realize when I wrote my response was that the original poster had added a follow-up with further details that changed my perception of how best to solve the problem.  Essentially, there are three major problems with the above solution, even though it works.

  1. Triggers are expensive, and are often the last thing you think about when troubleshooting performance problems and irregular activity in your database.
  2. The insert transactions are going to be slower because you are doing a double insert for each record, so expect a performance hit. 
  3. The dynamic SQL used to create the FOR INSERT trigger on the table is safe only because it concatenates controlled data (the object name from EVENTDATA()) into the SQL command executed by EXECUTE.  Using dynamic SQL in this manner with unchecked inputs is very dangerous because it opens the door to SQL injection.
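Even with controlled data, defense in depth is cheap here.  A sketch of the same concatenation hardened with QUOTENAME (the literal table name below stands in for the value the trigger would pull from EVENTDATA()):

```sql
-- Sketch: quote the object name before concatenating it into dynamic SQL.
-- QUOTENAME wraps the name in brackets and escapes embedded ] characters,
-- so a maliciously named table cannot break out of the identifier position.
DECLARE @objname SYSNAME
SET @objname = N'testtable1'   -- would come from EVENTDATA() inside the trigger

DECLARE @sql NVARCHAR(MAX)
SET @sql = N'create trigger ' + QUOTENAME(@objname + N'_inserted') + N'
on ' + QUOTENAME(@objname) + N'
after insert
as
begin
insert into persistedtable (rowid, somecolumn, someothercolumn)
select rowid, somecolumn, someothercolumn
from inserted
end'

EXECUTE (@sql)
```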

Since the process runs on a schedule, if I were actually implementing a change of this nature and couldn't change the application behavior itself, I would instead opt for a slightly more complex, but also more robust, solution.  First I would create a table with a single column to hold the name of each table created by the application, and change the DDL trigger so that all it does is insert that table name.  Then a small stored procedure can read the names from this table, if any exist, and copy the data into the "historical data" table.

CREATE TABLE ddl_changed_tables
(tablename sysname)
GO

CREATE TRIGGER new_table_add
ON DATABASE
FOR CREATE_TABLE
AS
SET NOCOUNT ON

DECLARE @data XML
SET @data = EVENTDATA()

INSERT INTO ddl_changed_tables (tablename)
VALUES (@data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)'))
GO

CREATE PROCEDURE migrate_new_data
AS
BEGIN

DECLARE @sql VARCHAR(MAX)
DECLARE @tablename SYSNAME

SELECT @tablename = MIN(tablename)
FROM ddl_changed_tables

WHILE @tablename IS NOT NULL
BEGIN

SET @sql = 'insert into persistedtable (rowid, somecolumn, someothercolumn)
select rowid, somecolumn, someothercolumn
from ' + @tablename

EXECUTE (@sql)

DELETE ddl_changed_tables
WHERE tablename = @tablename

SELECT @tablename = MIN(tablename)
FROM ddl_changed_tables

END

END
GO
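To exercise the second approach end to end, assuming the DDL trigger from the first example has been dropped so only new_table_add fires, a run might look like this (table name and values are illustrative):

```sql
-- Create a table (the DDL trigger logs its name), load it, then migrate.
CREATE TABLE testtable2
(rowid INT IDENTITY PRIMARY KEY, somecolumn VARCHAR(100), someothercolumn VARCHAR(100))
GO

INSERT INTO testtable2 (somecolumn, someothercolumn)
VALUES ('test3', 'test3')
GO

-- In production this would run from a scheduled job rather than ad hoc.
EXEC migrate_new_data
GO

SELECT * FROM persistedtable
```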

The benefit here is that the copy doesn't happen during the initial insert, so the performance of the initial process is not impacted.  The downside is that, depending on when the scheduled process actually runs, you could catch a table in a half-loaded state.  If I knew more about the problem, this could probably be overcome.
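One way to reduce the half-load risk, assuming the application's loads finish within a known window, is to record when each table was created and only migrate tables older than that window.  This is a sketch; the 10-minute window is an assumption to be tuned to the actual schedule:

```sql
-- Sketch: track when each CREATE_TABLE event happened so the migration
-- can skip tables that may still be loading.
CREATE TABLE ddl_changed_tables
(tablename SYSNAME, created_time DATETIME DEFAULT GETDATE())
GO

-- Then, in migrate_new_data, restrict the candidate set:
-- SELECT @tablename = MIN(tablename)
-- FROM ddl_changed_tables
-- WHERE created_time < DATEADD(MINUTE, -10, GETDATE())
```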

In the end, either of these solutions can be used to solve the problem, but I prefer to avoid the use of DML triggers in design if another process can be developed to accomplish the same task.

Published Thursday, May 28, 2009 7:24 AM by Jonathan Kehayias
