THE SQL Server Blog Spot on the Web


Eric Johnson

My Thoughts on SQL Server and Database Related Technologies.

  • Calling Child Packages in SSIS

    Calling an SSIS package from another SSIS package is, on the surface, a simple task. You just use the Execute Package Task and point it to a package on a SQL Server or somewhere in the file system, as shown below.

    [Screenshot: the Execute Package Task editor pointing to the child package]

    It is pretty simple, right? But what if you need the parent package to pass information to the child package? That is accomplished by using variables in the parent package and package configurations in the child package. Let’s look at an example. I have created two packages, one named Parent and one named Child. In the parent package I have added a variable called MessageToChild, as shown.

    [Screenshot: the MessageToChild variable in the parent package]

    Then I added the execute package task we looked at earlier to call the child package. Within the child package, I added a variable called Message as shown below.

    [Screenshot: the Message variable in the child package]

    Okay, the next step is to get the parent variable’s value into my child variable; this is where package configurations come into play. In the child package, I added a Parent Package Variable package configuration which uses the MessageToChild variable to populate the child’s Message variable.

    [Screenshot: adding a Parent Package Variable package configuration]

    [Screenshot: the configuration mapped to the child’s Message variable]

    When the Parent package calls the Child package, the variable value is passed. Now for a quick test, I added a Script Task to the child package with the following code:

    MsgBox(Dts.Variables.Item("Message").Value.ToString, MsgBoxStyle.OkOnly, "Message From Parent")

    This will pop the following message box when the parent package is run and subsequently calls the child package.

    [Screenshot: the message box displaying the value passed from the parent]

    Any values that you want to pass have to be stored in variables in the parent package. The child package on the other hand can use those values to set any property with a compatible data type. You can set file paths, expression values, connection strings, and, as we saw, variable values. Obviously there are other ways to get packages to “talk” to one another, like writing data to a table or file, but this is a quick and efficient way to get a parent package to send simple bits of information to child packages.

  • Virtual Classes

    I have two online classes starting on November 1st that I recorded on the SSWUG.org vClass platform.

    The first is “Introduction to SQL Server Replication” and is a 12 session class on all things replication. Cost is $199. For more information or to register, click here: http://www.vconferenceonline.com/event/home.aspx?id=107

    The second class is “SQL Server Integration Services” and is also 12 sessions. This class covers basic to advanced topics on SSIS. Cost is $199. For more information or to register, click here: http://www.vconferenceonline.com/event/home.aspx?id=106

    I hope you find these valuable. If you can’t make it in November, the classes will start again on December 1st.

  • The SSIS Pivot Transformation

    The Pivot transformation is one of the most useful transformations in SSIS, and also one of the biggest pains to configure. It comes with only one editor, the Advanced Editor, so no pretty GUI here. Let’s walk through an example; hopefully it will help you learn how to use Pivot. First off, let’s define the source data and the destination of our pivoted data. We will work from a small set of survey data where each person was asked four questions; see the table below.

    Question          Answer         SurveyID
    --------------    ------------   --------
    Name              Eric Johnson   1
    Height            5'11"          1
    Favorite Color    Red            1
    Age               31             1
    Name              Josh Jones     2
    Height            5'8"           2
    Favorite Color    Black          2
    Age               34             2
    Name              Eric Beehler   3
    Height            5'10"          3
    Favorite Color    Green          3
    Age               35             3

    As you can see, there are 3 surveys, each with four questions. We want to pivot this data so that there is one row of data for each survey and the answers are in columns, like the table below.

    SurveyID   Name           Height   FavoriteColor   Age
    --------   ------------   ------   -------------   ---
    1          Eric Johnson   5'11"    Red             31
    2          Josh Jones     5'8"     Black           34
    3          Eric Beehler   5'10"    Green           35

    This is where our Pivot Transformation comes into play. Each source column needs to have a PivotUsage assigned to it in order to tell SSIS how to handle the column during the pivot. The valid values are as follows.

    • 0 - The column is not pivoted
    • 1 - The column is part of the set key. This identifies the grouping of data. In our example, SurveyID will be a 1 to define one row for each survey.
    • 2 - The column is a pivot column. The data in this column will be used to provide the new column names. In our example, each question will become a column, so Question will have a value of 2.
    • 3 - The values from this column are placed in columns that are created as a result of the pivot. In our case, each Answer will be placed in the new question columns, so Answer will have a value of 3.

    Now that we have the PivotUsage values, we can configure the Pivot transformation. When you open the Pivot transformation, the first thing you need to do is define the input columns that you want to work with. This is done on the Input Columns tab, as shown. As you can see, we have selected Question, Answer, and SurveyID. We are ignoring the id column because it is a tracking column in the source table and is not needed in our destination.

    [Screenshot: the Pivot transformation Input Columns tab]

    Next we switch to the Input and Output Properties tab. Here we need to expand Pivot Default Input and then Input Columns. Now we need to set the PivotUsage value for each column as we previously discussed. Question will be a 2, Answer will be 3 (as shown), and SurveyID will be 1.

    [Screenshot: the PivotUsage property set on the Answer input column]

    Now here is where it gets a little weird. We have to add columns to output for each column that will exist AFTER the pivot. In our case, we will have the five columns we saw earlier in our example destination. To add the columns, expand Pivot Default Output and select Output Columns. Next, click the Add Column button 5 times. Then you will need to go through each column and set up a more logical name than the default of Column, Column 1, etc… Select each column and change the Name property so you end up with five columns named SurveyID, Name, Height, FavColor, and Age, as shown below (Note that I called the Favorite Color column FavColor to save space).

    [Screenshot: the five renamed output columns]

    So now we have an input and an output; next we need to connect the two. This is done by setting the SourceColumn property on each output column to the LineageID property of the correct input column. This tells the transformation where each output column will get its data. Obviously, the first output column, SurveyID, comes from SurveyID. The values for the other four output columns all come from the Answer input column. So for SurveyID, we find the input column’s LineageID on the property page here:

    [Screenshot: the LineageID property of the SurveyID input column]

    Now you will need to write down that value or remember it and then manually enter it into the SourceColumn property of the SurveyID output column here:

    [Screenshot: the SourceColumn property of the SurveyID output column]

    Repeat this step and enter the LineageID from the Answer input column into each of the other output column’s SourceColumn property.

    We are almost there, but we have one more step. For each of our new output columns, Name, Age, FavColor, and Height, we need to tell the transformation which values in the Question column signify that the data belongs in that column. This is done by setting the PivotKeyValue for each of these columns to the text in the Question column that indicates the answer belongs in that column. So for the Name column, we enter “Name” into the PivotKeyValue, as shown. This tells the transformation to take the value from the Answer input column and place it in our Name output column any time it encounters the text “Name” in the Question column.

    [Screenshot: the PivotKeyValue property of the Name output column]

    That’s it; now we can work with the pivoted data in our SSIS package just like any other data set. I know that’s a lot and it can be confusing, but if you play with it a few times you will get it. Good luck and happy coding.
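    As an aside, if your source data already lives in a SQL Server table, a similar result can be produced in the source query itself using T-SQL's PIVOT operator, which can spare you the transformation altogether. This is only a sketch; the table name dbo.SurveyAnswers is made up for illustration, and the columns match the example above.

    ```sql
    SELECT SurveyID, [Name], [Height], [Favorite Color] AS FavColor, [Age]
    FROM (SELECT SurveyID, Question, Answer
          FROM dbo.SurveyAnswers) AS src
    PIVOT (MAX(Answer)
           FOR Question IN ([Name], [Height], [Favorite Color], [Age])) AS pvt;
    ```

    Either way, the result is one row per SurveyID with the answers spread across columns.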

  • TSQL TRY…CATCH

    Transact-SQL is a great language for data manipulation, but it has its weaknesses. Unlike “real programming languages,” T-SQL is confined to procedural code. Sure, you can build “modules” by using stored procedures and functions, but for the most part, all of the work will be procedural. In the past it also lacked error handling syntax, leaving you to write GOTO statements and labels to control the flow. Well, if you hadn’t noticed, SQL Server 2005 introduced TRY…CATCH blocks to T-SQL. While the implementation in T-SQL is not as robust as that in object-oriented languages, it’s a good start and it’s better than GOTO statements. Let’s take a look at how TRY…CATCH works. Basically, you wrap some portion of your T-SQL code in a TRY block and handle any errors that occur in a CATCH block as shown below.

    BEGIN TRY
        SELECT * FROM dbo.SALES
        SELECT 1/0
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE()
    END CATCH

    In the example, we have two SELECT statements in our TRY block, and if either of these encounters an error, control will be passed to our CATCH block. In this case, SELECT 1/0 should cause a divide-by-zero error and pass control to our CATCH block. Once in the CATCH block, you have access to several functions that provide details of the error.

    • ERROR_MESSAGE()
    • ERROR_NUMBER()
    • ERROR_SEVERITY()
    • ERROR_STATE()
    • ERROR_PROCEDURE()
    • ERROR_LINE()

    These functions can be used to log details of the error that occurred or simply return information to the user that executed the code. If your TRY block completes without error, then control will be passed to the first line after your CATCH block.
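    To make that concrete, here is a minimal sketch that raises the same divide-by-zero error as above and returns all of the error functions at once; note that ERROR_PROCEDURE() returns NULL when the error did not occur inside a stored procedure or trigger.

    ```sql
    BEGIN TRY
        SELECT 1/0
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER()    AS ErrNumber,
               ERROR_SEVERITY()  AS ErrSeverity,
               ERROR_STATE()     AS ErrState,
               ERROR_PROCEDURE() AS ErrProcedure,
               ERROR_LINE()      AS ErrLine,
               ERROR_MESSAGE()   AS ErrMessage
    END CATCH
    ```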

    That’s pretty much it. There is some nesting ability with TRY…CATCH but you won’t see any other control flow options like RESUME or THROW like you would in the other programming languages. It’s certainly not a perfect solution, but it is a start.

  • Working with Text Data Sources

    A common source for SSIS packages is the flat file. These are just plain text files that are delimited by some character such as a comma, pipe, or tab. The problem is that when you connect SSIS to a flat file, it assumes a few things about the file, specifically the data types of all the columns. Flat files do not contain metadata, so SSIS has no good way to know the data types. Further, SSIS does not look at the data and attempt to make an educated guess. What you end up with is every column configured as a string of length 50. This can cause conversion issues if your data is not a string, or truncation errors if you have more than 50 characters. Luckily, there is a solution. Once you have set up your flat file connection, select the Advanced page in the properties dialog box as shown below.

    [Screenshot: the Advanced page of the Flat File Connection Manager]

    From here you can select each column and then change the data type and its associated properties such as length, precision, and scale. You can also change the column’s name which comes in handy when it is not included in the first row of your file. Once you have made all the appropriate changes, SSIS will know what type of data you have and the names of all the columns. It is much easier to make these changes in the connection than to try and use data conversion transformations later in your package. So do yourself a favor and spend a few minutes cleaning up your text data sources, you will be much happier and see fewer data problems later on.

  • Data Conversion in SSIS

    When it comes to converting data from one type to another in SSIS, you have a few choices of methods. We are going to look at a few options and discuss the differences. With all of these, we are assuming that the input data type is compatible with the new output data type. In other words, we are not converting the word “one” to the number 1; that would require fancier logic. So first up, there is the appropriately named Data Conversion transformation. This transformation will take one or more input columns and create new columns with the desired data type. It is simple to use: just select your input column or columns, name your output column, and choose the new data type, as shown.

    [Screenshot: the Data Conversion Transformation Editor]

    You can also use the Derived Column transformation to convert data. This transformation is also pretty simple: you select an input column and then choose whether to replace that column or create a new output column. Then you apply an expression to come up with the data for the output column. This expression can be a cast, as shown.

    [Screenshot: the Derived Column Transformation Editor with a cast expression]

    As you can see, we have created a new column with a new data type, functionally the same as the Data Conversion transformation. One thing to note: you can also choose to replace a column, but when you do, SSIS wants the new column to maintain the original data type. As you can see in the next screenshot, even though we have applied the DT_I8 cast, the data is still a four-byte signed integer. This is just something to be aware of.

    [Screenshot: the replaced column still showing a four-byte signed integer]

    So why use one over the other? Well the Data Conversion transformation will just take an input, convert the type and provide a new output column. If you use the Derived Column transformation you get to apply an expression to the data, which allows you to do more complex conversions or perform additional manipulations on the data. In fact, using IF...THEN logic in an expression, you could convert the word “one” to the number 1 with a Derived Column transformation.
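    For example, a Derived Column expression along these lines would handle the “one” to 1 case; this is just a sketch, and the column name NumberWord is made up for illustration. A real package would need a branch for each word, since any value that cannot be cast to an integer would still fail.

    ```
    NumberWord == "one" ? 1 : (DT_I4)NumberWord
    ```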

    By no means are these the only two methods for conversion, but they are the two most common transformations used to convert data. You can also build custom script transformations or even use CAST or CONVERT statements in your source T-SQL if you wish. In the end, the method you choose will depend on your specific requirements, but it is good practice to understand all the options before making a call on which one is best.

  • SSIS Virtual Class

    I recorded a virtual SSIS class with the good folks over at SSWUG, and the first airing of the class will be May 15th. This is 100% online so you can do it on your own time and from anywhere. The class will run monthly and I will be available for questions throughout. You get the following 12 sessions on SSIS, each about an hour.

    Session 1: The SSIS Basics
    Session 2: Control Flow Basics
    Session 3: Data Flow - Sources and Destinations
    Session 4: Data Flow - Transformations
    Session 5: Advanced Transformations
    Session 6: Advanced Data Sources and Destinations
    Session 7: Using Scripts and Variables in SSIS
    Session 8: Expressions
    Session 9: Package Configurations
    Session 10: Performing Database Maintenance
    Session 11: Debugging and Troubleshooting
    Session 12: Deploying, Transactions, and More

    I start at the beginning for those of you that have never used SSIS and move quickly into more advanced topics. I feel it’s a great, informative class. For more details, to register, or to purchase the DVD version, visit http://www.vconferenceonline.com/shows/vclass/ssis.asp. As I said, it will run monthly, so if you can’t make it in May, the June class is just around the corner.

    Also, there is a discount code for listeners of my podcast. I won’t make it too easy, but if you watch one of the latest episodes you should be able to find a discount code worth $10 off.

  • Getting Dynamic in SSIS Queries

    When you start working with SQL Server and SSIS, it isn’t long before you find yourself wishing you could change bits of SQL queries dynamically. Most commonly, I see people who want to change the date portion of a query, for example to limit the query to the last 30 days. This can be done using a combination of expressions and variables. I will do this in two parts: first I will build a variable that always contains the 1st day of the previous month, and then I will dynamically append this date to a query and use it in my source.

    So let’s add a variable called QueryDate to a package to hold the 1st day of last month. First, set its EvaluateAsExpression property to True. Next we will edit the expression to build our date. We are going to do this by using some date functions to rip a date apart and then concatenate the pieces back together to get the 1st day. First we need to figure out what last month was and return just the month’s number. We do that by using DATEPART and DATEADD as shown below.

    DATEPART("mm",DATEADD("mm", -1, getdate()))

    This starts with DATEADD to subtract a month from today and then DATEPART to extract the month number. So if this runs in March, this portion of our expression returns 2. We also need to do a similar thing to get the year for last month, as shown.

    DATEPART("yyyy",DATEADD("mm", -1, getdate()))

    In DATEADD we still only subtract one month, but now in DATEPART we grab the year. Both of these pieces of code return an integer value, but we need to build a string that can be converted back into a date. So we need to add a cast statement to each of these snippets of code to make them string compatible. This is pretty simple; just prepend the code as shown below to both date pieces we have already built.

    (DT_STR, 50,1252)

    Okay, the hard part is over. Because we always want the 1st of last month, the day portion can be hard coded. Now all we need to do is concatenate the pieces together to get our final date, as shown.

    (DT_STR, 50,1252)DATEPART("mm",DATEADD("mm", -1, getdate()))
    + "\\1\\"
    + (DT_STR, 50,1252)DATEPART("yyyy",DATEADD("mm", -1, getdate()))

    The backslash is an escape character in SSIS expressions, telling the expression that the next character has special meaning, such as “\n”, which means new line. That is why we have to use “\\” in our code to tell the expression we really want a single backslash in our output. Below is a screenshot of this expression in SSIS and the value you get from clicking Evaluate Expression.

    [Screenshot: the QueryDate expression and its evaluated value]

    I ran this in March so the date returned was what I expected. Now all we have to do is put this into a T-SQL query. Just create a second variable (I called mine Query) and build a dynamic string to concatenate your new date variable into your T-SQL code. Here is an example that does just that.

    "select * from dbo.Employee where active_date > '"
    + @[User::QueryDate] + "'"

    As you can see, we have the T-SQL syntax with the QueryDate variable concatenated in the middle. Once again, here is a screenshot of the expression built with the expression evaluated on the bottom.

    [Screenshot: the Query expression with the evaluated statement at the bottom]

    There is one last step to use the query you built. Open your source in the Data Flow and set the Data Access Mode to SQL Command From Variable, then select your query variable in the dropdown. If you did everything correctly, you should see your SELECT statement in the Variable Value window, as shown below. If you don’t see your query, make sure both variables have the EvaluateAsExpression property set to True.

    [Screenshot: the source set to SQL Command From Variable]

    Just a couple of quick notes. This only works with sources that allow you to write a query to retrieve data: SQL Server, Oracle, Access, and even Excel. It will not work with sources like text files that don’t allow you to query their contents. If you want to do something similar with text files, you could BCP the data into a SQL Server staging table first and use that as your source; you just need to be creative. Also, this whole process could be done in one variable; I just like separating the two for maintainability. This allows me to focus on getting the date correct in one variable and the query correct in the other. The sky is the limit in how you build dynamic queries, so use your imagination.
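    For reference, the single-variable version simply combines the two expressions built earlier into one; this is the same code as above, just merged:

    ```
    "select * from dbo.Employee where active_date > '"
    + (DT_STR, 50,1252)DATEPART("mm",DATEADD("mm", -1, getdate()))
    + "\\1\\"
    + (DT_STR, 50,1252)DATEPART("yyyy",DATEADD("mm", -1, getdate()))
    + "'"
    ```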

  • New T-SQL Functionality in SQL Server 2008

    In my most recent posts I have looked at a few of the new features offered in T-SQL in SQL Server 2008. In this post, I want to take a closer look at some of the smaller additions, but additions that are likely to pack a big punch in terms of efficiency. First let’s talk a little about compound operators. This is a concept that has been around in programming languages for a long time, but has just now found its way into T-SQL. For example, the += operator will add the values to the current variable and then assign the new value to the variable. In other words, SET @ctr += 1 is functionally the same as SET @ctr = @ctr + 1. This shorthand is a little quicker to type and offers a cleaner piece of finished code. The complete list of compound operators is below.

    += Add EQUALS
    -= Subtract EQUALS
    *= Multiply EQUALS
    /= Divide EQUALS
    %= Modulo EQUALS
    &= Bitwise AND EQUALS
    ^= Bitwise Exclusive OR EQUALS
    |= Bitwise OR EQUALS
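    In use, the compound operators look like this (a quick sketch):

    ```sql
    DECLARE @ctr int
    SET @ctr = 100
    SET @ctr += 1    -- @ctr is now 101
    SET @ctr *= 2    -- @ctr is now 202
    SELECT @ctr
    ```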

    Starting in SQL Server 2008, you can now set a variable’s value at the same time you declare it. For example, the following line of code declares a variable named @ctr of type int and sets its value to 100. This was previously only possible with parameters, but now it works with all variable declarations.

    DECLARE @ctr int = 100

    Last, but certainly not least, the INSERT statement now accepts multiple row value lists in the VALUES clause. In other words, you can insert multiple rows with a single INSERT statement. The following example shows the old syntax and the new multi-row INSERT syntax.

    Pre-SQL Server 2008

    INSERT SALES (customer_id, year, sales_amt) VALUES (1,2007,25000)
    INSERT SALES (customer_id, year, sales_amt) VALUES (1,2008,22000)
    INSERT SALES (customer_id, year, sales_amt) VALUES (1,2009,15000)
    INSERT SALES (customer_id, year, sales_amt) VALUES (2,2007,35500)
    INSERT SALES (customer_id, year, sales_amt) VALUES (2,2008,56800)
    INSERT SALES (customer_id, year, sales_amt) VALUES (2,2009,65600)

    SQL Server 2008 Multi-Row INSERT

    INSERT SALES (customer_id, year, sales_amt)
    VALUES (1,2007,25000),
           (1,2008,22000),
           (1,2009,15000),
           (2,2007,35500),
           (2,2008,56800),
           (2,2009,65600)

    We certainly haven’t covered everything that has been added to T-SQL in SQL Server 2008, but these are a few of the more useful little additions.

  • Script Data in SQL Server 2008

    Here is a feature that I have been waiting for since I started working with SQL Server 6.5: the ability to script the data from a table. When you deploy a SQL Server database, you often need to include data in several tables. This can be lookup values, configuration tables, or just seed data. In the past, you had to manually write a script to insert the data. You could also write a creative SELECT statement that included hard coded text to build the ‘INSERT INTO……’ logic into a string and then concatenate the data values. Both of these methods are a bit of a pain. With SQL Server 2008, the Generate Scripts functionality in SSMS allows us to script the data out of a table into a batch of INSERT statements. Let’s walk through this in detail.

    Right-click a database and select Tasks | Generate Scripts. This opens the script wizard, as shown, that you have used for all your SQL Server schema scripting needs. Click Next to get rolling.

    [Screenshot: the Generate Scripts wizard]

    Select the database that you want to script the data from and click Next. This brings you to the Script Options page, and this is where the magic happens. If you scroll down to Table/View Options, as shown, you will see all the script options. Among these is Script Data. Set it to True and specify any other options you want for your script. When you are ready, click Next.

    [Screenshot: Table/View Options with Script Data set to True]

    From here, the wizard is the same as if you’re not scripting data. You select the object types (tables, views, stored procedures) you want to script followed by screens that let you choose all the specific objects. When you complete the wizard, your script will now include INSERT statements based on the data in the table. Here is an example for the table I scripted.

    INSERT [dbo].[Employee] ([employeeid], [firstname], [lastname], [birthdate], [gender], [managerid], [departmentid], [active_date], [inactive_date])
    VALUES (5, N'Kevin', N'Brown', CAST(0xD0030B00 AS Date), N'M', 20, 2, CAST(0x52320B00 AS Date), NULL)

    INSERT [dbo].[Employee] ([employeeid], [firstname], [lastname], [birthdate], [gender], [managerid], [departmentid], [active_date], [inactive_date])
    VALUES (8, N'Rob', N'Walters', CAST(0x2EF20A00 AS Date), N'M', 5, 2, CAST(0x52320B00 AS Date), NULL)

    Hopefully this saves you some work the next time you need to include insert statements in a deployment script.

  • SQL Server 2008's MERGE Statement

    SQL Server 2008 provided us with some new T-SQL syntax, and the piece I am most excited about is the new MERGE statement. Gone are the days of IF...THEN logic to decide whether a row needs to be inserted, updated, or deleted. MERGE allows you to take care of the logic and the data modification all in one shot. What's more, you can compare entire record sets all at once instead of going row by row. Here is a quick example of using MERGE.

    MERGE tbl_address AS current_addresses
    USING (
        SELECT customer_objid = address_label, addressline1, addressline2, city, region, country, zipcode, is_deleted
        FROM @addresses
    ) AS source_addresses (address_label, addressline1, addressline2, city, region, country, zipcode, is_deleted)
    ON (current_addresses.address_label = source_addresses.address_label)
    WHEN NOT MATCHED THEN
        INSERT (address_label, addressline1, addressline2, city, region, country, zipcode)
        VALUES (source_addresses.address_label, source_addresses.addressline1, source_addresses.addressline2, source_addresses.city, source_addresses.region, source_addresses.country, source_addresses.zipcode)
    WHEN MATCHED AND source_addresses.is_deleted = 1 THEN
        DELETE
    WHEN MATCHED THEN
        UPDATE SET address_label = source_addresses.address_label, addressline1 = source_addresses.addressline1, addressline2 = source_addresses.addressline2, city = source_addresses.city, region = source_addresses.region, country = source_addresses.country, zipcode = source_addresses.zipcode;

    The USING section defines the "new" data, in this case a table variable. The ON section defines the join between the new and the existing data. Finally, you can have a series of MATCHED clauses that do things like insert WHEN NOT MATCHED, update WHEN MATCHED, or delete WHEN MATCHED and some other value indicates a delete. The possibilities are endless and the syntax is pretty clean. So take the time to experiment with MERGE; it will save you some effort when conditionally updating and inserting data.

  • Using SQL Server 2008's EXCEPT and INTERSECT Statements

    SQL Server 2008 shipped with several enhancements to the T-SQL language. In a recent blog, we looked at the new MERGE syntax; today we are going to look at the new EXCEPT and INTERSECT operators. Both of these allow you to compare the results of two queries. Using EXCEPT, you can find all the rows that exist in one query but not the other. Using INTERSECT, you can find all the rows that are the same in both queries. This makes more sense with an example, so let’s take a look. I took a copy of the HumanResources.Employee table from AdventureWorks and made two copies called Employee1 and Employee2. Then, to give us different results, I ran the following delete statements.

    DELETE FROM Employee1 WHERE EmployeeID < 10
    DELETE FROM Employee2 WHERE EmployeeID > 300

    This gives us two tables that have matching records as well as unique records that the other table doesn’t have. Now we can explore the statements. First let’s run an EXCEPT comparison.

    SELECT * FROM Employee1
    EXCEPT
    SELECT * FROM Employee2

    This will return the rows in Employee1 that are not also found in Employee2, in this case all the records with an EmployeeID that is greater than 300. Reversing the location of the SELECT statements would return all the rows in Employee2 that were not also found in Employee1, all rows with an EmployeeID less than 10.

    Next let’s use INTERSECT. The following statement will return all the records that are contained in both queries, in our case all records with an EmployeeID between 10 and 300.

    SELECT * FROM Employee2
    INTERSECT
    SELECT * FROM Employee1

    These are great little additions to T-SQL that allow you to do some quick comparison work on your result sets. With a few twists, you could even write a query that returns all rows from both tables with no match in the other table. Next time you find yourself with data to compare, give EXCEPT and INTERSECT a shot.
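    That "rows from both tables with no match in the other" query can be sketched by running EXCEPT in both directions and combining the results:

    ```sql
    (SELECT * FROM Employee1
     EXCEPT
     SELECT * FROM Employee2)
    UNION ALL
    (SELECT * FROM Employee2
     EXCEPT
     SELECT * FROM Employee1)
    ```

    With our sample data, this returns the rows with an EmployeeID greater than 300 plus the rows with an EmployeeID less than 10.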

  • The Trouble with SSIS Sorting

    Many SSIS transformations, such as the Pivot transformation and the Data Profiling task, require that the data being fed into them be sorted. Without sorting first, some transformations will throw an error and refuse to run, while others will run but produce unexpected results. The answer is to sort the data before it is fed into the transformation; the problem is that this isn’t as simple as you might think.

    The obvious way to accomplish this is the Sort transformation in SSIS. It will get the data sorted, but the performance might not be what you expect: the Sort transformation does all of its work in memory, so with large data sets the sort can take a long time and consume a lot of memory.

    The better solution is to sort the data at the source using an ORDER BY clause. That alone, however, won’t get it done; you also have to tell SSIS that the data is sorted and how it is sorted. This is done in the Advanced Editor of your data source. First, set the IsSorted property of the source’s output to True. Next, set the SortKeyPosition property of each column you specified in your ORDER BY clause: 1 for the first, 2 for the second, and so on. A SortKeyPosition value of 0 is the default and means the column is not used in the sort. That’s all there is to it; using this method you can sort your data so the SSIS transformations are happy, without hours of waiting and memory hogging.
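    For example, if the source query were the following (the table and column names are made up for illustration):

    ```sql
    SELECT SurveyID, Question, Answer
    FROM dbo.SurveyAnswers
    ORDER BY SurveyID, Question
    ```

    then in the Advanced Editor you would set IsSorted to True on the output, SortKeyPosition to 1 on SurveyID, SortKeyPosition to 2 on Question, and leave Answer at 0.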

  • Understanding the SSIS Package Protection Level

    One property of all SSIS packages that you must understand is the ProtectionLevel. This property tells SSIS how to handle sensitive information stored within your packages. Most commonly this is a password stored in a connection string. Why is this information important? If you don’t set the ProtectionLevel correctly, the package may become unusable. Other developers may be unable to open the package or the package may fail when you go to execute it. Understanding these options lets you get out in front of possible problems and will help you to fix an issue if a problem crops up. In a perfect world, you would not need to store sensitive data, but each and every environment is different. Let’s look at each of the ProtectionLevel options.

    DontSaveSensitive

    When the package is saved, sensitive values will be removed. This will result in passwords needing to be supplied to the package, through a configuration file or by the user.

    EncryptSensitiveWithUserKey

    This will encrypt all sensitive data in the package with a key based on the current user profile. The sensitive data can only be opened by the user that saved it. If another user opens the package, all sensitive information will be replaced with blanks. This is often a problem when a package is sent to another user to work on.

    EncryptSensitiveWithPassword

    Sensitive data will be saved in the package and encrypted with a supplied password. Every time the package is opened in the designer, you will need to supply the password in order to retrieve the sensitive information. If you cancel the password prompt, you will be able to open the package but all sensitive data will be replaced with blanks. This works well if a package will be edited by multiple users.

    EncryptAllWithPassword

    This works the same as EncryptSensitiveWithPassword except that the whole package will be encrypted with the supplied password. When opening the package in the designer, you will need to specify the password or you won’t be able to view any part of the package.

    EncryptAllWithUserKey

    This works the same as EncryptSensitiveWithUserKey except that the whole package will be encrypted. Only the user that created the package will be allowed to open the package.

    ServerStorage

    This option relies on SQL Server database roles to control access to the package rather than encrypting its contents. It will only work if the package is saved to the msdb database on an SSIS server for execution.

    So that’s it. This property is pretty basic, but it is important to understand so that you can spare yourself unnecessary frustration.

  • Troubleshooting a Failed Maintenance Plan

    I recently ran into an odd little problem with a Maintenance Plan that I wanted to share. I had a plan running the Check Database Integrity Task that suddenly started failing. Nothing about the databases had changed, and if you ran a manual DBCC CHECKDB, all the databases came back clean. The job history showed only a meaningless, truncated message:

    Executing query "DECLARE @Guid UNIQUEIDENTIFIER      EXECUTE msdb..sp...".: 100% complete  End Progress  DTExec: The package execution returned DTSER_FAILURE (1).  Started:  10:14:30 AM  Finished: 10:14:31 AM  Elapsed:  1.185 seconds.  The package execution failed.  The step failed.

    When looking at the Maintenance Plan history directly, I had an error that stated: Alter failed for Server 'ejohnsonmobile'. Nothing at this point had provided any detail pointing to a possible fix. The next step was to run a Profiler trace to see what was really going on. When setting up a trace to look for errors, be sure to include the User Error Message and Exception events. Since a Maintenance Plan is just an SSIS package, the trace will show you that there is a lot more going on than just the CHECKDB. When looking for errors in the trace, I noticed an exception just after the following query was run:

    EXEC sys.sp_configure N'user options', 0 RECONFIGURE

    The exception this caused was:

    Error: 5808, Severity: 16, State: 1
    Ad hoc update to system catalogs is not supported.

    That’s a little odd: the query was modifying the User Options configuration value, so why would we see an error about ad hoc updates? Here is where the problem becomes clear. At some point, the Allow Updates configuration option was set to 1. In SQL Server 2000 and older, this option allowed users to make updates directly to the system tables. In SQL Server 2005 and newer, the system tables are gone, replaced with the Resource database and system views. The option is no longer supported, and though you can set Allow Updates to 1 with no error, as soon as you run RECONFIGURE, you will receive the error that ad hoc updates are not supported. Because Allow Updates had been set to 1 and my Maintenance Plan runs the RECONFIGURE statement, this error was thrown and the Maintenance Plan failed.
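    You can reproduce the failure outside of a Maintenance Plan with a couple of lines of T-SQL (on a test server, not production):

```sql
-- Setting the deprecated option succeeds without complaint...
EXEC sp_configure 'allow updates', 1;

-- ...but the next RECONFIGURE raises error 5808:
-- "Ad hoc update to system catalogs is not supported."
RECONFIGURE;
```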

    All I had to do was run “sp_configure 'Allow Updates', 0”, followed by RECONFIGURE, to set Allow Updates back to 0, and then everything started working again.
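    Here is the fix in full, with a check of the option’s current value first (sys.configurations is available in SQL Server 2005 and later):

```sql
-- See where the option currently stands
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'allow updates';

-- Reset it and make the change take effect
EXEC sp_configure 'allow updates', 0;
RECONFIGURE;
```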
