THE SQL Server Blog Spot on the Web


Jamie Thomson

This is the blog of Jamie Thomson, a freelance data mangler in London

SSIS Lookup component tuning tips

Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

Scene setting

A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table, a pattern I cover in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008).

Fundamentally the SSIS lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why an SSIS dataflow can process data volumes that far exceed the available memory.

However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.
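To make that correlation concrete, you can do a back-of-envelope estimate of your cache footprint before you build the dataflow. The query below is a rough sketch, not an exact measure of SSIS's internal cache layout, and the table and column names are purely illustrative:

```sql
-- Rough estimate of FullCache memory: row count multiplied by the
-- summed byte widths of the cached columns.
-- dbo.Customer, CustomerId, CustomerName are hypothetical names.
SELECT COUNT(*)
       * ( 4        -- CustomerId: 4-byte integer (DT_I4)
         + 2 * 50   -- CustomerName: NVARCHAR(50) => DT_WSTR, 2 bytes/char
         ) / 1048576.0 AS EstimatedCacheMB
FROM dbo.Customer;
```

Ten million rows at 104 bytes each is roughly a gigabyte of cache before the dataflow moves a single pipeline row, which is exactly why trimming the lookup set matters.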

General tips

Here is a simple tick list you can follow in order to tune your lookups:

  • Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
  • Only pick the columns that you need, ignore everything else
  • Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
  • Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the space of DT_STR to hold the same values, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
  • Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
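Pulling those tips together, a cache-charging query might look something like the following sketch. The table and column names are hypothetical; the point is the shape of the query:

```sql
-- Explicit column list (no SELECT *), columns cast as narrow as the
-- data genuinely allows, and a WHERE clause restricting the rows to
-- only those the dataflow will actually look up.
SELECT  CustomerId,                                        -- DT_I4 key only
        CAST(CustomerCode AS VARCHAR(10)) AS CustomerCode  -- DT_STR, not DT_WSTR
FROM    dbo.Customer
WHERE   IsActive = 1;                                      -- only rows we KNOW we need
```

Typing this into the Lookup's SQL query box, rather than picking the table from the dropdown, gives you control over every one of the points in the list above.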

Thinking outside the box

It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing:

  • There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
  • Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory.
  • Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
  • Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
  • Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow.
  • Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
  • [From Sam Loud in the comments] Sometimes it's better not to cache your lookup set at all. If you have a very large, well-indexed lookup set that needs to be accessed by a relatively small number of pipeline rows, you may well be better off using No Cache and doing the lookup row-by-row.
  • Above all, experiment and be creative with different combinations. You may be surprised at what works.
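For the data-partitioning idea, the dynamically built SQL statement is typically an SSIS expression on the Lookup's SqlCommand property, evaluated once per ForEach iteration. A sketch, assuming a hypothetical [User::CurrentRegion] variable set by the loop:

```
"SELECT LocationId, LocationName
 FROM dbo.Location
 WHERE Region = '" + @[User::CurrentRegion] + "'"
```

With this in place the same dataflow can run once per region, and on each pass the cache holds only that region's lookup rows. (The table and variable names here are illustrative, not from a real package.)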

Final thoughts

  • If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover.
  • I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect.

If you have any other tips to share then please stick them in the comments.

Hope this helps!

@Jamiet

Published Thursday, March 18, 2010 10:11 PM by jamiet


Comments

 

Stefbauer said:

Good post!

I would add that cached lookups are case sensitive, so be sure to UPPER (or lower) both the input and the thing you're selecting from the DB.  Word != word in a cached lookup, no matter how you have your DB set.

Additionally, if you can... use the 2008 cache-file option it absolutely FLIES.

PS voted for the bulk merge... that is a must-have if you ask me.

March 18, 2010 4:35 PM
 

jamiet said:

cool! Thanks Stefan!

March 18, 2010 4:55 PM
 

Sam Loud said:

Super post Jamie; typically instructive and thorough.

Here's one thing I would add:

Sometimes it's better not to cache your lookup set at all. If you have a very large, well-indexed lookup set that needs to be accessed by a relatively small number of pipeline rows, you may well be better off using No Cache and doing the lookup row-by-row.

March 22, 2010 4:12 AM
 

jamiet said:

Good point Sam, I've added that one in. Thanks.

March 22, 2010 5:01 AM
 

