
Greg Low (The Bit Bucket: IDisposable)

Ramblings of Greg Low (SQL Server MVP, MCM and Microsoft RD) - SQL Down Under

Suggest Compression Strategies for Tables and Indexes

This blog has moved! You can find this content at the following new location:

http://blog.greglow.com/index.php/2015/03/06/suggest-compression-strategies-for-tables-and-indexes/

Published Thursday, March 5, 2015 7:56 PM by Greg Low


Comments

 

Don Schaeffer said:

Recently we implemented TDE on all our client databases. With encryption, backup compression is no longer effective, and the backups are roughly equivalent in size to the used data pages in the database. Backups have increased dramatically in both size and duration. To reduce those effects I implemented page compression for any heap or index over 500 MB. For the most part the queries are reports or large updates; there is little in the way of typical OLTP queries.
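For context, the rebuilds were along these lines (object names are placeholders, not our real tables):

-- Placeholder object names; PAGE compression is applied by rebuilding
-- the heap or the index with the DATA_COMPRESSION option.
ALTER TABLE dbo.SomeLargeHeap
    REBUILD WITH (DATA_COMPRESSION = PAGE);

ALTER INDEX IX_SomeLargeTable_SomeColumn
    ON dbo.SomeLargeTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);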

Do you see big downsides in this?

And do you think there is value in compressing small indexes?

March 9, 2015 5:48 PM
 

Jay said:

Greg - great post!

Can these recommendations be used for both DW and OLTP workloads? Would you recommend ROW compression as the default for DW?

March 9, 2015 6:00 PM
 

KKline said:

Very nice, Greg! Thanks for sharing.

-Kev

P.S. What's your opinion of sp_estimate_data_compression_savings?

March 9, 2015 6:44 PM
 

Greg Low said:

Hi Don,

Important to be clear that TDE messes with backup compression, but it causes no issues with table compression, which is what this post is discussing.

For the table compression, you might want to run the script to see what is recommended, rather than applying a blanket rule of PAGE for anything over 500 MB.
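If it helps, the core signal is roughly this: scan-heavy objects tend to suit PAGE, update-heavy ones ROW. A simplified sketch of that kind of check (not the full script from the post) looks like:

-- Simplified sketch: per index, what share of operations are range scans vs leaf updates.
-- Scan-heavy objects generally benefit from PAGE compression; update-heavy ones from ROW.
SELECT  OBJECT_SCHEMA_NAME(ios.object_id) AS schema_name,
        OBJECT_NAME(ios.object_id)        AS table_name,
        i.name                            AS index_name,
        ios.range_scan_count * 100.0
            / NULLIF(ios.range_scan_count + ios.singleton_lookup_count
                   + ios.leaf_insert_count + ios.leaf_delete_count
                   + ios.leaf_update_count, 0) AS scan_pct,
        ios.leaf_update_count * 100.0
            / NULLIF(ios.range_scan_count + ios.singleton_lookup_count
                   + ios.leaf_insert_count + ios.leaf_delete_count
                   + ios.leaf_update_count, 0) AS update_pct
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
  ON i.object_id = ios.object_id AND i.index_id = ios.index_id
WHERE OBJECTPROPERTY(ios.object_id, 'IsUserTable') = 1;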

March 15, 2015 1:17 AM
 

Greg Low said:

Hi Jay,

Yes, same applies. The issue is not the generic use of the DB (such as DW or OLTP); it's how each table and index is used. For different usage, different recommendations would be generated.

March 15, 2015 1:18 AM
 

Greg Low said:

Hi Kevin,

It works pretty well. It seems to create a new version of the table in tempdb based on a sample of the table's data, then compresses that copy to see what happens. The accuracy depends entirely on how representative the sample is; for tables that are fairly consistent in nature, the estimate is pretty good.

One thing to note: I recall that the RTM version of SQL Server 2008 had a bug where that proc would fail for tables that didn't have a clustered primary key. That was quickly fixed.
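For anyone who hasn't used it, a call looks like this (schema and table names are just examples):

-- Estimates PAGE compression savings for all indexes on the example table,
-- based on a sample of the data.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'SomeLargeTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';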

March 15, 2015 1:21 AM
