
Page Free Space


Deletes that Split Pages and Forwarded Ghosts

Published Friday, August 31, 2012 6:09 AM by Paul White




AlexK said:


Have you checked if the presence of an OUTPUT clause in a DML command has the same effect as a trigger?

That's a really interesting read, I enjoyed it, thanks!

August 30, 2012 10:26 PM

Paul White said:

Hi Alex,

No, the OUTPUT clause doesn't have the same effect.  Columns from the inserted and deleted tables used with the OUTPUT clause are provided by the execution plan; it doesn't use the version store.  Great question, though!
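For readers following along, a minimal sketch of the kind of statement Alex is asking about (the table and column names here are made up for illustration):

```sql
-- Hypothetical table: the deleted.* values returned by OUTPUT are
-- produced by operators in the execution plan itself, not read back
-- from the version store.
DELETE dbo.SomeTable
OUTPUT deleted.id, deleted.payload
WHERE id = 42;
```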


August 31, 2012 12:44 AM

Niko Neugebauer said:

Great info, Paul!

As always, there are so many cases where seemingly "insignificant" changes to internal structures affect database performance in a big way.

August 31, 2012 12:14 PM

Patrick Ge said:

Thanks Paul. As always, great stuff. Very detailed explanation and excellent demonstration!

Just in case you or other people are not aware, Paul Randal also touched on this topic a little in this post:

September 13, 2012 7:15 PM

Mark Cleary said:

Minor typo - I think you meant to write the sparse row limit is 8016, not 8616. I think the actual value is 8018 though BOL also mentions 8019.

September 14, 2012 6:55 PM

Paul White said:

Thanks Mark - 8616 was indeed a fat-finger issue.  Updated the post with the vague range of numbers (still can't be bothered to check it properly!)


September 14, 2012 7:10 PM

Mark Cleary said:

I just tested this with nvarchar(max) and the results seem to be worse. I'd assumed that the algorithm would take the max in-row size for column02 as 4000 and the results would be the same.

Instead, when using the 20-character column01, I finished up with 469 pages and the free space was mostly 4428 to 4467 bytes.

This is on SQL Server 2012 (32 bit).

We have a table where we added an audit column where we record changes made by data cleaning processes. This uses varchar(max) and would appear to be a bad bad thing.

September 14, 2012 10:44 PM

Paul White said:

Hi Mark,

The maximum in-row size for nvarchar(max) is 8000 bytes not 4000 (Unicode characters use two bytes each).  So your results are pretty much as expected.

If your table with the audit column has a trigger (and I assume it does - for auditing!) then yes you will be experiencing this phenomenon.

Using a suitable FILLFACTOR on your table will help avoid splitting and fragmentation.  To see this, add the following statement to the test rig just after the INSERT that loads the data:



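The statement itself is missing above; a hedged sketch of the kind of statement meant here (the table name and the fill factor value are assumptions, not from the original rig):

```sql
-- Assumption: the test rig's table is dbo.Test with a clustered index.
-- Rebuilding with a fill factor leaves free space on each leaf page,
-- so adding row-versioning information later need not split the page.
ALTER INDEX ALL ON dbo.Test
REBUILD WITH (FILLFACTOR = 90);
```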
September 15, 2012 2:45 AM

Mark Cleary said:

I don't think you understood what I said.

The maximum in-row size for nvarchar(max) is 4000 characters, which is the same as your sample where column02 is nvarchar(4000). Both reserve 8000 bytes. I assumed nvarchar(max) would produce similar results, where column01 being nvarchar(20) doesn't page split but nvarchar(22) does.

I've been playing with fill factor and it does eliminate most page splits. We aren't using triggers.

September 15, 2012 2:53 AM

Paul White said:

Hi Mark,

Sorry if I misunderstood you.  Anyway, nvarchar(max) is a LOB type.  As I say in the main text:

"There is a performance optimization that can avoid adding row versioning information, but only if the table cannot generate ROW_OVERFLOW or LOB allocation units.  This means that the definition of the table must not allow for LOBs or for the possibility of variable length columns moving off row."

So the presence of a type that might generate LOB allocation units ensures row versioning information is added, resulting in splits (regardless of potential row size issues).
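To make the distinction concrete, a sketch with hypothetical table names: the rows stored may be identical in practice, but only the first definition rules out LOB and ROW_OVERFLOW allocation units, so only it qualifies for the optimization.

```sql
-- Maximum possible row size fits in-row, and no LOB type is declared,
-- so the row-versioning information can be skipped.
CREATE TABLE dbo.CanSkipVersioning (c1 nvarchar(20), c2 nvarchar(4000));

-- nvarchar(max) is a LOB type, so a LOB allocation unit is always
-- possible; versioning information is added regardless of actual
-- row sizes, and deletes can split pages.
CREATE TABLE dbo.MustAddVersioning (c1 nvarchar(20), c2 nvarchar(max));
```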


September 15, 2012 8:44 AM

Mark Cleary said:

Ah, yes, I did read the bit about the optimisation and promptly forgot it.

We've moved to a design where the audit columns are stored in a separate table, so we avoid page splitting, all inserts are sequential, and we effectively force off-row storage, which means we don't retrieve the audit data unless we actually need it. Oh dear, sounds like a column store.
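For anyone wanting a similar effect without a separate table, SQL Server also has a table option that pushes max-type values off-row (the table name here is hypothetical):

```sql
-- Forces varchar(max)/nvarchar(max) values onto LOB pages even when
-- they would fit in-row; scans that don't reference the audit column
-- then never read the LOB data.
EXEC sp_tableoption 'dbo.AuditHistory', 'large value types out of row', 1;
```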

September 16, 2012 7:30 PM

Rick Willemain said:

.. just wonderful !!!  Thank you very much

September 10, 2015 7:15 PM
