SQLblog.com - The SQL Server Blog Spot on the Web

Argenis Fernandez

DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

This blog has moved! You can find this content at the following new location:

https://www.0xsql.com/2013/06/16/dbcc-checkdb-on-vvldb-and-latches-or-my-pain-is-your-gain/

Published Sunday, June 16, 2013 11:32 PM by Argenis


Comments

 

Dave said:

Your GO 1000000 is 7 digits so it is making 1 million rows, not 10 million

June 17, 2013 11:53 AM
 

Argenis said:

@Dave - you're absolutely correct. 10 was a typo - I only inserted 1 million rows.

June 17, 2013 12:10 PM
 

Joseph said:

Nice write-up...Thanks!

So, Argenis, what do you recommend?  Dropping this NCI on the offline database, before running the check?

Is the worst-case scenario (on the prod copy), if corruption does turn up one day, that you drop and recreate it?

-Joseph

June 17, 2013 3:20 PM
 

Argenis said:

@Joseph - I would probably take a snapshot of the database, remove the NCI, run DBCC CHECKDB on that, and then revert to the snap and run another DBCC CHECKDB on the whole thing, and not care how long that takes.
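That workflow might look roughly like the sketch below. Everything here is illustrative: MyDB, MyDB_Data, dbo.BigTable, the index name, and the snapshot file path are placeholders you'd replace with your own objects.

```sql
-- Illustrative sketch only; object names and paths are made up.
USE master;
GO
-- 1) Take a snapshot of the database as a rollback point.
--    NAME must match the database's logical data file name.
CREATE DATABASE MyDB_CheckSnap
ON (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_CheckSnap.ss')
AS SNAPSHOT OF MyDB;
GO
-- 2) Drop the offending nonclustered index and run the check.
USE MyDB;
DROP INDEX IX_BigTable_Filtered ON dbo.BigTable;
DBCC CHECKDB('MyDB') WITH ALL_ERRORMSGS, NO_INFOMSGS;
GO
-- 3) Revert to the snapshot (requires exclusive access, and it must be
--    the only snapshot of the database), then clean up.
USE master;
RESTORE DATABASE MyDB FROM DATABASE_SNAPSHOT = 'MyDB_CheckSnap';
GO
DROP DATABASE MyDB_CheckSnap;
GO
```

After the revert, the full DBCC CHECKDB can run against the intact database on whatever schedule you can afford.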

June 17, 2013 3:24 PM
 

Joseph said:

Very nice, thanks!

-J

June 17, 2013 3:54 PM
 

Argenis said:

@Joseph indeed, if corruption is found, the easiest way to fix it would be to drop the index and recreate it. But this might not be a good strategy for your business. Relying on automatic page repair with mirroring or availability groups is a better idea, IMO.

June 17, 2013 4:48 PM
 

Chris Adkin said:

My understanding is that filtered indexes and sparse columns were "invented" to a large degree to support SharePoint. Specifically, SharePoint in part uses the entity-attribute-value model, which despite its evils works well with filtered indexes. Microsoft can be picky from a support/de-support perspective if you play around with system objects in SharePoint databases; I don't know if dropping a filtered system index would trigger de-support. However, this is something you may wish to bring up in your Connect item, as it **might** help your cause and that of the wider SQL community.

June 20, 2013 9:07 AM
 

Argenis said:

@Chris thanks, that's a great point.

June 20, 2013 11:17 AM
 

James said:

Nice post. How did you work out which table CHECKDB was stalling on? Was it just obvious in this case due to the table's size?

October 13, 2013 3:47 PM
 

Argenis said:

@James - indeed it was pretty obvious which table was the problem given its size. I had confirmed earlier that this was the case by dropping that table on a test environment and comparing DBCC CHECKDB execution times.
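If it hadn't been obvious, one way to script that comparison on a test box (illustrative only; in practice you'd probably restrict it to the largest tables) is to time a DBCC CHECKTABLE per user table:

```sql
-- Illustrative sketch: time a consistency check per user table to find
-- the one dominating DBCC CHECKDB's runtime.
DECLARE @table nvarchar(300), @start datetime2;
DECLARE tables_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.type = 'U';
OPEN tables_cur;
FETCH NEXT FROM tables_cur INTO @table;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @start = SYSDATETIME();
    DBCC CHECKTABLE(@table) WITH ALL_ERRORMSGS, NO_INFOMSGS;
    PRINT @table + N': '
        + CONVERT(nvarchar(20), DATEDIFF(ms, @start, SYSDATETIME())) + N' ms';
    FETCH NEXT FROM tables_cur INTO @table;
END
CLOSE tables_cur;
DEALLOCATE tables_cur;
```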

October 21, 2013 4:34 PM
 

BradC said:

Here is a query (just a slightly modified version of Paul's) that shows all indexes containing a computed or sparse column:

SELECT s.name AS SchemaName,
       o.name AS TableName,
       i.name AS IndexName,
       c.name AS ColumnName,
       ic.*
FROM sys.columns c
JOIN sys.index_columns ic
    ON ic.object_id = c.object_id
   AND ic.column_id = c.column_id
JOIN sys.indexes i
    ON i.object_id = ic.object_id
   AND i.index_id = ic.index_id
JOIN sys.objects o
    ON i.object_id = o.object_id
JOIN sys.schemas s
    ON o.schema_id = s.schema_id
WHERE c.is_computed = 1
   OR c.is_sparse = 1;

January 30, 2014 5:57 PM
 

Manuj Bahl said:

Sparse columns do not seem to play a role here; it's the filtered index that is causing the problem on my server.

USE [master];
SET NOCOUNT ON;
IF DB_ID('Playground') IS NOT NULL
    DROP DATABASE [Playground];
GO
CREATE DATABASE [Playground];
GO
USE [Playground];
GO
CREATE TABLE [dbo].[FilteredIndexTest]
(
    IsProcessed BIT DEFAULT ( 0 )
);
GO
-- Seed 10,000 rows, then double the row count 7 times (~1.28M rows).
INSERT INTO [dbo].[FilteredIndexTest] ( IsProcessed )
VALUES ( 0 );
GO 10000
INSERT INTO [dbo].[FilteredIndexTest] ( IsProcessed )
SELECT [IsProcessed]
FROM [dbo].[FilteredIndexTest];
GO 7
DECLARE @StartDateTime DATETIME;
DECLARE @EndDateTime DATETIME;

-- Baseline: no index.
SELECT @StartDateTime = GETDATE();
DBCC CHECKDB('Playground') WITH ALL_ERRORMSGS, NO_INFOMSGS;
SELECT @EndDateTime = GETDATE();
SELECT DATEDIFF(ms, @StartDateTime, @EndDateTime) AS TimeTaken;

-- With a filtered nonclustered index.
CREATE NONCLUSTERED INDEX [IXF_FilteredIndexTest_IsProcessed]
ON [dbo].[FilteredIndexTest] (IsProcessed)
WHERE IsProcessed = 1;

SELECT @StartDateTime = GETDATE();
DBCC CHECKDB('Playground') WITH ALL_ERRORMSGS, NO_INFOMSGS;
SELECT @EndDateTime = GETDATE();
SELECT DATEDIFF(ms, @StartDateTime, @EndDateTime) AS TimeTakenWithFilteredIndex;

-- With a plain (unfiltered) nonclustered index.
DROP INDEX [IXF_FilteredIndexTest_IsProcessed] ON [dbo].[FilteredIndexTest];
CREATE NONCLUSTERED INDEX [IX_FilteredIndexTest_IsProcessed]
ON [dbo].[FilteredIndexTest] (IsProcessed);

SELECT @StartDateTime = GETDATE();
DBCC CHECKDB('Playground') WITH ALL_ERRORMSGS, NO_INFOMSGS;
SELECT @EndDateTime = GETDATE();
SELECT DATEDIFF(ms, @StartDateTime, @EndDateTime) AS TimeTakenWithNCIndex;

May 29, 2014 8:50 AM
 

Mark Umbach said:

I have an issue with a 'not so big' database. Although I inherited it, it needs some work. CHECKDB took longer and longer in production, so I switched it to PHYSICAL_ONLY and restored the backup to another server. I easily determined the problem table; the 1.1B-row table takes 11 hours for a table check. I am seeing the DBCC_OBJECT_METADATA latch 100% of the time on CXPACKET waits. The DDL is as follows:

CREATE TABLE [dbo].[fCMPPDistributorShipment](
    [dmDateID] [int] NOT NULL,
    [dmCustomerID] [int] NOT NULL,
    [dmStoreID] [int] NOT NULL,
    [dmItemID] [int] NOT NULL,
    [dmItemSaleableProductID] [int] NOT NULL,
    [dmSourceSystemID] [int] NOT NULL,
    [SalesRecordNumber] [int] NOT NULL,
    [QuantitySalesVolume] [decimal](18, 2) NULL,
    [AmountIndirectReported] [decimal](18, 2) NULL,
    [LoadBatchID] [bigint] NULL,
    [HistoryRecId] [bigint] NOT NULL,
    [DateTimeInsertedUTC] [datetime] NULL,
    [DatetimeUpdatedUTC] [datetime] NULL,
    [DatetimeDeletedUTC] [datetime] NULL,
    [ModifiedUTC] AS (case when [DatetimeDeletedUTC]>=isnull([DatetimeUpdatedUTC],CONVERT([datetime],'1900-01-01',(20))) AND [DatetimeDeletedUTC]>=isnull([DatetimeInsertedUTC],CONVERT([datetime],'1900-01-01',(20))) then [DatetimeDeletedUTC] when [DatetimeUpdatedUTC]>=isnull([DatetimeDeletedUTC],CONVERT([datetime],'1900-01-01',(20))) AND [DatetimeUpdatedUTC]>=isnull([DatetimeInsertedUTC],CONVERT([datetime],'1900-01-01',(20))) then [DatetimeUpdatedUTC] else [DatetimeInsertedUTC] end) PERSISTED,
    CONSTRAINT [PK_fCMPPDistributorShipment] PRIMARY KEY NONCLUSTERED
    (
        [dmSourceSystemID] ASC,
        [HistoryRecId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
CREATE CLUSTERED INDEX [CIX_fCMPPDistributorShipment_dmDateID] ON [dbo].[fCMPPDistributorShipment]
(
    [dmDateID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

Anything obvious to you?

July 27, 2015 3:23 PM
 

Argenis said:

Hey Mark, I would think that persisted computed column is causing you grief. Have you been able to confirm?

July 28, 2015 1:41 PM
 

Satya said:

@BradC, I see you have given a query to get the list of columns that are computed or sparse. Is the DBCC operation impacted by computed columns as well, in addition to the sparse columns Argenis specified in this article?

January 31, 2016 8:56 PM
 

Hello SMEs said:

I have one 2.5 TB database. There are computed columns, sparse columns, or filtered indexes.

Normally CHECKDB takes 3 hours, but sometimes it runs for 20+ hours with no blocking, etc. The wait resource is DBCC_CHECK_AGGREGATE. What should I look for?

August 30, 2017 10:19 PM
 

