DBCC SHRINKFILE – Necessary to run more than once to achieve shrink?

Question :

We have a 2008R2 production DB in full recovery mode.

To produce a minimised dev version, for SQL Server 2014, the following steps are run:

RESTORE DATABASE BigDB FROM DISK = N'C:\Share\BigDB.bak' WITH FILE = 1,
MOVE N'BigDB_Data' TO N'C:\Database\BigDB_Data',
MOVE N'BigDB_Audit' TO N'C:\Database\BigDB_Audit',
MOVE N'BigDB_Log' TO N'C:\Database\BigDB_log.LDF', NOUNLOAD, REPLACE, STATS = 5

alter database BigDB set recovery simple with NO_WAIT
alter database BigDB set compatibility_level = 120

go

-- {truncate / delete statements.. with no additional 'go'}

DBCC SHRINKFILE (N'BigDB_Audit', 500)
DBCC SHRINKFILE (N'BigDB_Data', 1000)
DBCC SHRINKFILE (N'BigDB_Log', 1000)
DBCC SHRINKFILE (N'BigDB_Data', 1000) -- Without this repeat, the file remains the same size
DBCC SHRINKFILE (N'BigDB_Log', 1000)  -- Without this repeat, the file remains the same size

I’ve observed that without repeating the SHRINKFILE on the main data file and the log, one or both files do not actually shrink.

Can this behaviour be expected, and if so why?

Answer :

Most likely, not enough time has passed between your truncates/deletes and the shrink operations. While I droned on and on about why you shouldn’t shrink and what it means to have an oversized log file in another answer, here’s a situation where it makes sense to shrink.

I would suggest either building a pause in between the truncates/deletes and the shrinks, or simply adding a CHECKPOINT command within that database between the deletes and the shrink. That will likely give you the desired results without the second shrink.
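
For illustration, a minimal sketch of that ordering, reusing the database and file names from the question:

use BigDB
go

-- {truncate / delete statements}

CHECKPOINT
go

DBCC SHRINKFILE (N'BigDB_Data', 1000)
DBCC SHRINKFILE (N'BigDB_Log', 1000)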

“Can this behaviour be expected, and if so why?”

Yes.

Ghost records can affect shrink. They are generated by deletes, but not by truncates, and the ghost cleanup task that removes them runs in the background.

The count of remaining ghost records can be measured using a DMV:

SELECT SUM(ghost_record_count)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')

Articles about ghost records, by Paul Randal:

https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-ghost-cleanup-in-depth/

https://www.sqlskills.com/blogs/paul/ghost-cleanup-redux/

Solution #1: DBCC FORCEGHOSTCLEANUP

DBCC FORCEGHOSTCLEANUP('YourDBName')

This is the fastest way to clear ghost records ahead of doing a shrink.

This is an undocumented command, and it can be a resource-intensive task, so it should not be used on a host with a production workload without appropriate testing.
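
Slotted into the steps from the question, it would look something like this (mirroring the syntax above, with the question’s database and file names):

-- {truncate / delete statements}

DBCC FORCEGHOSTCLEANUP('BigDB') -- undocumented: run the ghost cleanup task now
DBCC SHRINKFILE (N'BigDB_Data', 1000)
DBCC SHRINKFILE (N'BigDB_Log', 1000)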

Solution #2: Wait for ‘some time’

This works, but it is not reliable: there is no guarantee of when the background ghost cleanup task will exhaust its workload. Nonetheless, it may be preferable to FORCEGHOSTCLEANUP (e.g. when run on a host carrying other workloads).

(In my tests, a one-minute WAITFOR was insufficient, whereas a six-second FORCEGHOSTCLEANUP reliably achieved the shrink target.)
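
If you do go this route, the wait can be scripted; the five-minute delay below is an arbitrary starting point, not a tested value:

-- {truncate / delete statements}

WAITFOR DELAY '00:05:00' -- give the background ghost cleanup task time to work
DBCC SHRINKFILE (N'BigDB_Data', 1000)
DBCC SHRINKFILE (N'BigDB_Log', 1000)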

Checkpoint Is Not a Solution

In my observations, a CHECKPOINT command may only marginally reduce the number of ghost records, and can even increase it. SHRINKFILE statements issued immediately afterwards, for all DB files, may therefore still miss their targets.
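
You can check this on your own system by sampling the ghost record count either side of the checkpoint, using the DMV query from earlier:

SELECT SUM(ghost_record_count) AS ghosts_before
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED');

CHECKPOINT;

SELECT SUM(ghost_record_count) AS ghosts_after
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED');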

Dropped Pages are Not a Factor

This page describes page-level operations (e.g. drop/truncate table): https://technet.microsoft.com/en-us/library/ms177495(v=sql.105).aspx

It makes reference to sys.allocation_units, through which dropped pages can be measured.
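
For example, something along these lines reports the pages still held by dropped allocation units (type 0 in sys.allocation_units means “dropped”):

SELECT SUM(total_pages) AS dropped_pages
FROM sys.allocation_units
WHERE type = 0 -- 0 = dropped allocation unit awaiting deferred cleanup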

In my tests, the number of dropped pages – whether very high, or zero – had no effect on the result of the shrink.

Shrink with emptyfile?

It’s possible this would achieve the same result, but I have not tested it.
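
For reference, the option replaces the target size in the same statement; note that EMPTYFILE migrates all data to the other files in the same filegroup rather than shrinking in place, so it only applies where such files exist:

DBCC SHRINKFILE (N'BigDB_Data', EMPTYFILE)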

I Find Shrinking in Small Increments Works Best for Me

I have had issues like this many times. I’ve found that the shrink may work “as is”, but waiting through those very long pauses or outright failures is inconvenient, and I’m also concerned about how a shrink may affect production databases. I’ve used the following code to “chunk” the shrinkage into smaller, more manageable amounts, as shown in Listing A.

Code-wise in Listing A, @next is the current database file size (MB) and @last is the minimum size (MB) you want to reach. You’ll need the correct database name, and the name of the data file you want to shrink, as well. select @next=@next-50 specifies 50 MB chunks; increase or decrease that amount to your liking. I put SELECT @DELAY='WAITFOR DELAY ''000:00:10''' in there as well to avoid hitting production databases too often; increase or decrease this 10-second delay as desired. My apologies for the code: it’s over ten years old and not too elegant, but it always seems to work for me.

Running the code won’t start the shrink process; instead, it prints out the shrink T-SQL statements. Cut and paste the resulting statements from Listing A into your query window and run them there.

Listing A: T-SQL Code to shrink database files throughout the day

declare @next int, @last int, @DELAY varchar(255), @SHRINK varchar(1000)

-- Pause between shrink steps; adjust to taste
SELECT @DELAY = 'WAITFOR DELAY ''000:00:10'''

-- @next = current file size (MB), @last = target minimum size (MB)
select @next = 93000, @last = 50000

while (@next >= @last)
begin
    -- Build and print one 50 MB shrink step
    SELECT @SHRINK = 'use [MyDatabase] DBCC SHRINKFILE (N''MyDatabase_Data'',' + convert(varchar(255), @next) + ')'
    print @SHRINK
    print @DELAY
    print 'go'
    select @next = @next - 50
end

The first two shrink steps of the generated output that you’d run would look like this:

use [MyDatabase] DBCC SHRINKFILE (N'MyDatabase_Data',93000)
WAITFOR DELAY '000:00:10'
go
use [MyDatabase] DBCC SHRINKFILE (N'MyDatabase_Data',92950)
WAITFOR DELAY '000:00:10'

Note:

Don’t worry if you overshoot your maximum or undershoot your minimum: DBCC SHRINKFILE won’t throw an error and will quickly pass on to the next statement. However, if you underestimate the starting maximum file size (mine is 93000), DBCC SHRINKFILE will have to work a lot harder to remove any margin in excess of 50 MB, for my example anyway. Also, use a smaller WAITFOR DELAY if it’s a non-production database (e.g. something like WAITFOR DELAY '000:00:01').
