Question:
I have a 4 TB SQL Server 2008 R2 database. I'm going to move this DB to a new server running SQL Server 2012 (the application does not support SQL Server > 2012, but that is not the subject of my question).
Before that, I'm going to move almost all objects from the main 4 TB data file to multiple data files in the same filegroup, so I can reclaim the free space in the 4 TB file.
After moving the objects, the free space takes forever to be reclaimed with DBCC SHRINKFILE in 256 MB chunks.
What’s the fastest way to get that space back to the OS?
Is there a way to use a backup restore to shrink the file?
Answer:
Is there a way to use a backup restore to shrink the file?
If this means you are fine with it being an offline operation, you could combine the shrink with the move and do away with shuffling the data into other files in the filegroup (unless you also have other reasons to want to do that) by migrating the data to a new database during a maintenance window:
1. Script out the schema of the current DB.
2. Create the new DB in the 2012 instance using that script, but only the tables and their clustered indexes. Leave out foreign key constraints (this makes the data copy less faff), non-clustered indexes (this makes the bulk data import faster), triggers (you don't want these firing during the data copy), views (these may have index hints that reference indexes you have skipped creating), and procedures & functions (as with views).
3. If you want a new file/filegroup structure in the new DB, arrange this now.
4. Copy the data in bulk to the new DB. Use SSIS, or script this up using a linked server setup rather than doing it more ad hoc; that way you can tweak and repeat the process easily in case something goes wrong, or as a test before the real event (see the sketch after this list).
5. Create all the non-clustered indexes, then the foreign key constraints, then the triggers, then the views, procedures & functions that you skipped in step 2. If you created a multi-file arrangement in step 3, take care that the indexes go where you want them.
6. Test the new DB.
7. Switch the app over to using the new DB and put the old one out of reach (rename it and/or make it read-only, but don't drop it, in case you need to roll back if problems are found early).
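As a rough illustration of the copy in step 4, here is a minimal sketch for one table, assuming a linked server named [OLD2008R2] pointing at the source instance; the database, schema, table, and column names are hypothetical placeholders:

-- Bulk-copy one table across a linked server.
-- TABLOCK can enable minimal logging on the target under the SIMPLE or
-- BULK_LOGGED recovery model; use SET IDENTITY_INSERT ... ON first for
-- tables with identity columns.
INSERT INTO NewDb.dbo.BigTable WITH (TABLOCK)
       (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM   [OLD2008R2].OldDb.dbo.BigTable;

Repeat (or generate from sys.tables) one such statement per table.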
You should now have a complete copy of the DB on the SQL 2012 instance, and you have not had to modify the 2008 instance's copy at all (no juggling data between files and such), so the process should be both faster and safer. Because you are not modifying the source DB in any way, you have an automatic rollback plan (just don't use the new DB if something fails; try again later). And because the data is imported fresh into the new DB, there is nothing to shrink: it all went into the target files at the size it should be.
I would strongly recommend doing this with a copy of the source DB in a test environment first, so you can verify the process and run your applications against the new DB to make sure there are no unexpected regressions‡ before you do this in production.
‡ Performance drops due to differences in query planner estimates that might require index tweaks, etc. Though the application is officially supported on SQL 2012, so one would hope your supplier has dealt with anything that might crop up, it is still worth being paranoid and checking.
Another option that may be much less faff: if a recent enough version of SSMS can create a .bacpac from instances as old as 2008 R2, you may be able to just export one from the source database and import it into the SQL 2012 instance. This will effectively do the process I described above.
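If you go that route, the command-line equivalent of the SSMS export/import wizards is SqlPackage; a hedged sketch, where the server names, database name, and file path are placeholders (check first that your SqlPackage/SSMS version accepts a 2008 R2 source):

:: Export the source database to a .bacpac, then import it on the new instance.
SqlPackage.exe /Action:Export /SourceServerName:OldServer /SourceDatabaseName:BigDb /TargetFile:C:\Temp\BigDb.bacpac
SqlPackage.exe /Action:Import /TargetServerName:NewServer2012 /TargetDatabaseName:BigDb /SourceFile:C:\Temp\BigDb.bacpac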
DBCC SHRINKDATABASE
The fastest way I find to shrink a data file is to first use the DBCC SHRINKDATABASE command to see what it takes off the top. It will usually do this quickly, but not always. Keep an eye on blocking locks while you're doing this (a monitoring query is sketched after the caveat below).
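For example, a first pass that only releases the unallocated space at the end of the files, without moving any pages; the database name is a placeholder:

-- TRUNCATEONLY releases free space from the end of each file back to the OS
-- without relocating any pages, which is the quick "off the top" part.
DBCC SHRINKDATABASE (N'<your DB name>', TRUNCATEONLY);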
Caveat
SHRINKDATABASE is impartial and will indiscriminately attempt to shrink every file in the database. If you don't appreciate this behavior then don't run this one.
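While a shrink is running, a quick way to watch for the blocking mentioned above is a check against the standard DMVs, along these lines:

-- List any requests currently blocked by another session.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;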
DBCC SHRINKFILE
After you've used SHRINKDATABASE you'll need to use SHRINKFILE. Here I find it's important to "chunk" the shrinks into small sizes. I also use a WAITFOR DELAY so the database can get back some resources for other activities as needed. In the script below you'll need to enter (1) your database name, (2) the logical file name of the file, (3) the upper bound of the shrink in MB, and (4) the lower bound in MB. You'll notice that it shrinks in 100 MB increments, but again, you'll need to find the size that's right for you. Warning: larger increments aren't necessarily faster. Don't worry too much if your lower bound is too low; the shrink won't (and can't) go past it. (You'll have to forgive my code: it's over 10 years old but still gets the job done.)
--Cut and paste the code that this T-SQL generates and run THAT code against your DB.
--It prints a descending series of DBCC SHRINKFILE batches, from the upper bound
--down to the lower bound in 100 MB steps, with a 1-second pause between each.
DECLARE @next INT,            -- current shrink target, in MB
        @last INT,            -- lower bound, in MB (the loop stops here)
        @DELAY VARCHAR(255),
        @SHRINK VARCHAR(1000);

SELECT @DELAY = 'WAITFOR DELAY ''000:00:01''';  -- breathing room between chunks
SELECT @next = 102000, @last = 50000;           -- upper and lower bounds, in MB

WHILE (@next >= @last)
BEGIN
    SELECT @SHRINK = 'USE [<your DB name>]; DBCC SHRINKFILE (N''<enter logical file name>'', '
                   + CONVERT(VARCHAR(255), @next) + ');';
    PRINT @SHRINK;
    PRINT @DELAY;
    PRINT 'go';
    SELECT @next = @next - 100;  -- step down 100 MB per batch
END
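The generated output looks like this (with your names substituted); paste it into a new query window and run it:

USE [<your DB name>]; DBCC SHRINKFILE (N'<enter logical file name>', 102000);
WAITFOR DELAY '000:00:01'
go
USE [<your DB name>]; DBCC SHRINKFILE (N'<enter logical file name>', 101900);
WAITFOR DELAY '000:00:01'
go
...and so on, down to 50000.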