I have a table in SQL Server 2008 R2 with close to a billion rows and a composite PK. I want to change the datatype of two columns from Date to Datetime. Running ALTER TABLE zzz ALTER COLUMN yyy once per column works, but it’s very slow. How can I speed the process up? I was thinking of copying the data to another table, dropping, recreating, and copying back, and of switching to SIMPLE recovery mode, or perhaps doing it with a cursor 1000 rows at a time, but I’m not sure whether any of those will actually lead to an improvement.
For large tables, it is nearly always preferable to move the data instead of altering the table. To get maximum speed, use one of the following patterns. Let us call the table you want to change S (for source).
- Create an empty copy of S, with the new column types. Let us call that T (for target). This is very fast because T has no rows
- If T has a clustered index, remove all other indexes but the cluster
- If T is a heap, remove all indexes
- Put the database in SIMPLE recovery mode (if you can, but be aware of the backup implications)
- Use one of the two options provided in the Copy Data section below
- Rebuild the indexes on T, then drop S and rename T to S
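The steps above can be sketched in T-SQL. All object and column names here are hypothetical (yyy1/yyy2 stand in for the two Date columns), and you would adapt this to your real schema:

```sql
-- Create an empty copy of S with the new column types
-- (TOP (0) copies the shape, not the rows)
SELECT TOP (0)
       CAST(yyy1 AS datetime) AS yyy1,
       CAST(yyy2 AS datetime) AS yyy2,
       other_col
INTO dbo.T
FROM dbo.S;

-- Switch to SIMPLE recovery (mind the implications for your backup chain)
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... copy the data using one of the options below ...

-- Rebuild/create the remaining indexes on T, then swap the names
EXEC sp_rename 'dbo.S', 'S_old';
EXEC sp_rename 'dbo.T', 'S';
```

Note that sp_rename only changes the object name; permissions, constraints, and references to the old table need to be checked separately.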
In the copy step above, you will need to move data from S to T at the highest possible speed. The following are two options:
- INSERT ... SELECT
- Use SSIS
Of these two, SSIS is the faster. With INSERT ... SELECT you are restricted to a single thread; however, INSERT ... SELECT is much easier. See below for more details.
Option 1: INSERT … SELECT
If T is a heap, do this:
INSERT INTO T WITH (TABLOCK) SELECT ... FROM S
This is minimally logged.
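Filled in for this question, the statement might look like the following sketch (names hypothetical; the CASTs perform the Date-to-Datetime conversion during the copy):

```sql
-- Minimally logged under SIMPLE or BULK_LOGGED recovery,
-- because the target is a heap and TABLOCK is held
INSERT INTO dbo.T WITH (TABLOCK)
SELECT CAST(yyy1 AS datetime),
       CAST(yyy2 AS datetime),
       other_col
FROM dbo.S;
```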
If T has a clustered index, do this:
DBCC TRACEON (610)
INSERT INTO T SELECT ... FROM S
This is also minimally logged. The problem with this option is that you can only run a single
INSERT statement at a time. Because SQL Server does not do parallel
INSERT ... SELECT, this restricts you to around 40-80 MB/sec, which is quite slow.
Option 2: Use SSIS
This option is a lot more trouble, but also faster. The idea is this:
- Drop all indexes on T
- Create an SSIS package that takes a parameter selecting a subset of the rows in S (using some filter that is supported by an index). The package then moves those rows to T, using the TABLOCK hint to insert in bulk mode
- Execute multiple copies of the SSIS package, each operating on its own, distinct subset of S (so you don’t duplicate rows in T)
This is MUCH faster because the packages can run in parallel. With this technique, I can typically move tables at GB/sec speeds.
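Conceptually, each package instance runs a range-restricted source query like the following sketch. The parameter names and the single filter column are illustrative (with a composite PK you would typically filter on its leading column); pick the boundaries so the ranges are disjoint and covered by an index:

```sql
-- One of N parallel streams: copy only the rows whose key
-- falls in this stream's half-open range
SELECT CAST(yyy1 AS datetime) AS yyy1,
       CAST(yyy2 AS datetime) AS yyy2,
       other_col
FROM dbo.S
WHERE pk_col >= @RangeStart   -- package parameters, one pair per stream
  AND pk_col <  @RangeEnd;
```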
Either way works; it depends on your maintenance window.
For larger tables, I prefer the duplicate table approach because it eases locking/maintenance window problems.
I’d use a batch size of 50,000 or 100,000, too.
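If you do copy in batches as suggested above, a key-driven loop keeps each transaction small. This is only a sketch: the names, the 50,000 batch size, and the single ascending key column are illustrative (with a composite PK you would walk the leading column):

```sql
DECLARE @From bigint = 0, @To bigint;

WHILE 1 = 1
BEGIN
    -- Find the upper key of the next 50,000-row batch
    SELECT @To = MAX(pk_col)
    FROM (SELECT TOP (50000) pk_col
          FROM dbo.S
          WHERE pk_col > @From
          ORDER BY pk_col) AS k;

    IF @To IS NULL BREAK;   -- no rows left to copy

    INSERT INTO dbo.T
    SELECT CAST(yyy1 AS datetime), CAST(yyy2 AS datetime), other_col
    FROM dbo.S
    WHERE pk_col > @From AND pk_col <= @To;

    SET @From = @To;
    SET @To = NULL;         -- reset so MAX() can signal completion
END;
```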