Change Data Types on a Large, Live Table

Question:

I have a table in SQL Server 2008 R2 with close to a billion rows and a composite PK. I want to change the data type of two columns from date to datetime. Running ALTER TABLE zzz ALTER COLUMN yyy twice (once per column) works, but it’s very slow. How can I speed the process up? I was thinking of copying the data to another table, dropping and recreating, copying back, switching to SIMPLE recovery mode, or somehow doing it with a cursor 1000 rows at a time, but I’m not sure whether any of those will actually lead to an improvement.

Answer:


For large tables, it is nearly always preferable to move the data instead of altering the table. To get maximum speed, use one of the following patterns. Let us call the table you want to change S (for source).

  1. Create an empty copy of S. Let us call it T.
  2. Run the ALTER TABLE on T. This is very fast because T has no rows.
  3. If T has a clustered index, drop all other indexes but keep the clustered index.
  4. If T is a heap, drop all indexes.
  5. Put the database in SIMPLE recovery mode (if you can, but be aware of the backup implications).
  6. Use one of the two options described in the Copy Data section below.
  7. Rebuild the indexes on T.
  8. Drop S and rename T to S.
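The steps above can be sketched in T-SQL. This is only an outline under assumptions from the question – yyy is the column being changed, and the database name, PK columns, and NOT NULL-ness are placeholders:

```sql
-- Steps 1-2: create an empty copy of S, then alter the column while T is empty
SELECT * INTO T FROM S WHERE 1 = 0;               -- empty copy, no indexes yet
ALTER TABLE T ALTER COLUMN yyy datetime NOT NULL; -- instant: T has zero rows

-- Step 5: optionally switch to SIMPLE recovery (this breaks the log backup chain)
ALTER DATABASE [YourDb] SET RECOVERY SIMPLE;

-- Step 6: copy the data using one of the options in the Copy Data section

-- Step 7: recreate the composite PK and any other indexes on T, e.g.
-- ALTER TABLE T ADD CONSTRAINT PK_T PRIMARY KEY (col1, col2);

-- Step 8: swap the tables
DROP TABLE S;
EXEC sp_rename 'T', 'S';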

Copy Data

In step 6 above, you need to move the data from S to T at the highest possible speed. Here are two options:

  1. Use INSERT ... SELECT
  2. Use SSIS

Of these two, SSIS is the faster: with INSERT ... SELECT you are restricted to a single thread. However, INSERT ... SELECT is much easier to set up. See below for more details.


Option 1: Use INSERT ... SELECT

If T is a heap, do this:
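A minimal sketch, assuming S and T have identical column lists (the TABLOCK hint is what enables minimal logging into the heap):

```sql
INSERT INTO T WITH (TABLOCK)
SELECT *
FROM S;
```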


This is minimally logged.

If T has a clustered index, do this:
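A minimal sketch, again assuming identical column lists. On 2008 R2, T must be empty and the rows should be presented in clustering-key order for minimal logging; the key columns below are placeholders:

```sql
INSERT INTO T WITH (TABLOCK)
SELECT *
FROM S
ORDER BY col1, col2;  -- the clustered index key of T (placeholder names)
```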


This is also minimally logged. The problem with this option is that you cannot run multiple INSERT statements in parallel: SQL Server 2008 R2 does not parallelise INSERT ... SELECT, which restricts you to around 40-80 MB/sec – quite slow.

Option 2: Use SSIS

This option is a lot more trouble, but also faster. The idea is this:

  1. Drop all indexes on T
  2. Create an SSIS package that takes a parameter that selects a subset of the rows in S (using some filter that is supported by an index). The SSIS package then moves the rows to T using the TABLOCK hint in bulk mode
  3. Execute multiple copies of the SSIS package, each operating on their own, distinct subset of S (so you don’t duplicate rows in T)
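Each package instance would use a source query covering its own slice of S – a sketch, assuming S has an indexed column id and each instance is handed a distinct range:

```sql
SELECT *
FROM S
WHERE id >= @lo AND id < @hi;  -- distinct, non-overlapping range per package
```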

This is MUCH faster because the packages can run in parallel. With this technique, I can typically move tables at GB/sec speeds.

Either way works; which is better depends on your maintenance window.

For larger tables, I prefer the duplicate table approach because it eases locking/maintenance window problems.

I’d use a batch size of 50,000 or 100,000 rows, too.
