Question:
I tried changing a column on a table with a few million records, increasing a varchar column's length from 100 to 250. After more than an hour and a half of execution time, I monitored the connection and found it was in a SUSPENDED state with a PAGEIOLATCH_EX wait type almost all of the time, with very short intervals where the state changed to RUNNING.
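For reference, the state and wait type can be watched with something along these lines (a minimal sketch against sys.dm_exec_requests; 53 is just a placeholder for the session id running the ALTER):

SELECT r.session_id,
       r.status,             -- RUNNING / SUSPENDED
       r.command,
       r.wait_type,          -- e.g. PAGEIOLATCH_EX
       r.wait_time,          -- ms spent in the current wait
       r.total_elapsed_time  -- ms since the request started
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;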
I couldn't tell whether it took that long because an index was being rebuilt, because the disk was slow (it didn't seem to be), or because this is simply the expected behavior. Nor could I tell how long the task would take, so I just gave up and stopped the process.
Is there a way to know beforehand whether something can be done to make the task run in a shorter time, or at least a way to tell how much work is left while the task is running?
Edit
As suggested by @J.D, I ran a benchmark on the SQL Server machine's SSD and got some below-average results, at least lower than I would expect from an SSD. Here are the results, so maybe it has something to do with the performance:
Answer:
Did you use the SSMS GUI to do the change?
The SSMS GUI isn't smart about this. Behind the scenes it creates a new table, copies all the rows over, drops the old table, and renames the new one – which is why it takes so long.
If you use ALTER TABLE… ALTER COLUMN… directly, then this change is likely to be a metadata-only operation, i.e., no data movement and only a very brief (though restrictive) lock while the metadata change is performed.
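A minimal sketch of that statement (table and column names are placeholders; state the column's existing NULL/NOT NULL explicitly, otherwise the ALTER can silently change its nullability):

ALTER TABLE dbo.MyTable
    ALTER COLUMN MyColumn varchar(250) NOT NULL;

Widening a varchar like this doesn't require rewriting the rows, which is why it can complete in seconds even on a table with millions of records.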
🙂 Experience is what gives you insight into whether the task will be quick or not. Widening varchar(100) to varchar(250) is expected to be "quick". During production time? Never quick.
So, did you check whether there are constraints you could disable?
Are there foreign keys on the table, or targeting the table?
Did you check whether there is a trigger on the table? (A quick way to check both is sketched below.)
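A rough way to check (dbo.MyTable is a placeholder for the real table name):

-- Foreign keys defined on the table, or referencing it
SELECT fk.name,
       OBJECT_NAME(fk.parent_object_id)     AS from_table,
       OBJECT_NAME(fk.referenced_object_id) AS to_table,
       fk.is_disabled
FROM sys.foreign_keys AS fk
WHERE fk.parent_object_id     = OBJECT_ID(N'dbo.MyTable')
   OR fk.referenced_object_id = OBJECT_ID(N'dbo.MyTable');

-- DML triggers on the table
SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID(N'dbo.MyTable');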
I made the same mistake recently – I blocked the processes using the table because of the table lock. The process had been running for 5 minutes, but the rollback took 2 hours, and eventually we restarted the SQL instance to recover from it. Rollback is single-threaded… so that hurts.
Tread lightly in the production woods – restore a copy of prod and apply the change there first if you can.
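A minimal sketch of that restore (database name, backup path, and logical file names are placeholders – run RESTORE FILELISTONLY against the backup first to get the real logical names):

RESTORE DATABASE MyDb_Copy
FROM DISK = N'D:\Backups\MyDb.bak'
WITH MOVE N'MyDb'     TO N'D:\Data\MyDb_Copy.mdf',
     MOVE N'MyDb_log' TO N'D:\Data\MyDb_Copy_log.ldf',
     STATS = 10;   -- print progress every 10 percent

Then run the ALTER against the copy to see how it behaves and how long it takes before touching production.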
Yeah, those I/O results are fairly bad. Some SSDs are not great, but to be honest I have a mechanical HDD running behind my development SQL Server and it's getting 33% faster read speeds than your SSD. By the way, only the first (SEQ1M) and third (RND4K) rows really matter when it comes to benchmarking I/O for SQL Server-type workloads.
It may not be the SSD itself that's the issue but rather how it's connected to your server. Hopefully you have an infrastructure team you can work with to diagnose this. I just went through the same thing with my development server: the setup was incorrect in a few ways, including not being properly connected via iSCSI. Once we fixed the setup, the mechanical HDD went from ~120 MB/s read speeds (particularly for SEQ1M) to almost ~200 MB/s, so it definitely makes a difference. Unfortunately I'm not well enough versed in hardware to advise more specifically, but you may have some luck asking for configuration/setup help on Server Fault.
I also agree with Dan that the change you're making should be a metadata-only change and very quick to accomplish, as long as you wrote the script yourself and didn't use the SSMS GUI like Tibor mentioned – so it's strange that it took so long.