Can rebuilding indexes cause worse performance after the rebuild is finished?

Question :

We have a customer database that is heavily fragmented – practically every table with more than 1,000 pages shows over 95% fragmentation. Fill factors are set to sensible values, but actual page space usage is nowhere near the fill factor for most tables.
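For reference, this is roughly how the fragmentation and page density were measured – a sketch using the standard DMV (object and mode choices are illustrative):

```sql
-- Sketch: fragmentation and page density for indexes over 1,000 pages.
-- SAMPLED mode is needed to populate avg_page_space_used_in_percent.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```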

This is the result of no maintenance being performed on the database.

Rebuilding the indexes using Ola Hallengren’s IndexOptimize reduces fragmentation as expected. On the existing production hardware, performance of the application improves as expected. All the metrics I normally use – client statistics on heavy queries, profiler durations, read/write stalls, application logs and user perception – indicate performance is improved.

However, a new database server backed with Intel PCIe SSDs is showing the opposite of what we expect. Highly fragmented, the application performs well. After rebuilding indexes, the application performs badly. Some operations that took ~90s now take ~6mins. However, none of the other metrics appear to indicate that the system is going slower.

Is this something anyone else has experienced?

Answer :

Yes, rebuilding indexes (especially on SSDs) can make performance worse. Most high-speed SSDs prefer many small block requests over fewer, larger ones – exactly the opposite of the pattern preferred by traditional spinning rust.

Assume you have a highly fragmented B-tree. Because nothing is ordered on disk, scanning the tree typically issues a lot of small 8KB I/O requests. If you defragment the tree, read-ahead can fetch up to 512KB in a single request. Those large requests can have higher latency on the SSD, because the SSD internally breaks them down into smaller chunks (unlike a hard drive, which services them as a single sequential I/O). For a great many cases: higher disk latency = slower queries.
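You can see this shift in I/O pattern yourself. A sketch (the table name is hypothetical, and clearing the buffer pool should only be done on a test system):

```sql
-- Sketch: observe physical I/O behaviour for a cold scan.
-- Run once before and once after the rebuild and compare the output.
DBCC DROPCLEANBUFFERS;   -- test systems only: forces reads from disk
SET STATISTICS IO, TIME ON;

SELECT COUNT_BIG(*) FROM dbo.SomeLargeTable;   -- hypothetical table

SET STATISTICS IO, TIME OFF;
-- On the fragmented index, expect the scan to be served by many small
-- reads; after the rebuild, read-ahead can issue far larger contiguous
-- requests, visible as a higher "read-ahead reads" count.
```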

All that being said, please do make sure you check that you are actually getting the same query plans that you were getting before the rebuild.
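One way to compare plans before and after is to pull them from the plan cache along with their runtime statistics – a sketch, where the filter text is a hypothetical placeholder for your heavy query:

```sql
-- Sketch: cached plan and aggregate stats for a specific heavy query,
-- for before/after comparison (capture the XML plan each time).
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_us,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       st.text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%SomeHeavyQuery%'              -- hypothetical marker
ORDER BY qs.total_elapsed_time DESC;
```

Note that an index rebuild updates statistics and typically invalidates cached plans for the affected tables, so a plan change after the rebuild is entirely possible even with identical query text.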

And finally: unless you are low on space, why are you spending precious DBA time on index rebuilds when you run on SSDs?

Highly fragmented, the application performs well. After rebuilding indexes, the application performs badly.

A probable cause is that the changed (presumably reduced) size of the structures after rebuilding means the optimizer is choosing a different plan. One of the primary inputs to the optimizer’s costing model is the number of pages each plan operator is expected to process.

Changing the number of pages in a structure can easily make the difference between the optimizer choosing a hash or merge join, versus a nested loops (with or without a spool) strategy. That’s just one example; costing differences can affect all aspects of plan choice, including the decision to use parallelism or not.

To produce the sort of performance differences you are observing (and given the lack of physical I/O), this seems like the most likely explanation – assuming you can eliminate ‘bad’ parameter sniffing.
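A quick way to rule parameter sniffing in or out is to force a fresh compile for the current parameter values – a sketch (procedure and parameter names are hypothetical):

```sql
-- Sketch: bypass the cached, possibly sniffed plan for one execution.
EXEC dbo.SomeHeavyProcedure @CustomerId = 42 WITH RECOMPILE;

-- Or, for an ad hoc statement, recompile at the statement level:
DECLARE @CustomerId int = 42;
SELECT COUNT_BIG(*)
FROM dbo.SomeLargeTable
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);
```

If the recompiled execution is fast while the cached plan is slow, the problem is sniffing rather than the rebuild itself; if both are equally slow, the costing-change explanation above becomes more likely.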

All that said, without details (ideally plans and detailed metrics for a single instance of the problem before and after rebuilding) the current question is arguably very opinion-based, which would make it off-topic for this site.
