New SQL Server with slow Index Spool (Eager Spool) and old server used index

Question :

Yesterday I posted a question about our upgrade from SQL Server 2008 to SQL Server 2017 and the performance problems we are experiencing.

I got the advice to try updating statistics, which worked for one database but not the others. Should I raise the compatibility level from 2008 to 2017? At the moment it has been left at 100 (SQL Server 2008).

Old server was a physical server, the new server is a VM (Hyper-V).

After the upgrade the execution plan looks different from the old one: it uses an Index Spool (Eager Spool) instead of the index. As a result, execution time went from 3 seconds on the old server to 1 minute 50 seconds on the new one.

The only difference I can find between this database and a database that runs the same query without the Eager Spool is row count: the slow database's table has 13,000,000 rows, versus 70,000 rows in the database that gets the correct execution plan.

The plans below are anonymized but I hope there’s enough information.

Here is the plan with the Eager Spool vs. the plan from the other database that uses the index.

I have tried

  • Updating statistics.
  • Changing cost threshold for parallelism and max degree of parallelism accordingly.
  • Double-checking that the indexes are the same as before.
  • Looking at READ_COMMITTED_SNAPSHOT.
  • Checking the NUMA configuration.
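For reference, updating statistics after an upgrade can be done per table or across the whole database; a minimal sketch (the table name `dbo.BigTable` is a placeholder, and `sp_MSforeachtable` is an undocumented but widely used procedure):

```sql
-- Update statistics for a single table with a full scan; slower than a
-- sampled update, but gives the optimizer the most accurate histograms.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- Or sweep every table in the current database (undocumented procedure).
EXEC sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN;';
```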

Another problem is that these wait types are high:

LCK_M_IS, 
CXPACKET, 
LCK_M_X, 
ASYNC_IO_COMPLETION, 
LCK_M_IX, 
PAGELATCH_EX, 
BACKUPBUFFER, 
TRACEWRITE.
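Those cumulative numbers come from `sys.dm_os_wait_stats`. A standard pattern for listing the top waits since the last restart (not from the original post, and the excluded wait types are just a small example set of benign waits):

```sql
-- Top waits since instance start, excluding a few benign background waits.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0        AS wait_time_s,
    signal_wait_time_ms / 1000.0 AS signal_wait_s,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```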

I am starting to run out of ideas.


Max memory is set to leave 4 GB for the OS. I noticed yesterday that the power plan on the host machine was set to Balanced; I changed it to High Performance, but the problem remains.

On the advice to update statistics: it seems to have worked on one of the databases, but I still have the problem on all the others.

The old server was a physical server with 8 cores and 32 GB RAM. The new server is a VM running on Hyper-V with 4 cores and 64 GB RAM.

Answer :

It’s possible having double the amount of RAM in the new server is causing the query optimizer to choose a plan that performs worse than expected.

You could try using DBCC OPTIMIZER_WHATIF1 to see what happens if you have 8 cores and only 32 GB of RAM, like the old system. I wrote a blog post showing how that works on sqlserverscience.com.

To see the effect on plan choice with 32GB of RAM, you could use this command:

DBCC OPTIMIZER_WHATIF ('MemoryMBs', 32768);

You’ll need to evict the plan for the query from the plan cache, or force a new plan to be compiled using WITH RECOMPILE or OPTION (RECOMPILE) to see if that causes SQL Server to choose the “better” plan.
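Putting those steps together, a test session might look like the sketch below. The query is a placeholder for your real slow query, and (again) this is an unsupported command for test environments only; as far as I know the override only affects plans compiled while it is in effect:

```sql
-- Unsupported; test environments only.
DBCC OPTIMIZER_WHATIF ('MemoryMBs', 32768);  -- pretend 32 GB, like the old server
DBCC OPTIMIZER_WHATIF ('CPUs', 8);           -- pretend 8 cores, like the old server

-- Force a fresh compile so the simulated hardware is taken into account.
-- (Placeholder query; substitute the real statement.)
SELECT SomeColumn
FROM dbo.BigTable
WHERE OtherColumn = @p1
OPTION (RECOMPILE);

-- Reset the overrides to the actual hardware when done.
DBCC OPTIMIZER_WHATIF ('MemoryMBs', 0);
DBCC OPTIMIZER_WHATIF ('CPUs', 0);
```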

If you see a better plan after using dbcc optimizer_whatif, you might consider either re-factoring the query to allow the optimizer to make a better choice, or you could create a plan guide.
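A plan guide attaches hints to a query without changing application code. A sketch of the shape of the call (the guide name, statement text, and index name are all hypothetical, and `@stmt` must match the text the application actually submits, character for character):

```sql
EXEC sp_create_plan_guide
    @name            = N'Guide_ForceIndex_BigTable',  -- hypothetical name
    @stmt            = N'SELECT SomeColumn FROM dbo.BigTable WHERE OtherColumn = @p1;',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@p1 int',
    -- Force a specific (hypothetical) index via a table hint.
    @hints           = N'OPTION (TABLE HINT (dbo.BigTable, INDEX (IX_BigTable_OtherColumn)))';
```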

You may be interested in this question, and the answers about the differences in execution plans on two servers with different resources.

FYI for future visitors: the question states that most of the databases have been kept at compatibility level 100; this rules out an array of potential issues related to cardinality estimator changes in SQL Server 2014+.


1 – Be aware that this is an unsupported DBCC command; it should only be used for testing purposes.

When upgrading SQL Server there are a number of tasks that should be completed, to ensure a “smooth”(er) transition.

A list that I use can be found here. This list includes tasks like running CHECKDB, checking compatibility levels, refreshing views, and updating statistics.
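Several of those tasks can be run directly from T-SQL; a minimal sketch (the database and view names are placeholders):

```sql
-- Verify physical integrity after the migration.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;

-- Refresh view metadata in case referenced objects changed (run per view).
EXEC sp_refreshview N'dbo.SomeView';

-- Update statistics across the current database (sampled; use
-- UPDATE STATISTICS ... WITH FULLSCAN per table where more accuracy is needed).
EXEC sp_updatestats;
```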

In your case, stale statistics appear to have been the problem. With stale statistics, the cardinality estimator cannot make accurate estimates of data distribution and rows returned, and so produces a different, potentially sub-optimal, execution plan.

Edit based on updated question

You also ask about updating the Compatibility Level. The links above explain how, and why, the Compatibility Level should or could be adjusted. One of the main reasons for changing it is that SQL Server 2014 (compatibility level 120) introduced a major upgrade to the Cardinality Estimator. The best way to determine whether this will make a difference for you is to test it, preferably before your upgrade goes live. This may help you generate similar execution plans post-upgrade.
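Checking and changing the compatibility level is a single statement per database (`YourDatabase` is a placeholder; test first, since raising the level switches on the new cardinality estimator):

```sql
-- Current compatibility level of every database on the instance.
SELECT name, compatibility_level FROM sys.databases;

-- Raise one database to the SQL Server 2017 level (140).
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 140;
```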
