MS SQL Server 2019 is slower than SQL Server 2008


Question :

Since we updated from SQL Server 2008 to 2019, the queries have become much slower.

The following query ran for ten minutes on version 2008. On version 2019 it runs for over 30 minutes.

Until yesterday, we always limited the WHERE clause: where date > '20180101'

To make the script at least a bit faster, I narrowed it down to data from January 2019 onward: where date > '20190101'

Now it’s even slower than before. No joke! Even though one year less has to be evaluated, the running time has increased by a few minutes.

I have checked all the indexes in detail and I think we have created the correct indexes.
But I don’t know if this is really an index problem, because in version 2008 we had the same indexes.
In version 2019 I even deleted and re-created a few important indexes. But that didn’t help at all.

MS-SQL Server 2019 got very slow and I don’t know why.

Answer :

The database compatibility level controls the behavior of the query optimizer.

So the recommended upgrade process is to keep the upgraded database on the lower database compatibility level initially, while you use Query Store to capture a performance baseline for the database.

See: Change the Database Compatibility Level and use the Query Store
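As a sketch, that process translates to roughly the following (the database name YourDatabase is a placeholder):

```sql
-- Check the current compatibility level of the database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDatabase';              -- placeholder database name

-- Keep the upgraded database on the old (2008) level initially
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;

-- Turn on Query Store to capture a performance baseline
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
```

Once the baseline is captured, raise the compatibility level to 150 and use the Query Store regression reports to find (and, if needed, force plans for) the queries that got slower.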

From the query plan you provided, it runs for about a minute:

[screenshot: execution plan runtime]

But it compiles for 33 minutes (1,992,254 ms ≈ 1,992 s; 1,992 / 60 ≈ 33 minutes):

[screenshot: plan compile time]

So basically you already had a problem on 2008; it’s just exaggerated on 2019. I have never experienced such an extreme case, but I would try to simplify the views you are using and drop unused indexes on the underlying tables. Or split the query: first select the view into a temp table, then use the temp table in your original query.
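A minimal sketch of that split (the view name dbo.v_Sales and the aggregate columns are hypothetical; only the date filter comes from the question):

```sql
-- Materialize the view's filtered rows once, so the optimizer does not
-- have to expand the whole view definition inside the larger query
SELECT *
INTO   #sales                      -- local temp table
FROM   dbo.v_Sales                 -- hypothetical view name
WHERE  date > '20190101';

-- The original query then reads the small, already-filtered temp table
SELECT s.CD_Kunde,
       SUM(s.amount) AS total     -- hypothetical aggregate
FROM   #sales AS s
GROUP BY s.CD_Kunde;

DROP TABLE #sales;
```

Materializing first also gives the temp table real statistics, which can produce a much better plan for the second step.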

Looking at your query execution plan, it doesn’t look like your database contains any indexes, or they were disabled during/by the migration process.

Based on the statement in your question:

With this join, I create an index on the F_Skonto table with the three columns gueltig_ab, gueltig_bis and CD_Kunde. Is that correct?

…I’m not sure if you are aware of what an index is. The answer to your question is:

No, a JOIN does not create an index. A JOIN can cause an existing index to be used to speed up the query.

Your statement may have been based on your knowledge of the English language and thus be misleading. I’ll assume you know that an index is basically a table of contents that the SQL Server Database Engine can use to speed up queries by knowing where to look for the data.

In that case we may be looking at the possibility that the indexes are all disabled. If an index is disabled, it can’t be used to speed up queries.

Execute the following query to see if you have indexes which have been disabled:

SELECT s.name        AS Schema_name,
       o.name        AS Table_Name,
       i.name        AS Index_Name,
       i.is_disabled AS Index_Disabled
FROM   sys.objects      AS o
       JOIN sys.schemas AS s
            ON  s.schema_id = o.schema_id
       JOIN sys.indexes AS i
            ON  i.object_id = o.object_id
WHERE  s.name != 'sys'           -- Exclude sys items
       AND i.is_disabled = 1     -- Show only disabled indexes
ORDER BY
       1, 2, 3


This will produce a similar output to this:

+----------------+------------+--------------------+----------------+
|  Schema_name   | Table_Name |     Index_Name     | Index_Disabled |
+----------------+------------+--------------------+----------------+
| HumanResources | Department | AK_Department_Name |              1 |
+----------------+------------+--------------------+----------------+

If any indexes have been disabled in your tables, then you will receive a list. With this list you can then go ahead and rebuild those indexes, which enables them again and might result in a faster SELECT part of your INSERT statement.
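Rebuilding a disabled index re-enables it. Using the example index from the output above, that looks like this:

```sql
-- Rebuilding a disabled index brings it back online
ALTER INDEX AK_Department_Name
ON HumanResources.Department
REBUILD;

-- Or rebuild every index on the table in one statement
ALTER INDEX ALL
ON HumanResources.Department
REBUILD;
```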

If no disabled indexes are returned by the above statement, then we will check whether you have any indexes at all by running the same statement without the AND i.is_disabled = 1 part.

Like this:

SELECT s.name        AS Schema_name,
       o.name        AS Table_Name,
       i.name        AS Index_Name,
       i.is_disabled AS Index_Disabled
FROM   sys.objects      AS o
       JOIN sys.schemas AS s
            ON  s.schema_id = o.schema_id
       JOIN sys.indexes AS i
            ON  i.object_id = o.object_id
WHERE  s.name != 'sys'           -- Exclude sys items
ORDER BY
       1, 2, 3


In my AdventureWorks2012 database provided by Microsoft, this will produce a list which looks like this:

+----------------+--------------------------+------------------------------------------------+----------------+
|  Schema_name   |        Table_Name        |                   Index_Name                   | Index_Disabled |
+----------------+--------------------------+------------------------------------------------+----------------+
| dbo            | AWBuildVersion           | PK_AWBuildVersion_SystemInformationID          |              0 |
| dbo            | DatabaseLog              | NULL                                           |              0 |
| dbo            | DatabaseLog              | PK_DatabaseLog_DatabaseLogID                   |              0 |
| dbo            | ErrorLog                 | PK_ErrorLog_ErrorLogID                         |              0 |
| dbo            | ufnGetContactInformation | NULL                                           |              0 |
| HumanResources | Department               | AK_Department_Name                             |              1 |
| HumanResources | Department               | PK_Department_DepartmentID                     |              0 |
| HumanResources | Employee                 | AK_Employee_LoginID                            |              0 |
| HumanResources | Employee                 | AK_Employee_NationalIDNumber                   |              0 |
| HumanResources | Employee                 | AK_Employee_rowguid                            |              0 |
| HumanResources | Employee                 | IX_Employee_OrganizationLevel_OrganizationNode |              0 |
| HumanResources | Employee                 | IX_Employee_OrganizationNode                   |              0 |
| HumanResources | Employee                 | PK_Employee_BusinessEntityID                   |              0 |

If the above statement does not provide any output, then your tables don’t have any indexes. We will have to start creating some…
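A starting point, based on the join columns mentioned in the question (assuming F_Skonto lives in the dbo schema; the index name is made up):

```sql
-- Composite index covering the three columns used in the join
CREATE NONCLUSTERED INDEX IX_F_Skonto_gueltig_Kunde
ON dbo.F_Skonto (gueltig_ab, gueltig_bis, CD_Kunde);
```

Column order matters: put the columns with the most selective equality predicates first, and range-filtered columns (like validity dates) after them.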

While it’s true there are no indexes on the underlying tables, the query plan shows that the vast majority of resources are being used for the insert into the destination table and its 3 indexes.

One way to improve the performance of large/bulk inserts is to disable the indexes prior to the insert and rebuild them afterwards.
If you have a unique clustered primary key (or unique indexes) and you’re not confident the data being inserted will be valid, you could keep them as non-clustered constraints during the INSERT to ensure validation rules are maintained.

Following the INSERT, drop these temporary versions (if used) and rebuild the original indexes.
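That pattern looks roughly like this (table, column, and index names are hypothetical placeholders):

```sql
-- Disable the non-clustered indexes on the destination table before the load
ALTER INDEX IX_Dest_Date  ON dbo.DestTable DISABLE;   -- hypothetical index
ALTER INDEX IX_Dest_Kunde ON dbo.DestTable DISABLE;   -- hypothetical index
-- Do NOT disable the clustered index: that makes the whole table inaccessible

INSERT INTO dbo.DestTable (CD_Kunde, date, amount)
SELECT CD_Kunde, date, amount
FROM   dbo.SourceTable;                               -- hypothetical source

-- Rebuild afterwards; REBUILD also re-enables the disabled indexes
ALTER INDEX ALL ON dbo.DestTable REBUILD;
```

The insert then only has to maintain the clustered index, and the non-clustered indexes are built once, in bulk, at the end.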

Try running the query with the legacy cardinality estimation turned ON:

[screenshot: database scoped configurations]

As a second test, try turning the query optimizer fixes ON.
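Both settings can also be changed with T-SQL at the database level (requires ALTER permission on the database):

```sql
-- Use the pre-2014 cardinality estimator for this database
ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Enable query optimizer hotfixes (equivalent to trace flag 4199)
ALTER DATABASE SCOPED CONFIGURATION
SET QUERY_OPTIMIZER_HOTFIXES = ON;
```

To test on a single query without changing the whole database, append OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION')) to the statement instead.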

As per the execution plan you shared on Brent Ozar’s Paste The Plan, I can only see table scans happening. If you say your indexes are in place, I doubt they are enabled or being used. You should start your investigation there.
