Question:
Just seeking expert/practical advice from a DBA point of view. One of our application databases, running on SQL Server 2014 after migration, was left at the old database compatibility level, i.e. 100 (SQL Server 2008).
From the DEV point of view all the testing has been done; they don't see much difference and want to move to prod based on their testing.
In our testing, for certain processes where we saw slowness, such as in stored procedures, we found the part of the statement that was slow and added a QUERYTRACEON hint, something like below, while keeping compatibility at 120, which helps keep performance stable:
SELECT [AddressID],
[AddressLine1],
[AddressLine2]
FROM Person.[Address]
WHERE [StateProvinceID] = 9 AND
[City] = 'Burbank'
OPTION (QUERYTRACEON 9481);
GO
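For reference, this is a quick way to confirm which compatibility level each database currently has, and to change it (the database name here is a placeholder, not from the original question):

```sql
-- List each database's compatibility level (100 = SQL Server 2008, 120 = SQL Server 2014)
SELECT name, compatibility_level
FROM sys.databases;
GO
-- Raise a database to the SQL Server 2014 level ([YourAppDb] is a placeholder name)
ALTER DATABASE [YourAppDb] SET COMPATIBILITY_LEVEL = 120;
GO
```

Raising the level switches eligible queries to the new cardinality estimator, which is exactly what the per-query QUERYTRACEON 9481 hint above then opts back out of.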
UPDATE - editing the question based on more findings:
We actually found things getting worse for a table that calls a scalar function within a computed column. Below is how that column is defined:
CATCH_WAY AS ([dbo].[fn_functionf1]([Col1])) PERSISTED NOT NULL
and the part of the query where it goes weird looks somewhat like this:
DELETE t2
OUTPUT deleted.col1,
       deleted.col2,
       deleted.col3
INTO #temp1
FROM #temp2 t2
INNER JOIN dbo.table1 tb1 ON tb1.CATCH_WAY = [dbo].[fn_functionf1](t2.[Col1])
AND t2.[col2] = tb1.[col2]
AND t2.[col3] = tb1.[col3]
AND ISNULL(t2.[col4], '') = ISNULL(tb1.[col4], '')
I know the function is being called and is slow, but here is the problem: at the current compatibility level (100) it runs OK-ish slow, yet when changed to 120 it gets about 100x slower; kept at 100 it is about 100x faster.
What is happening?
Answer:
This performance regression usually happens because you already had a badly written query that was considered "fast enough" under the old compatibility level, but is very slow under the new one because of the weird/bad way it was written.
I'd get rid of the two function calls in your DELETE statement's ON clause by pre-computing them and storing the results in your temp tables before the DELETE happens.
You could also rewrite the ISNULL parts, since ISNULL is also a function.
--new temp table to store table1
SELECT col2,
       col3,
       col4,
       CATCH_WAY --the scalar function result is already persisted, so it is simply read and saved into the temp table
INTO #tempTable1
FROM dbo.table1;

--pre-compute the scalar function once per row of #temp2, so the join no longer has to call it
ALTER TABLE #temp2 ADD CATCH_WAY_PRE varchar(100) NULL; --use the same data type as CATCH_WAY
UPDATE #temp2 SET CATCH_WAY_PRE = [dbo].[fn_functionf1]([Col1]);

DELETE t2
OUTPUT deleted.col1,
       deleted.col2,
       deleted.col3
INTO #temp1
FROM #temp2 t2
JOIN #tempTable1 tb1
  ON tb1.CATCH_WAY = t2.CATCH_WAY_PRE --no function call left in the join
 AND t2.[col2] = tb1.[col2]
 AND t2.[col3] = tb1.[col3]
 AND (t2.[col4] = tb1.[col4] OR (t2.[col4] IS NULL AND tb1.[col4] IS NULL)) --the ISNULL version "hides" the columns from the SQL optimiser
I'll try my best to explain what you can do in this situation based on the information in your question, and I hope it helps.
When moving to the new CE, you can expect that some query execution plans will remain the same and some will change. Neither condition inherently suggests an issue.
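One way to tell which estimator a given plan actually used is the CardinalityEstimationModelVersion attribute in the showplan XML (70 = legacy CE, 120 = new CE). A rough sketch against the plan cache (the string-matching approach here is a quick-and-dirty assumption, not the only way to read the attribute):

```sql
-- Find cached plans that were compiled with the legacy cardinality estimator (model version 70)
SELECT cp.usecounts,
       st.text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE '%CardinalityEstimationModelVersion="70"%';
```

That lets you confirm, query by query, whether a plan change after the compatibility-level bump really came from the new CE.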
Based on the Microsoft documentation and the white paper written by Joseph Sack (SQLskills.com), I think you have to pay attention to the items listed on page 18 of the white paper.
To review the document, please download the Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator Word document.
What Actions can You Take if You See a Plan Regression?
Consider the following actions if you have encountered performance degradation directly caused by the new CE:
• Retain the new CE setting if specific queries still benefit, and “design around” performance issues using alternative methods. For example, for a relational data warehouse query with significant fact-table cardinality estimate skews, consider moving to a columnstore index solution. In this scenario, you retain the cardinality estimate skew. However, the compensating performance improvement of a columnstore index could remove the performance degradation from the legacy version.
• Retain the new CE, and use trace flag 9481 for those queries that had performance degradations directly caused by the new CE. This may be appropriate for tests where only a few queries in a workload had performance degradation caused by the new CE.
• Revert to an older database compatibility level, and use trace flag 2312 for queries that had performance improvements using the new CE. This may be appropriate for tests where only a few queries in the workload had improved performance.
• Use fundamental cardinality estimate skew troubleshooting methods. This option does not address the original root cause. However, using fundamental cardinality estimation troubleshooting methods might address an overall estimate skew problem and improve query performance.
• Revert to the legacy CE entirely. This might be appropriate if multiple, critical queries encounter performance regressions and you do not have time to test and redesign around the issues.
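The two trace-flag options above can be sketched like this (the query bodies are placeholders, not from the original question):

```sql
-- Option A: database at compatibility level 120 (new CE); opt individual regressed queries out
SELECT col1 FROM dbo.table1
OPTION (QUERYTRACEON 9481); -- legacy CE for this query only

-- Option B: database kept at an older compatibility level (legacy CE); opt individual queries in
SELECT col1 FROM dbo.table1
OPTION (QUERYTRACEON 2312); -- new CE for this query only
```

Note that QUERYTRACEON normally requires sysadmin rights, so in practice it is often applied on behalf of the application through a plan guide rather than embedded in application code.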
Conclusion
Based on these items, I think you have to change some parameters and queries, set some trace flags, and so on. I don't think there is a shortcut solution like setting one special parameter and suddenly everything is OK! I hope this helps.