Query tuning to meet timeout settings [closed]

Question :

We have a web app with a 30-second timeout on its search screens. I analyzed the original query and added a missing index for one of the query’s 9 joins. Other things, such as estimated versus actual row counts, look good, and there are no table or index scans. But the query still didn’t come in under 30 seconds, though it was close.

I rewrote the query by decomposing it into smaller steps using temp tables, with no more than 2 joins per step. It now screams in at 5 seconds. All good.

My specific question: is this a normal/common approach to solving this kind of problem? And why do I have to help out the optimizer in this way?

Answer :

Is this a normal/common approach to solving this kind of problem?

Yes, it is a common approach (though perhaps not common enough).

Why do I have to help out the optimizer in this way?

Primarily because the optimizer is cost-based, and it relies on cardinality estimates (the expected number of rows) as a main input to that cost calculation.

Assuming the right statistics are available and reasonably up to date, estimates on base tables may be quite accurate. Each estimate then becomes the input to the next operation (for example, a join), which produces a new cardinality estimate as its output, and so on up the plan. The nature of this process (and of the statistical inputs) means that the accuracy of the estimates tends to degrade as the derivations continue up the tree. Poor estimates lead to poor cost-based choices and to other side effects, such as an insufficient memory reservation for a hashing or sorting operation.
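One quick way to see where the estimates start to drift, assuming SQL Server here (every table and column name below is purely illustrative): capture the actual execution plan, or turn on SET STATISTICS PROFILE, and compare EstimateRows against Rows for each operator.

```sql
-- Illustrative sketch: SET STATISTICS PROFILE ON returns one row per plan
-- operator, including the EstimateRows and Rows columns, so large gaps
-- between estimated and actual counts stand out.
SET STATISTICS PROFILE ON;

SELECT o.OrderID, c.CustomerName, p.ProductName      -- hypothetical tables
FROM   dbo.Orders     AS o
JOIN   dbo.Customers  AS c ON c.CustomerID = o.CustomerID
JOIN   dbo.OrderLines AS l ON l.OrderID    = o.OrderID
JOIN   dbo.Products   AS p ON p.ProductID  = l.ProductID
WHERE  o.OrderDate >= '2024-01-01';

SET STATISTICS PROFILE OFF;
```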

Materializing intermediate results (using temporary tables, or another method) gives the optimizer a new starting point, perhaps with new statistics automatically built on the materialized data, and maybe even with helpful indexes. These considerations mean that simpler queries (using operations with accurate estimation logic) tend to produce higher-quality execution plans.
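As a rough, hedged sketch of the kind of rewrite described in the question (every table and column name here is invented for illustration), each step does at most two joins, materializes its result into a temp table, and can index it before the next step:

```sql
-- Step 1: filter the driving table first and materialize the small result.
SELECT o.OrderID, o.CustomerID, o.OrderDate
INTO   #FilteredOrders
FROM   dbo.Orders AS o
WHERE  o.OrderDate >= '2024-01-01';

-- Index the intermediate result; the temp table also gets its own
-- statistics, giving the optimizer a fresh, accurate starting point.
CREATE CLUSTERED INDEX cx_FilteredOrders ON #FilteredOrders (OrderID);

-- Step 2: no more than two joins against the materialized result.
SELECT f.OrderID, f.OrderDate, c.CustomerName, l.ProductID, l.Quantity
INTO   #OrderDetail
FROM   #FilteredOrders AS f
JOIN   dbo.Customers   AS c ON c.CustomerID = f.CustomerID
JOIN   dbo.OrderLines  AS l ON l.OrderID    = f.OrderID;

-- Step 3: final join and projection for the search screen.
SELECT d.OrderID, d.OrderDate, d.CustomerName, p.ProductName, d.Quantity
FROM   #OrderDetail AS d
JOIN   dbo.Products AS p ON p.ProductID = d.ProductID
ORDER  BY d.OrderDate DESC;
```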

It might also be a memory issue. By splitting the process up into smaller chunks, everything no longer has to happen simultaneously, i.e. memory used in earlier steps can be freed up again before the later steps run.

I’m not familiar enough with how other people solve these kinds of problems (for example, you could in theory force an execution plan, but I rarely do that), so I cannot say whether it’s normal. However, I usually tend towards splitting things up into separate steps inside a uSP and storing the sub-results in temp tables or table variables, especially when the joins produce relatively small chunks.
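For completeness, a minimal skeleton of that pattern inside a stored procedure (the procedure and table names are made up). One design note: temp tables get column statistics and can be indexed after creation, whereas table variables generally carry no statistics, so temp tables usually help the optimizer more once the sub-results stop being tiny.

```sql
CREATE OR ALTER PROCEDURE dbo.usp_SearchOrders   -- hypothetical procedure
    @FromDate date
AS
BEGIN
    SET NOCOUNT ON;

    -- Sub-result 1: a small filtered set in a temp table (gets statistics).
    SELECT o.OrderID, o.CustomerID
    INTO   #Recent
    FROM   dbo.Orders AS o
    WHERE  o.OrderDate >= @FromDate;

    -- Sub-result 2: join the small set onward, a couple of joins at a time.
    SELECT r.OrderID, c.CustomerName
    FROM   #Recent       AS r
    JOIN   dbo.Customers AS c ON c.CustomerID = r.CustomerID;
END;
```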
