Performance improvement UDB DB2 v10.5.0.5 [closed]


Question :

We are using DB2 UDB for our operational data store, which is accessed via a Java service to display information to the front end [24*7].

We are facing a lot of performance and deadlock issues.

1) Deadlocks are something I can understand. Our ETL jobs read data from the source table, write to the target, and mark the read rows as completed within Informatica mappings. As far as I know, DB2 locks only at the row level, but we have had instances where row-level locks escalated to a table-level lock. I cannot understand why it turns into a table-level lock and eventually ends in a deadlock, thereby failing the process. Do I need to decrease the commit interval to 1000 in order to reduce the number of locks?

2) We also found that the table's overflow value is high and the PCTFREE parameter was set to -1 [meaning it defaults to zero]. What is the optimal value PCTFREE can be set to? [We are planning a REORG, but it is not possible right now.]

3) Is there any other list of things/parameters that can be set in order to improve performance?

We are looking at application-level SQL optimization, but we need help from an admin's point of view.

Please advise.

Answer :

I’m in a bit of a hurry, but I’ll try to answer question 1 to some extent.

  1. There are two major parameters that affect locking:

    a) LOCKLIST – the number of 4K pages that DB2 can use to keep track of locks

    b) MAXLOCKS – the percentage of the LOCKLIST that each transaction is allowed to use

A lock uses a bit of memory from the LOCKLIST, and if either limit a) or b) is exceeded, DB2 tries to free up memory by replacing row locks with a single table lock. To see how this can cause a deadlock, consider:
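To check whether escalation is actually happening, you can look at the current settings and the escalation counters. A minimal sketch, assuming DB2 LUW 10.5 and a database named MYDB (the name is a placeholder):

```
-- Current lock memory settings, from the CLP:
-- db2 get db cfg for MYDB | grep -i lock

-- Escalation, timeout and deadlock counters since database activation:
SELECT DB_NAME, LOCK_ESCALS, LOCK_TIMEOUTS, DEADLOCKS
FROM SYSIBMADM.SNAPDB;
```

A non-zero and growing LOCK_ESCALS is a strong hint that the scenario described below is what you are seeing.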

Tx_a updates a row.
Tx_b updates a huge number of rows and exhausts the LOCKLIST. DB2 tries to escalate the locks to a table lock, but that is prevented by Tx_a's lock.
Tx_a tries to update one of the rows locked by Tx_b -> DEADLOCK

Check whether it actually is a deadlock; the error message for -911 says something like “deadlock or lock timeout”. Deadlock is rc=2, lock timeout is rc=68.
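You can look up the reason codes from the CLP, and if you need more detail, capture lock events with a locking event monitor. A sketch with placeholder names (LOCKEVMON, LOCKEVENTS):

```
-- Show the full message text and reason codes for -911:
-- db2 "? SQL0911"

-- Capture deadlock/timeout/escalation details for later analysis:
CREATE EVENT MONITOR LOCKEVMON FOR LOCKING
  WRITE TO UNFORMATTED EVENT TABLE (TABLE LOCKEVENTS)
  AUTOSTART;
SET EVENT MONITOR LOCKEVMON STATE 1;
```

The unformatted event table can then be examined to see exactly which statements and locks were involved in each deadlock.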

You can try to minimize this problem either by adding more resources (increase LOCKLIST, MAXLOCKS) or by reducing the number of locks that you acquire (commit more frequently). You may also want to look at the isolation level that your transactions use, and consider CURRENTLY COMMITTED semantics.
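A sketch of what those changes look like from the CLP. The numbers here are purely illustrative, not recommendations, and MYDB is a placeholder; changes to these parameters take effect on the next database activation:

```
-- Raise lock memory manually:
-- db2 update db cfg for MYDB using LOCKLIST 8192 MAXLOCKS 20

-- Or let the self-tuning memory manager size them:
-- db2 update db cfg for MYDB using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC

-- Let cursor-stability readers see the currently committed version
-- of a row instead of waiting on writers' locks:
-- db2 update db cfg for MYDB using CUR_COMMIT ON
```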

For your question 2, there's no such thing as an optimum value (otherwise I'm pretty sure that would be the default 🙂). It all depends on the workload.
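That said, a high overflow count usually means rows grew after insert (typically updates to VARCHAR columns) and no longer fit on their original page. If you do eventually get a reorg window, PCTFREE is set per table and only applies to pages that are (re)built afterwards. A sketch with a placeholder table name:

```
-- Leave 10% of each data page free for future row growth
-- (the value is illustrative; it depends on your update pattern):
ALTER TABLE myschema.mytable PCTFREE 10;

-- The new value takes effect when pages are rebuilt:
-- db2 reorg table myschema.mytable
```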

Question 3's answer is pretty similar to answer 2: it depends. It is way beyond a Stack Exchange post to walk through all the parameters that can affect performance, but I would start by looking at the statements that read the most rows in your system and see whether they can be improved. In many situations you can find indexes that improve things a lot.
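To find those statements, the package cache is a good starting point. A sketch using the MON_GET_PKG_CACHE_STMT table function (available in 10.5); statements with a huge ROWS_READ per execution are the usual candidates for new indexes:

```
-- Top 10 statements by rows read across all members:
SELECT SUBSTR(STMT_TEXT, 1, 80) AS STMT,
       ROWS_READ,
       NUM_EXECUTIONS
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
ORDER BY ROWS_READ DESC
FETCH FIRST 10 ROWS ONLY;
```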
