Users complain that the system runs slowly while mysqldump is in progress


Question :

The MySQL database (ibdata1) is 73 GB in size and runs on a dedicated database server on Windows Server 2008, using InnoDB tables.
We are running the backup using mysqldump:
mysqldump --skip-opt --quick --single-transaction --create-options --extended-insert --disable-keys --add-drop-table --complete-insert --set-charset --compress --log-error=Proddb0635.err -u root -pjohndoe Proddb > \\devNas\devNas\sqlbackup\LIVEdb\Proddb0635.sql

The backup file Proddb0635.sql is stored on a separate server from the database server.
RAM is 12 GB.
InnoDB Buffer Pool size is 6 GB.
Additional memory pool is 32 MB.
Query Cache size is 2 GB.
Net buffer length is 16 MB.
Max packet size is 1 GB.

The MySQL version is 5.0.67.

When the backup is not running, users are happy with the performance.

When the backup is running, the InnoDB Buffer Pool hit rate is high, close to 100%.
There are no pending reads or pending writes; innodb wait free is 0.
CPU usage is not high: min 9% to max 15%.
The Query Cache hit rate is low, about 40%, with or without mysqldump running. Currently Windows Task Manager shows 10 GB of RAM in use. Should I increase the Query Cache size with only 2 GB of RAM available?
mysqld-nt is taking 9.2 GB of RAM and mysqldump is taking 5 MB of RAM.
Also, I noted that the size of the dump file is the same with or without the --compress option.

Should I decrease the InnoDB Buffer Pool size?

Thanks

Answer :

There is a known issue in Windows where pushing a large file to another server causes memory to be allocated to the system cache instead of to user processes. You can look in the Physical Memory (MB) section of Task Manager to see how much memory is allocated to the system cache.

This can be solved by backing up to a local disk, then having the remote machine pull that file.
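For example, here is a sketch of what that two-step approach might look like; the local path D:\backup and the machine name dbserver are assumptions, not part of your current setup:

REM Step 1, on the database server: dump to a local disk
mysqldump --quick --single-transaction -u root -pjohndoe Proddb > D:\backup\Proddb0635.sql
REM Step 2, run from the remote side: pull the finished file across the network
copy \\dbserver\backup\Proddb0635.sql E:\sqlbackup\LIVEdb\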

Here are some thoughts I have about improving mysqldump performance, given your circumstances. Here’s your command:

mysqldump --skip-opt --quick --single-transaction --create-options
--extended-insert --disable-keys --add-drop-table --complete-insert
--set-charset --compress --log-error=Proddb0635.err -u root -pjohndoe
Proddb > \\devNas\devNas\sqlbackup\LIVEdb\Proddb0635.sql

The first thing I notice is that you're redirecting output to a remote filesystem. The name 'devNas' suggests this is Network Attached Storage. I'm a fan of NAS for backups, but it should be connected on a separate physical NIC from the production traffic. You may not be saturating the bandwidth, but the two still compete. This is compounded by the --quick flag, which makes mysqldump retrieve and write rows one at a time instead of buffering the whole result set in memory.

The next thing I see is that you've invoked --compress. It looks like you're running mysqldump locally, since you didn't use the -h switch. This may burn local CPU that's unnecessary in this context. Is --compress necessary at all? It only compresses the data between the mysqldump client and the mysql server, not the file contents, which is why your dump file is the same size with or without it.

Next I see you're using the --single-transaction flag. This is going to cost some extra CPU, since the consistent-read check applies to every SELECT that mysqldump issues.
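For reference, --single-transaction makes mysqldump open a consistent snapshot before dumping; it is roughly equivalent to mysqldump issuing these statements first:

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;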

This has nothing to do with performance, but you're using --disable-keys, which only works on MyISAM tables (see the manual).

You may want to experiment with running mysqldump remotely from an offline host and moving the dump file to the NAS after completion to take as much of this operation out of band as possible.
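Putting those points together, a trimmed command might look something like this; treat it as a sketch (the local path D:\backup is an assumption) and test it against your own requirements:

mysqldump --skip-opt --quick --single-transaction --create-options
--extended-insert --add-drop-table --set-charset
--log-error=Proddb0635.err -u root -pjohndoe Proddb > D:\backup\Proddb0635.sql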

OBSERVATION #1

Here is something to keep in mind when performing a mysqldump against InnoDB.

Whatever dirty pages exist in the InnoDB Buffer Pool must be flushed to disk first. A mysqldump will trigger the flushing of an InnoDB table that still has dirty pages in it.

There is a server option called innodb_max_dirty_pages_pct. The default value is 75 in MySQL 5.5 and 90 in versions of MySQL prior to 5.5. In a production environment it is OK to leave this number at the default value.

To see if you have a lot of dirty pages in the InnoDB Buffer Pool, run this:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

BTW, a page is 16K, as shown by this:

SHOW GLOBAL STATUS LIKE 'Innodb_page_size';
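As a quick worked example (the count below is made up for illustration): if Innodb_buffer_pool_pages_dirty returned 50000, that would be 50000 pages x 16 KB = 800000 KB, i.e. roughly 780 MB of dirty data that must be flushed before the dump can proceed.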

When it comes to InnoDB and mysqldump, you can lower this number under two circumstances.

CIRCUMSTANCE 1 : Set it permanently to 0

Just add this to my.ini:

[mysqld]
innodb_max_dirty_pages_pct=0

This will keep the InnoDB Buffer Pool lean and mean. The step of flushing the InnoDB table being dumped will be quick, because as few dirty pages as possible (perhaps none) will need to be flushed before mysqldump operates on it.

The only drawback is that if you mysqldump from a highly trafficked DB, there may be a small increase in write I/O because dirty pages are flushed out more frequently. You can determine whether this is the case, without restarting mysql, by running this:

SET GLOBAL innodb_max_dirty_pages_pct = 0;

Leave the setting in place for 12-24 hours; if the write performance is acceptable, you are good to go. If not, set it back with:

SET GLOBAL innodb_max_dirty_pages_pct = 90;
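To measure the change in write I/O during that trial window, you could sample these counters before and after (both are standard InnoDB status counters; the sampling schedule is up to you):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';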

CIRCUMSTANCE 2 : Set it to 0 about 1 hour before mysqldump

SET GLOBAL innodb_max_dirty_pages_pct = 0;

Run the mysqldump

SET GLOBAL innodb_max_dirty_pages_pct = 90;
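If you script the backup, the whole sequence can be driven from a Windows batch file; a minimal sketch, assuming the mysql client is on the PATH, the same credentials as your command, and a hypothetical local path D:\backup:

mysql -u root -pjohndoe -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
REM ... wait about an hour for the dirty pages to drain, then:
mysqldump --quick --single-transaction -u root -pjohndoe Proddb > D:\backup\Proddb0635.sql
mysql -u root -pjohndoe -e "SET GLOBAL innodb_max_dirty_pages_pct = 90;"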

OBSERVATION #2

You have --complete-insert as a mysqldump option. This embeds the column names in every INSERT statement before the VALUES clause. Even with --extended-insert, every batch of rows carries the column names in the dump output. You can reduce the number of bytes written to the dump by removing --complete-insert.
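To illustrate the difference (the table and columns here are hypothetical):

-- With --complete-insert:
INSERT INTO orders (id, customer_id, total) VALUES (1,42,9.99),(2,43,19.50);
-- Without it:
INSERT INTO orders VALUES (1,42,9.99),(2,43,19.50);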

RECOMMENDATION

If you have another Windows Server that can be set up as a Slave, do the mysqldumps from that slave rather than from the production machine.
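In that setup the dump runs against the slave while production stays untouched; for example (dbslave01 and D:\backup are hypothetical names):

mysqldump -h dbslave01 --quick --single-transaction -u root -pjohndoe Proddb > D:\backup\Proddb0635.sql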
