Importing a large mysql dump while replication is running

Question:

We have a simple master/slave MySQL replication setup running between two CentOS servers.

The master has multiple databases, e.g.:

  • Database1
  • Database2
  • Database3

The issue: we have a mysqldump file of a new 60GB database (Database4).

What's the best way to import Database4 without breaking replication?

I was thinking we could stop replication, import the dump onto both the master and the slave, and then restart replication, but I was hoping there is an alternative that would minimize downtime.
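For reference, the stop-and-import approach described above would look roughly like this (credentials and file names are placeholders, and this is a sketch only; note the master-side import would need binlogging disabled for that session, otherwise the slave would replay the whole 60GB again when replication restarts):

```shell
# On the SLAVE: pause replication for the duration of the import.
mysql -uuser -p -e "STOP SLAVE"

# On the MASTER: import with binlogging disabled for this session,
# so the slave does not replay the import again after restart.
mysql --init-command="SET SESSION sql_log_bin=0;" -uuser -p -e "CREATE DATABASE Database4"
mysql --init-command="SET SESSION sql_log_bin=0;" -uuser -p Database4 < Database4_dump.sql

# On the SLAVE: import the same dump locally, then resume replication.
mysql -uuser -p -e "CREATE DATABASE Database4"
mysql -uuser -p Database4 < Database4_dump.sql
mysql -uuser -p -e "START SLAVE"
```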

Answer:

I don't think there needs to be any downtime on the Master. See how this feels.

1. Configure it so that you have Master-Master. That is, turn on the binlog and a few flags on the Slave so that it also acts like a Master, plus run CHANGE MASTER TO on the Master so that it is also a slave. (No traffic will flow from the Slave to the Master yet.)

2. Load the new database on the Slave. It will replicate to the Master (and clog up replication in that direction while it does).
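A rough sketch of those two steps (hostnames, credentials, and binlog coordinates are placeholders, and the exact flags depend on your MySQL version):

```shell
# Step 1a. In the SLAVE's my.cnf (then restart mysqld):
#   [mysqld]
#   log_bin           = mysql-bin   # the slave now writes a binlog
#   log_slave_updates = 1           # write replicated events to that binlog too
#   server_id         = 2           # must differ from the master's

# Step 1b. On the MASTER: point it at the Slave so it is also a slave.
mysql -uuser -p -e "CHANGE MASTER TO
    MASTER_HOST='slave_host', MASTER_USER='repl', MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
  START SLAVE;"

# Step 2. On the SLAVE: load the dump normally (binlogging ON),
# so the import replicates up to the Master.
mysql -uuser -p -e "CREATE DATABASE Database4"
mysql -uuser -p Database4 < Database4_dump.sql
```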

I think that is all. You could turn off M-M afterwards and go back to plain M-S, but I prefer keeping "M-M, but writing only to one M".

(Caveat: I have not tried anything close to this.)

In my view your thoughts are in the right direction, @user125340. I don't see the point of loading a new 60GB database onto the master only to have it replicate, generating huge binlogs, network traffic, etc.

You could do something along the lines of the following (Slave first):

On the SLAVE:

  • mysql --init-command="SET SESSION sql_log_bin=0;" -uuser -p -BNe "CREATE DATABASE db_name"
  • mysql --init-command="SET SESSION sql_log_bin=0;" -uuser -p db_name < dump_file.sql

On the MASTER:

  • Repeat the steps above

This way there is no downtime, minimal impact on replication, and no unnecessary binlogs or network traffic.
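One way to sanity-check the --init-command trick before running the big import (same placeholder user and credentials as above):

```shell
# Should print 0, confirming statements in this session are not binlogged
# and therefore will not be replicated.
mysql --init-command="SET SESSION sql_log_bin=0;" -uuser -p -BNe "SELECT @@sql_log_bin"
```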
