Migrating a 300 GB MySQL database from one server to another cleanly with minimum downtime


Question :

I have a 300 GB MySQL database which I want to migrate to another server in order to set up master-master replication between the two, and my main aim is to achieve this with the least possible downtime.

My database has only one table, around 30 GB, where inserts are happening round the clock. All the other tables are history tables (static).

1) What will be the best way to go forward in this case?

  • Taking a mysqldump of the whole database, transferring the dump to the other server, and importing it there will do the job I want to achieve, but will take a lot of downtime (maybe more than 14 hrs), which is not permissible.

  • Or can I take individual table dumps of the static tables without stopping MySQL and import them to the new server, and after all that is over, take downtime for the single active table so that no new inserts happen and both databases end up in sync? First of all, is it possible to do such a thing? (A rough sketch of what I mean follows this list.) And if it is possible, what issues might it present while setting up master-master replication?

  • Or is there any other method to do this with even less downtime?
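For illustration, a minimal sketch of what the per-table approach in the second bullet could look like; the database name, table names, and credentials are placeholders, and the empty mydb schema is assumed to already exist on the target:

# Dump each static (history) table while MySQL keeps running on the source.
# Table names below are placeholders for the real history tables.
for TBL in history_2021 history_2022 history_2023; do
    mysqldump -uroot -pwhatever --single-transaction mydb ${TBL} | mysql -h10.20.30.50 -uroot -pwhatever mydb
done

# Later, during a short downtime window with inserts stopped,
# move only the single active table the same way:
mysqldump -uroot -pwhatever --single-transaction mydb active_table | mysql -h10.20.30.50 -uroot -pwhatever mydb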

Answer :

This can be done with mysqldump, but with a bit of risk.

For the sake of this example, suppose you have the following:

  • The database name is mydb
  • The source DB server’s private IP is 10.20.30.40
  • The target DB server’s private IP is 10.20.30.50
  • User is root on both source and target DB Servers
  • Password is whatever on both source and target DB Servers
  • Binary Logs on both servers are named mysql-bin

Here are your steps

STEP 01 : Create Replication User

On the Live Master, run the following

CREATE USER repluser@'%' IDENTIFIED BY 'replpass';
GRANT REPLICATION CLIENT,REPLICATION SLAVE ON *.* TO repluser@'%';

STEP 02 : Set Up Replication

On the target DB Server, run the following:

If you have GTID enabled on source and target servers, do this:

CHANGE MASTER TO
master_host='10.20.30.40',
master_port=3306,
master_user='repluser',
master_password='replpass',
master_auto_position=1;

If you do not have GTID enabled on source and target servers, do this:

CHANGE MASTER TO
master_host='10.20.30.40',
master_port=3306,
master_user='repluser',
master_password='replpass',
master_log_file='mysql-bin.000001',
master_log_pos=4;
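If you are not sure whether GTID is in use, you can check it on both servers (this check is an addition to the original answer):

SHOW VARIABLES LIKE 'gtid_mode';

Note that in the non-GTID case the placeholder coordinates above do not have to be exact: the --master-data=1 option used in the next step embeds a CHANGE MASTER TO statement with the real binary log file and position into the dump, so the coordinates are corrected when the dump is loaded.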

STEP 03 : Create a script to perform live mysqldump and load

Create a shell script called live_dump_and_load.sh

Put the following lines in it

#!/bin/bash
# Connection settings for the source (live master)
MYSQL_USER=root
MYSQL_PASS=whatever
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
DB_TO_DUMP=mydb

# --master-data=1 embeds the CHANGE MASTER TO coordinates in the dump;
# --single-transaction takes a consistent snapshot without locking InnoDB tables
MYSQLDUMP_OPTIONS="--routines --triggers"
MYSQLDUMP_OPTIONS="${MYSQLDUMP_OPTIONS} --master-data=1"
MYSQLDUMP_OPTIONS="${MYSQLDUMP_OPTIONS} --single-transaction"
MYSQLDUMP_OPTIONS="${MYSQLDUMP_OPTIONS} -B ${DB_TO_DUMP}"

# Record start and end times so the run can be monitored
date > live_dump_and_load.runlog
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} | mysql -h10.20.30.50 ${MYSQL_CONN}
date >> live_dump_and_load.runlog

STEP 04 : Run the shell script

chmod +x live_dump_and_load.sh
nohup ./live_dump_and_load.sh &

STEP 05 : Monitor the run log

watch cat live_dump_and_load.runlog

STEP 06 : Start Replication

When the dump has been loaded onto the target server, go to the target server and run:

START SLAVE;
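To confirm that replication is flowing, you can check the slave status (this verification step is an addition to the original answer):

SHOW SLAVE STATUS\G

Slave_IO_Running and Slave_SQL_Running should both show Yes, and Seconds_Behind_Master indicates how far the new server still lags.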

GIVE IT A TRY !!!

I did something similar about a week ago, and I agree with MarCPlusPlus: xtrabackup is the way to go.

However, I have a few tweaks to his procedure.

  • When the backup is finished (or right before), halt all writes to the original master.

That depends on how many transactions per second occur in your original database.
In my case, I'm on an InnoDB database with GTIDs and I have about 5k transactions/s on a 260 GB table. The lock at the end of the backup was about 30 s. All clients recovered really fast using the 3 spare cores that my CPU has. Customers didn't notice anything.

Copying the data from one host to the other is the big deal. That takes its time, and after starting up the new machine, it will take a long time for it to catch up with the master.

To speed up that catch-up, use slave_parallel_workers.

Put

slave_parallel_type             = LOGICAL_CLOCK
slave_parallel_workers          = 4

in your new host's MySQL config file.
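If you would rather not restart MySQL on the new host, the same settings can usually be applied at runtime; this is a sketch assuming MySQL 5.7 or later, and the replication SQL thread has to be stopped while changing them:

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 4;
START SLAVE SQL_THREAD;

Keep the values in the config file as well so they survive a restart.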

In my case, replication caught up on a 10-hour lag behind the master in half a day.

xtrabackup

I've been in similar situations before and it really makes this so much easier. Exactly how you utilize it to minimize downtime is up to you to decide, but one possible scenario could be:

  • Use the hot backup feature to eliminate downtime during the actual backup creation (a rough command sketch follows this list).
  • When the backup is finished (or right before), halt all writes to the original master.
  • Transfer the backup directory/files (they should look similar to the main db directory) to the new slave as fast as possible, using rsync or lftp or something like that (BitTorrent would be the absolute fastest, but probably only necessary if the new slave is geographically far from the original).
  • Fire up the new slave as fast as possible. All configuration, setup, etc. should already be in place.
  • Re-enable writes on the original master with replication activated.
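For illustration, a minimal sketch of that scenario with Percona XtraBackup; the paths, credentials, and host are placeholders, and the exact options for your version are in the Percona documentation linked further down:

# Take a hot backup on the source server (placeholder path and credentials)
xtrabackup --backup --user=root --password=whatever --target-dir=/backup/full

# Prepare the backup so the data files are consistent
xtrabackup --prepare --target-dir=/backup/full

# Transfer the prepared backup to the new server
# (MySQL stopped and datadir empty on the target)
rsync -avz /backup/full/ 10.20.30.50:/var/lib/mysql/

# On the new server: fix ownership and start MySQL on the restored datadir
# (the service name may be mysqld depending on the distribution)
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql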

You can probably eliminate the downtime by not halting inserts and instead comparing the new 30 GB table on the new server to its counterpart on the original server, then updating it according to an id offset or something similar. This is messy though, and within reason I would prefer the downtime.

Edit: You can also entirely eliminate downtime by using the binary log position listed in the file xtrabackup_binlog_info.

See the following link: https://www.percona.com/doc/percona-xtrabackup/LATEST/howtos/setting_up_replication.html
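For illustration, xtrabackup_binlog_info records the master's binary log name and position at the moment the backup was taken; below is a sketch of feeding those coordinates into CHANGE MASTER TO on the new slave, where the file contents and the replication user are placeholder values taken from earlier in this post:

# Read the coordinates recorded during the backup
cat /backup/full/xtrabackup_binlog_info
# Example output (placeholder values):
# mysql-bin.000057      107856

Then, on the new slave:

CHANGE MASTER TO
master_host='10.20.30.40',
master_user='repluser',
master_password='replpass',
master_log_file='mysql-bin.000057',
master_log_pos=107856;
START SLAVE;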
