how to dump a huge table into chunks via mysqldump

Question :

I have a very large table, around 100 million records and 100 GB as a dump file. When I try to restore it to a different DB, the query fails with a lost-connection error. I want to try to dump this table into chunks (something like 10 chunks of 10 GB), where each chunk will go into a separate table.
What I have managed to optimize so far is this:

mysqldump --skip-triggers --compact --no-create-info --single-transaction --quick --max_allowed_packet=1G -h {host} -u {user} -P 3306 -p{password} {my_schema} {table_name} > /mnt/datadir/{table_name}.sql

The output is now a single file, {table_name}.sql, about 100 GB in size. I want to get 10 files of roughly 10 GB each.

Answer :

You can use the primary key in a --where option to dump only the rows whose id falls within a given range:

mysqldump -u root -p db_name table_name --where='id < 10000' > test1.sql
mysqldump -u root -p db_name table_name --where='id >= 10000 and id < 20000' > test2.sql
.....
mysqldump -u root -p db_name table_name --where='id >= 900000 and id <= 1000000' > test11.sql
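
If you do not want to type every range by hand, a small shell loop can generate the chunk files. This is only a sketch: it assumes an integer AUTO_INCREMENT id column, and the host, credentials, schema/table names, CHUNK width and MAX_ID are placeholders you would replace.

#!/bin/bash
# Sketch: dump the table in fixed-width id ranges, one file per range.
# CHUNK, MAX_ID, host/credentials, schema and table names are placeholders.
CHUNK=10000000          # width of each id range
MAX_ID=100000000        # e.g. from SELECT MAX(id) FROM my_schema.my_table
i=1
for start in $(seq 0 "$CHUNK" "$MAX_ID"); do
  end=$((start + CHUNK))
  mysqldump --skip-triggers --compact --no-create-info --single-transaction --quick \
    -h "$HOST" -u "$USER" -p"$PASSWORD" my_schema my_table \
    --where="id >= $start AND id < $end" \
    > "/mnt/datadir/my_table_part${i}.sql"
  i=$((i + 1))
done

Using half-open ranges (>= and <) keeps consecutive chunks from overlapping at the boundaries, so no row is dumped twice.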

Maybe a bit of a late comment here, but if it is a MariaDB database you can use the mariabackup utility: run a backup of that database/table and then scp it to the other server, or import the data on the same server into another table: https://mariadb.com/kb/en/restoring-individual-tables-and-partitions-with-mariabackup/

We used this approach to move production data to the slave server, and mariabackup took only a few minutes to back up a 50 GB database.
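
Roughly, the flow from the linked KB article looks like the sketch below. Paths, credentials, and schema/table names are placeholders, and the target table must already exist on the destination with the same definition.

# 1. Take a backup on the source server.
mariabackup --backup --target-dir=/mnt/backup --user=backup_user --password=backup_password

# 2. Prepare the backup with --export to generate .cfg files for the InnoDB tables.
mariabackup --prepare --export --target-dir=/mnt/backup

# 3. On the target server: discard the empty table's tablespace, copy in the
#    exported files, fix ownership, then import the tablespace.
mysql -u root -p target_schema -e "ALTER TABLE my_table DISCARD TABLESPACE;"
cp /mnt/backup/my_schema/my_table.ibd /mnt/backup/my_schema/my_table.cfg /var/lib/mysql/target_schema/
chown mysql:mysql /var/lib/mysql/target_schema/my_table.*
mysql -u root -p target_schema -e "ALTER TABLE my_table IMPORT TABLESPACE;"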

Another option is to use SSIS or a MySQL integration/ETL tool that loads the data with small batches of inserts (5,000-10,000 rows).
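
If both schemas happen to live on the same server, the same batching idea can be scripted with plain INSERT ... SELECT statements. This is a minimal sketch; the host/credentials, schema and table names, BATCH size and MAX_ID are all hypothetical.

#!/bin/bash
# Sketch: copy rows in small id batches, one INSERT ... SELECT per batch.
# HOST/USER/PASSWORD, schema/table names, BATCH and MAX_ID are placeholders.
BATCH=10000
MAX_ID=100000000   # e.g. from SELECT MAX(id) FROM source_schema.my_table
for start in $(seq 0 "$BATCH" "$MAX_ID"); do
  end=$((start + BATCH))
  mysql -h "$HOST" -u "$USER" -p"$PASSWORD" -e \
    "INSERT INTO target_schema.my_table
     SELECT * FROM source_schema.my_table
     WHERE id >= $start AND id < $end;"
done

Each batch commits on its own, so a failure partway through only costs the current batch rather than the whole load.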
