Question :
I want to duplicate an InnoDB database on my MySQL server to test another version of the software.
Using tools such as phpMyAdmin reduces the database by around 20 MB and a “lot” of rows are missing.
Just copying the folder under /var/lib/mysql won’t work, because I didn’t enable anything special for the InnoDB engine.
How can I create a duplicate of an InnoDB database?
Answer :
a “lot” of rows are missing
Your interests would be best served if you’d investigate and come up with an explanation of why this is happening. It’s not acceptable for a tool or utility to behave mysteriously, so either the tool has a defect, or isn’t intended to be used as you are using it, or there is a quirk in your database structure that is interacting with something in the utility that you’re using to cause these problems.
As a DBA, I like to be as “operationally close” to the server as possible, without tools making decisions and being “helpful.” I use a graphical query browser for only three things: Running queries where I want to scroll through the results, editing stored programs, and adding comments to large numbers of undocumented table columns. Everything else, I do by hand because I like to understand exactly what the server is being asked to do, and I like the server to understand me, as well. So, here’s how I do what I think you are describing.
I assume you’re talking about cloning a schema on a single server, to a different schema on the same server.
To state it procedurally, “for each table (or other object) in `prod_db`, create an identically named object in `dev_db` including all of the indexes, foreign key constraints referencing the appropriate table in the new schema, and, optionally, copy all the data.”
Here’s how I do that.
$ mysqldump [options*] --databases prod_db | perl -pe 's/`prod_db`/`dev_db`/g' > dump.sql
This generates a human-readable dumpfile, with every instance of `prod_db` (including the backticks) replaced with `dev_db`. Perl handles that filtering as the dumpfile passes through the pipeline. Then I review the file’s contents for sanity and pipe it right back into mysql:
$ mysql < dump.sql
Done.
Now for the details.
If I take the first few lines of the file generated by this command:
$ mysqldump [options] --databases sakila
I see this near the beginning of the file:
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `sakila` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci */;
USE `sakila`;
Now this:
$ mysqldump --databases sakila | perl -pe 's/`sakila`/`dolphin`/g'
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `dolphin` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci */;
USE `dolphin`;
Magic. This process carries one major caveat: you can’t have a table with the same name as the original database, because the table name `sakila` will be changed to `dolphin` just like the database name was changed.
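A quick way to see that caveat in action is to feed a contrived one-liner through the same filter (hypothetical statement, just for illustration; the substitution can’t tell a table identifier from a database identifier, it simply rewrites every backticked occurrence of the name):

$ echo 'CREATE TABLE `sakila` (id INT);' | perl -pe 's/`sakila`/`dolphin`/g'
CREATE TABLE `dolphin` (id INT);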
*The options for mysqldump are very important here, because if you make a mistake, you run the risk of trashing your live database. This option is particularly useful:
--skip-add-drop-table
If you do a normal mysqldump, before each CREATE TABLE in the resulting file there’s a DROP TABLE IF EXISTS statement:
DROP TABLE IF EXISTS `actor`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `actor` ( ...
When you add --skip-add-drop-table to mysqldump, it does not write the DROP TABLE IF EXISTS statements into the dumpfile, so there is no danger of dropping the wrong table. The worst case is that it will try to create a table that already exists, and if that happens, the script will halt. Then you can figure out what the problem was.
You should also add the --verbose option to mysqldump, because it shows you progress as the dump is proceeding.
--events --triggers --routines
If you have MySQL Event Scheduler events, triggers, or stored functions/procedures (“routines”), these flags will ensure that mysqldump copies those as well. You need to review them, however, to be sure that any references to the schema are correct, because the schema name may not be properly backticked in these, so the substitution above will not rewrite it. That shouldn’t be a major problem, because you don’t need to reference the schema when defining these; they run in the context of the schema in which they are defined.
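One quick sanity check after running the substitution (a sketch, reusing the dump.sql filename from above) is to search the rewritten dumpfile for any leftover mention of the old schema name, since the perl one-liner only rewrites the backticked form:

$ grep -n 'prod_db' dump.sql

Any lines it prints are references that still need to be reviewed or edited by hand.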
--no-data
Use this option if you only want to clone the structures, not the rows in the tables. You can also use this option for a dry run to examine the resulting “dump.sql” file for sanity. Remember, these files contain nothing more or less than the SQL statements needed to recreate a database, and are therefore human-readable.
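For example, a structure-only dry run (a sketch, reusing the prod_db/dev_db names and the filter from above) might look like this:

$ mysqldump --no-data --skip-add-drop-table --databases prod_db | perl -pe 's/`prod_db`/`dev_db`/g' > structure_only.sql

You can read through structure_only.sql before committing to a full dump with data.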
--single-transaction
You want to use this when the tables you’re dumping are InnoDB, because it won’t lock the tables or block other operations, but it will ensure that the data you get is consistent from table to table, because it issues START TRANSACTION WITH CONSISTENT SNAPSHOT against the original server before extracting the dump.
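Putting those options together, the dump command from the top of this answer might look something like this (a sketch; add your usual connection options such as -u and -p as needed):

$ mysqldump --single-transaction --skip-add-drop-table --verbose \
      --events --triggers --routines \
      --databases prod_db | perl -pe 's/`prod_db`/`dev_db`/g' > dump.sql

Review dump.sql, then load it with mysql < dump.sql as shown earlier.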
What’s Perl doing here?
perl -pe 's/`prod_db`/`dev_db`/g'
What we’re essentially saying is: perl, -e execute the following mini-script (in quotes), and -p pretend that I’ve wrapped this instruction in a loop that says “read from standard in”, “apply changes to data”, “print resulting data to standard out”. The s/ means substitute: when you see `prod_db`, replace it with `dev_db`, and the trailing g means do this globally (instead of only replacing the first occurrence on any given line, replace every occurrence on every line). Perl does this replacement very efficiently, is on virtually every system, and is freely available for systems where it isn’t installed, including Windows. You could do the same thing with sed, and the command would be similar, though not identical.
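For reference, a sed version would be a sketch along these lines (for a simple literal pattern like this one the substitution expression happens to be the same; only the surrounding invocation differs):

$ mysqldump [options] --databases prod_db | sed -e 's/`prod_db`/`dev_db`/g' > dump.sql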
The steps above are essentially how I perform this operation when my developers need an entire schema cloned to the same server but with a different schema (database) name.
If you have mysql-utilities installed, you can use mysqldbcopy. This happily works as a shell_exec command in PHP, too.
example:
/* copy db from source to destination - this is actually pretty fast */
$command = "mysqldbcopy --source=user:pass@localhost --destination=user:pass@localhost test:test_2";
$output = shell_exec($command);
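The same copy can of course be run straight from a shell; using the prod_db/dev_db names from the answer above, the command would be along the lines of:

$ mysqldbcopy --source=user:pass@localhost --destination=user:pass@localhost prod_db:dev_db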
It depends on whether you want to recreate a whole new instance of the entire database, or just duplicate a specific schema within one database instance.
If you’re duplicating the entire database, it depends on what version of MySQL you are running. For example, if you’re running 5.6, you can make use of the new transportable tablespace support. However, if you are not running 5.6, then you have to either do a backup and restore to a new instance using xtrabackup with mysqld_multi, or dump the database contents with mydumper and import that into your new database.
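On 5.6, the transportable-tablespace flow for a single table looks roughly like this (a sketch, assuming innodb_file_per_table is enabled, a hypothetical table t1 being copied from prod_db to dev_db on the same server, and the default /var/lib/mysql datadir):

-- in one mysql session (keep it open so the export lock stays held):
CREATE TABLE dev_db.t1 LIKE prod_db.t1;
ALTER TABLE dev_db.t1 DISCARD TABLESPACE;
USE prod_db;
FLUSH TABLES t1 FOR EXPORT;

-- from a second shell, while that session is still open:
--   cp /var/lib/mysql/prod_db/t1.ibd /var/lib/mysql/dev_db/
--   cp /var/lib/mysql/prod_db/t1.cfg /var/lib/mysql/dev_db/
--   chown mysql:mysql /var/lib/mysql/dev_db/t1.ibd /var/lib/mysql/dev_db/t1.cfg

-- back in the first session:
UNLOCK TABLES;
ALTER TABLE dev_db.t1 IMPORT TABLESPACE;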