I have a daily task that collects data from 3 MySQL servers and then pushes the aggregate data back out to each of the servers, deleting the previous day's data. All servers are MySQL. I would like to put this into a stored procedure.
I have been searching for a way to connect from one MySQL server to another, run a query, next server, rinse and repeat. Besides not finding any examples of a stored procedure that connects to another MySQL server, my immediate impression is that this is not really supported, nor intended.
I can select into an outfile, which creates a CSV. Is there any way to force an XML document instead? I don't want to have to read the data into a DataTable just to get XML. Of course, XML won't work with the Bulkload method, will it?
What are my realistic options? One concern about selecting into an outfile is the handling of special characters:
' , \ & @ ` / // etc. How will those characters affect the Bulkload process?
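For what it's worth, MySQL can handle both concerns itself: SELECT ... INTO OUTFILE with an explicit FIELDS clause backslash-escapes embedded quotes, delimiters, and backslashes, and LOAD DATA INFILE with the same clause reverses the escaping; for XML, the mysql command-line client has a --xml option that emits result rows as XML. A sketch, where the host, credentials, and the dbname/tbname identifiers are placeholders:

```shell
# Write a sample round-trip script; nothing here is run against a server.
cat > /tmp/csv_roundtrip.sql <<'EOF'
-- Export: ESCAPED BY '\\' backslash-escapes embedded " , \ etc. in the data
SELECT * FROM tbname
INTO OUTFILE '/tmp/tbname.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';

-- Import on the target server with the *same* FIELDS/LINES clauses,
-- so the escaping is undone symmetrically
LOAD DATA INFILE '/tmp/tbname.csv' INTO TABLE tbname
FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';
EOF

# XML instead of CSV: the mysql client can emit rows as XML directly
echo 'mysql --xml -hHOSTIP -u... -p... -e "SELECT * FROM tbname" dbname > /tmp/tbname.xml'
```

With matching FIELDS/LINES clauses on both sides, the weird characters round-trip without mangling; the XML output, however, is only useful for consumers that parse it, not for LOAD DATA.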
I tend to shy away from the FEDERATED storage engine for three(3) reasons:
- It only supports MyISAM
- DDL performed on the source table requires manually changing the FEDERATED table definition on the external servers.
- Bulk operations against a FEDERATED table can become an instant nightmare !!!
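For context, a FEDERATED table is a local shell whose CONNECTION string points at the remote table; a sketch with placeholder credentials and column names, which also illustrates the second bullet (the local column list is frozen at creation time):

```shell
# Write an illustrative FEDERATED table definition to a file; placeholders throughout.
cat > /tmp/federated_demo.sql <<'EOF'
-- Local shell of the remote table; columns must match the remote definition exactly
CREATE TABLE tbname_fed (
  id INT NOT NULL,
  val VARCHAR(64),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
CONNECTION='mysql://user:pass@HOSTIP1:3306/dbname/tbname';
-- If the remote table's DDL ever changes, this definition must be rebuilt by hand
EOF
```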
What to do ???
One possibility is to use mysqldump and merge them
Assuming all tables have identical structures
Assuming all tables are not using auto_increment IDs
Assuming all tables are InnoDB
On the server that will combine the three tables, create four(4) import databases (one-time operation):
CREATE DATABASE ImportDB1;
CREATE DATABASE ImportDB2;
CREATE DATABASE ImportDB3;
CREATE DATABASE ImportCombined;
Import Process Goes Something Like This
mysqldump the three(3) tables
mysqldump -hHOSTIP1 -Pportnumber1 -u... -p... dbname tbname > /root/ImportTable1.sql &
mysqldump -hHOSTIP2 -Pportnumber2 -u... -p... dbname tbname > /root/ImportTable2.sql &
mysqldump -hHOSTIP3 -Pportnumber3 -u... -p... dbname tbname > /root/ImportTable3.sql &
wait
Load the mysqldumps into the separate import databases in parallel:
mysql -hHOSTIP4 -u... -p... -A -DImportDB1 < /root/ImportTable1.sql &
mysql -hHOSTIP4 -u... -p... -A -DImportDB2 < /root/ImportTable2.sql &
mysql -hHOSTIP4 -u... -p... -A -DImportDB3 < /root/ImportTable3.sql &
wait
rm -f /root/ImportTable1.sql &
rm -f /root/ImportTable2.sql &
rm -f /root/ImportTable3.sql &
wait
Next, combine the data.
DROP TABLE IF EXISTS ImportCombined.tbname;
CREATE TABLE ImportCombined.tbname LIKE ImportDB1.tbname;
ALTER TABLE ImportCombined.tbname DISABLE KEYS;
INSERT INTO ImportCombined.tbname SELECT * FROM ImportDB1.tbname;
INSERT INTO ImportCombined.tbname SELECT * FROM ImportDB2.tbname;
INSERT INTO ImportCombined.tbname SELECT * FROM ImportDB3.tbname;
ALTER TABLE ImportCombined.tbname ENABLE KEYS;
Now, perform the task you already have in place against the ImportCombined.tbname table.
“You fill in these steps”
Then, when the table is ready to be exported, mysqldump it, copy it two times, and load it back into the three servers (note the wait after the copies, so the loads never race against an unfinished cp):
mysqldump -hHOSTIP4 -Pportnumber4 -u... -p... ImportCombined tbname > /root/ExportTable1.sql
cp /root/ExportTable1.sql /root/ExportTable2.sql &
cp /root/ExportTable1.sql /root/ExportTable3.sql &
wait
mysql -hHOSTIP1 -Pportnumber1 -u... -p... -A -Ddbname < /root/ExportTable1.sql &
mysql -hHOSTIP2 -Pportnumber2 -u... -p... -A -Ddbname < /root/ExportTable2.sql &
mysql -hHOSTIP3 -Pportnumber3 -u... -p... -A -Ddbname < /root/ExportTable3.sql &
wait
rm -f /root/ExportTable1.sql /root/ExportTable2.sql /root/ExportTable3.sql
As shown by these steps, this should all be shell scripted and perhaps crontab'd.
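As a sketch, the whole nightly cycle above could be generated into one script and scheduled from cron; HOSTIP1..4, credentials, dbname/tbname, and the combine_and_aggregate.sql helper (your "fill in these steps" part) are all placeholders:

```shell
#!/bin/sh
# Generate a sketch of the nightly merge pipeline as /tmp/merge_pipeline.sh.
# All hosts, credentials, and dbname/tbname are placeholders from the steps above.
cat > /tmp/merge_pipeline.sh <<'EOF'
#!/bin/sh
DB=dbname; TB=tbname

# 1. Dump the three source tables in parallel
for i in 1 2 3; do
  mysqldump -hHOSTIP$i -u... -p... $DB $TB > /root/ImportTable$i.sql &
done
wait

# 2. Load each dump into its own import database, in parallel
for i in 1 2 3; do
  mysql -hHOSTIP4 -u... -p... -A -DImportDB$i < /root/ImportTable$i.sql &
done
wait
rm -f /root/ImportTable1.sql /root/ImportTable2.sql /root/ImportTable3.sql

# 3. Combine the three tables and run your aggregation
#    (combine_and_aggregate.sql is a hypothetical file holding the SQL above
#     plus your own "fill in these steps" work)
mysql -hHOSTIP4 -u... -p... < /root/combine_and_aggregate.sql

# 4. Dump the combined table and push it back out to all three servers
mysqldump -hHOSTIP4 -u... -p... ImportCombined $TB > /root/ExportTable.sql
for i in 1 2 3; do
  mysql -hHOSTIP$i -u... -p... -A -D$DB < /root/ExportTable.sql &
done
wait
rm -f /root/ExportTable.sql
EOF
chmod +x /tmp/merge_pipeline.sh

# Crontab entry to run it nightly at 02:00:
#   0 2 * * * /tmp/merge_pipeline.sh >> /var/log/merge_pipeline.log 2>&1
```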
Here is another maneuver: if the tables are MyISAM, you could copy the table files (.frm, .MYD, and .MYI for each table) into the other folders (databases, in the eyes of mysqld) instead of doing mysqldumps, and perform operations similar to those above. Make sure no writes to the tables take place during the copy.
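A sketch of that file-level copy, using a throwaway directory with dummy files standing in for the real datadir (normally /var/lib/mysql); on a live server you would hold FLUSH TABLES WITH READ LOCK in an open session for the duration of the copy:

```shell
# Throwaway stand-in for the MySQL datadir; tbname/dbname are the placeholders above.
DATADIR=/tmp/myisam_copy_demo
mkdir -p "$DATADIR/dbname" "$DATADIR/ImportDB1"
touch "$DATADIR/dbname/tbname.frm" "$DATADIR/dbname/tbname.MYD" "$DATADIR/dbname/tbname.MYI"

# On a live server, first (and keep that session open until the copy is done):
#   mysql -u... -p... -e "FLUSH TABLES WITH READ LOCK"
cp "$DATADIR/dbname/tbname.frm" "$DATADIR/dbname/tbname.MYD" "$DATADIR/dbname/tbname.MYI" \
   "$DATADIR/ImportDB1/"
# ...then, back in the held session: UNLOCK TABLES

ls "$DATADIR/ImportDB1/"
```

Because MyISAM keeps one self-contained set of files per table, the copied table is immediately visible to mysqld in the target database directory; this shortcut does not apply to InnoDB.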
Have you considered using an ETL tool like http://www.pentaho.com/ for this?