Question :
While restoring a dump from a CloudSQL instance into a node of a Percona XtraDB cluster, I am getting an error. The statement at line 6955 of the .sql dump file is marked in bold below:
DROP TABLE IF EXISTS `next_balls`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `next_balls` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`match_id` int(11) NOT NULL DEFAULT '0',
`inning_id` int(11) NOT NULL DEFAULT '0',
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=176335 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table `next_balls`
--
**/*!40000 ALTER TABLE `next_balls` DISABLE KEYS */;**
The error occurred while restoring the table `next_balls`: the table itself was created on the node, but its data was not inserted, so I think the error is caused by the `ALTER TABLE next_balls ...` statement. Any idea why this error pops up? The restore stops as soon as the error occurs. Any help would be highly appreciated.
Edit 1: I have a 2-node Percona cluster, and it seems the error occurs when one of the nodes is down, or when the nodes believe each other to be down, which results in a split-brain situation and the cluster halts. Is this the case? Should I add another node to the cluster before doing the restore? Or should I first do the restore on one node and then set up the cluster?
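For context on the split-brain suspicion: Galera-based clusters (which Percona XtraDB Cluster uses) require a strict majority of nodes to keep a Primary component. In a 2-node cluster, losing one node leaves the survivor with exactly half the votes, which is not a majority, so it goes non-Primary and blocks queries. A sketch of how you could check this on the surviving node (standard `wsrep_*` status variables, run against a live node):

```sql
-- Run on the surviving node to check whether it still has quorum.
-- 'Primary' means the node is part of a quorum and accepts queries;
-- 'non-Primary' means it has lost quorum and will block or reject them.
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';

-- Number of nodes the cluster currently sees.
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';

-- Local node state, e.g. 'Synced' when healthy.
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
```

This is why 3 nodes (or 2 nodes plus a garbd arbitrator) are generally recommended: with 3 votes, one node can fail and the remaining two still form a majority.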
Answer :
I had the same issue with a Percona cluster on Google Cloud running on 3 servers.
After reading the MySQL logs on the first instance, I found it was unable to reach instance-2. I also tried to SSH into instance-2, but could not reach it either. Tailing the MySQL logs on instance-3 showed that it too was unable to reach instance-2.
I restarted instance-2 and everything went back to normal.
With Percona MySQL clustering, when one node is down and `dirty_write` is turned off, all write operations are blocked until all the nodes are restored.
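To confirm that writes are accepted again after restarting the failed node, you can check the cluster state on any node. A minimal sketch using the standard `wsrep_*` status variables (the expected values shown in the comments assume a fully recovered 3-node cluster):

```sql
-- After restarting the unreachable node, confirm the cluster reformed.
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_cluster_size', 'wsrep_cluster_status', 'wsrep_ready');

-- wsrep_cluster_size   = 3          (all three nodes rejoined)
-- wsrep_cluster_status = 'Primary'  (quorum restored)
-- wsrep_ready          = 'ON'       (this node accepts queries again)
```

Once `wsrep_ready` is back to `ON` on the node you are restoring into, the dump import should be able to proceed.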