How oracle syncs data across nodes in a cluster

Question :

If I have an Oracle DB server clustered across two nodes, both nodes hold a copy of the whole data set; the data is not split half and half between the nodes.

Now when I make an update through my application (deployed on WAS) connected to the cluster, the update lands on one of the nodes and Oracle then syncs it to the other node. My question is: does this sync happen in real time? Is it configurable? It seems that if it is in real time, performance will be affected?

What is the standard solution for that?

Answer :

Well, actually… no. The second Development Milestone Release of MySQL Cluster 7.2 introduces support for what we call “Multi-Site Clustering”. In this post, I’ll provide an overview of this new capability and the considerations to weigh when evaluating it as a deployment option for scaling geographically dispersed database services.

You can read more about MySQL Cluster 7.2.1 in the article posted on the MySQL Developer Zone.

MySQL Cluster has long offered Geographic Replication, which distributes clusters to remote data centers to reduce the effects of geographic latency by pushing data closer to the user, as well as providing a capability for disaster recovery.

Multi-Site Clustering provides a new option for cross-data-center scalability. For the first time, splitting data nodes across data centers is a supported deployment option. With this deployment model, users can synchronously replicate updates between data centers without modifying their application or schema for conflict handling, and automatically fail over between those sites in the event of a node failure.

MySQL Cluster offers high availability by maintaining a configurable number of data replicas. All replicas are kept synchronized by a built-in two-phase commit protocol. Data node and communication failures are detected and handled automatically. On recovery, data nodes automatically rejoin the cluster, synchronize with running nodes, and resume service.

All replicas of a given row are stored in a set of data nodes known as a nodegroup. To provide service, a cluster must have at least one data node from each nodegroup available at all times. When the cluster detects that the last node in a nodegroup has failed, the remaining cluster nodes shut down gracefully, ensuring the consistency of the stored databases on recovery.
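As a concrete illustration, replica count and nodegroup membership are set in the cluster’s config.ini. The fragment below is a sketch with illustrative hostnames and values (not taken from the post): with NoOfReplicas=2 and four data nodes, the nodes form two nodegroups, and placing the two replicas of each nodegroup in different data centers is what lets a multi-site cluster survive the loss of a whole site.

```ini
# Illustrative config.ini fragment; hostnames and layout are examples.
[ndbd default]
NoOfReplicas=2        # two synchronous replicas of every row

# Four data nodes form two nodegroups; NodeGroup can be assigned
# explicitly so each site holds one replica from every nodegroup.
[ndbd]
HostName=dc1-node1
NodeGroup=0
[ndbd]
HostName=dc2-node1
NodeGroup=0           # second replica of nodegroup 0, in the other site

[ndbd]
HostName=dc1-node2
NodeGroup=1
[ndbd]
HostName=dc2-node2
NodeGroup=1
```

With this layout, losing data center 2 entirely still leaves one live node in each nodegroup, so the cluster keeps serving.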

Improvements to the heartbeating mechanism used by MySQL Cluster enable greater resilience to temporary latency spikes on a WAN, thereby maintaining operation of the cluster. A new “ConnectivityCheck” mechanism is introduced, which must be explicitly configured. This extra mechanism adds messaging overhead and failure-handling latency, and so is not switched on by default.


When configuring Multi-Site clustering, the following factors must be considered:

Bandwidth
Low bandwidth between data nodes can slow data node recovery. In normal operation, the available bandwidth can limit the maximum system throughput. If link saturation causes latency on individual links to increase, then node failures, and potentially cluster failure, can occur.

Latency and performance
Synchronously committing transactions over a wide area increases the latency of operation execution and commit, therefore individual operations are slowed. To maintain the same overall throughput, higher client concurrency is required. With the same client concurrency level, throughput will decrease relative to a lower latency configuration.
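The throughput/latency trade-off above is essentially Little’s law: sustainable throughput is roughly client concurrency divided by per-operation latency. A minimal sketch (the latencies and client counts are illustrative, not from the post):

```python
def max_throughput(concurrency: int, latency_ms: float) -> float:
    """Approximate ops/sec that synchronous clients can sustain when
    each operation takes latency_ms to commit (Little's law)."""
    return concurrency / (latency_ms / 1000.0)

# Same 100 clients: a LAN-like 2 ms commit vs. a 20 ms WAN commit.
lan = max_throughput(100, 2.0)    # 50000.0 ops/sec
wan = max_throughput(100, 20.0)   # 5000.0 ops/sec

# To hold LAN-level throughput over the 20 ms WAN, concurrency must
# rise tenfold: clients_needed = target_ops_per_sec * latency_seconds.
clients_needed = lan * (20.0 / 1000.0)   # 1000.0 clients

print(lan, wan, clients_needed)
```

This is why the post says that at the same client concurrency, throughput drops relative to a low-latency configuration, while raising concurrency can recover overall throughput even though each individual operation stays slower.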

Latency and stability
Synchronous operation implies that clients wait to hear of the success or failure of each operation before continuing. Loss of communication to a node, and high latency communication to a node are indistinguishable in some cases. To ensure availability, the Cluster monitors inter-node communication. If a node experiences high communication latency, then it may be killed by another node, to prevent its high latency causing service loss.

Where inter-node latencies fluctuate, and are in the same range as the node-latency-monitoring trigger levels, node failures can result. Node failures are expensive to recover from, and endanger Cluster availability.

To avoid node failures, either the latency should be reduced, or the trigger levels should be raised. Raising trigger levels can result in a longer time-to-detection of communication problems.

WAN latencies
Latency on an IP WAN may be a function of physical distance, routing hops, protocol layering, link failover times and rerouting times. The maximum expected latency on a link should be characterized as input to the cluster configuration.

Survivability of node failures
MySQL Cluster uses a fail-fast mechanism to minimize time-to-recovery. Nodes that are suspected of being unreachable or dead are quickly excluded from the Cluster. This mechanism is simple and fast, but sometimes takes steps that result in unnecessary cluster failure. For this reason, latency trigger levels should be configured a safe margin above the maximum latency variation on inter-data node links.

Users can configure various MySQL Cluster parameters including heartbeats, ConnectivityCheck, GCP timeouts and transaction deadlock timeouts. You can read more about these parameters in the documentation.
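These parameters live in the [ndbd default] section of config.ini. The sketch below shows the knobs mentioned above with placeholder values; the right numbers depend on the measured maximum latency of the inter-site links, so treat these as illustrative, not as recommendations:

```ini
# Illustrative WAN-tuning fragment; values are placeholders to be
# derived from the measured maximum latency on inter-site links.
[ndbd default]
HeartbeatIntervalDbDb=5000                  # ms between data-node heartbeats
ConnectCheckIntervalDelay=1500              # non-zero enables the extra
                                            # ConnectivityCheck mechanism
TimeBetweenEpochsTimeout=32000              # GCP timeout, ms
TransactionDeadlockDetectionTimeout=10000   # ms
TransactionInactiveTimeout=30000            # ms
```

Raising the heartbeat interval and enabling ConnectivityCheck widens the margin above WAN latency spikes, at the cost of slower detection of genuine failures, as discussed above.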

Recommendations for Multi-Site Clustering
– Ensure minimal, stable latency;
– Provision the network with sufficient bandwidth for the expected peak load – test with node recovery and system recovery;
– Configure the heartbeat period to ensure a safe margin above latency fluctuations;
– Configure the ConnectivityCheckPeriod to avoid unnecessary node failures;
– Configure other timeouts accordingly, including the GCP timeout, transaction deadlock timeout, and transaction inactivity timeout.

The following is a recommendation of latency and bandwidth requirements for applications with high throughput and fast failure detection requirements:
– latency between remote data nodes must not exceed 20 milliseconds;
– bandwidth of the network link must be more than 1 gigabit per second.

For applications that do not require this type of stringent operating environment, latency and bandwidth can be relaxed, subject to the testing recommended above.

As the recommendations demonstrate, there are a number of factors that need to be considered before deploying multi-site clustering. For geo-redundancy, Oracle recommends Geographic Replication, but multi-site clustering does present an alternative deployment, subject to the considerations and constraints discussed above.
