Replication in MongoDB
A replica set is a group of MongoDB server instances (mongod processes) that maintain the same data set. A replica set contains several data-bearing nodes and, optionally, one arbiter node. Of the data-bearing nodes, one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
The primary node receives all write operations. A replica set can have only one primary capable of confirming writes, although in some circumstances another member may transiently believe itself to also be the primary. The primary records all changes to its data sets in its operation log, i.e. the oplog.
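To make the oplog concrete, the following is a minimal PyMongo sketch that peeks at the most recent oplog entry. The connection string and replica set name (rs0) are placeholders, and reading local.oplog.rs assumes the connected user has the necessary privileges.

```python
from pymongo import MongoClient

# Connection string and replica set name are placeholders for this sketch.
client = MongoClient("mongodb://db1.example.net:27017/?replicaSet=rs0")

# The oplog is a capped collection in the "local" database on each member.
oplog = client.local["oplog.rs"]

# Fetch the most recent entry; $natural order follows insertion order.
last_entry = next(oplog.find().sort("$natural", -1).limit(1), None)
if last_entry is not None:
    print(last_entry["op"], last_entry["ns"], last_entry["ts"])
```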
Replica set members send heartbeats (pings) to each other every two seconds. If a heartbeat does not return within 10 seconds, the other members mark the delinquent member as inaccessible.
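Both intervals are exposed in the replica set configuration, and each member's view of its peers' health is reported by replSetGetStatus. The sketch below, again using PyMongo with a placeholder hostname, reads these values; the settings fields may be absent when only defaults are in use, which is why the lookups are guarded.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.net:27017/?replicaSet=rs0")

# Heartbeat and election timing live in the replica set configuration;
# the fields may be omitted when the defaults (2000 ms / 10000 ms) apply.
cfg = client.admin.command("replSetGetConfig")["config"]
settings = cfg.get("settings", {})
print("heartbeatIntervalMillis:", settings.get("heartbeatIntervalMillis", 2000))
print("electionTimeoutMillis:", settings.get("electionTimeoutMillis", 10000))

# Each member's view of its peers, used to mark members as inaccessible.
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"], "health:", member["health"])
```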
Replica Set Primary
The primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations on the primary’s oplog. Secondary members replicate this log and apply the operations to their data sets.
In the following three-member replica set, the primary accepts all write operations. The secondaries then replicate the primary's oplog and apply the operations to their data sets.
All members of the replica set can accept read operations. However, by default, an application directs its read operations to the primary member.
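In driver terms, this default is a read preference of "primary". The following hedged PyMongo sketch (hostnames, database, and collection names are placeholders) shows a default read served by the primary and a collection handle that opts in to secondary reads.

```python
from pymongo import MongoClient, ReadPreference

# Hostnames, database, and collection names are placeholders for this sketch.
client = MongoClient(
    "mongodb://db1.example.net,db2.example.net,db3.example.net/?replicaSet=rs0"
)
db = client["appdb"]

# Default behaviour: this read is served by the primary.
latest = db["events"].find_one({})

# Opt in to secondary reads for a specific collection handle.
events_secondary = db["events"].with_options(
    read_preference=ReadPreference.SECONDARY_PREFERRED
)
recent = list(events_secondary.find({}).limit(10))
```

Reading from secondaries trades freshness for load distribution, because secondary data can lag slightly behind the primary.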
The replica set can have at most one primary. If the current primary becomes unavailable, an election determines the new primary.
In the following three-member replica set, the primary becomes unavailable. This triggers an election that selects one of the remaining secondaries as the new primary.
After a replica set has a stable primary, the election algorithm makes a best-effort attempt to have the secondary with the highest priority available call an election. Member priority affects both the timing and the outcome of elections; secondaries with higher priority call elections relatively sooner than secondaries with lower priority, and are also more likely to win. However, a lower-priority instance can be elected as primary for brief periods, even if a higher-priority secondary is available. Replica set members continue to call elections until the highest-priority member available becomes primary.
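Member priority is an ordinary field in the replica set configuration. As a hedged PyMongo sketch (hostname and set name are placeholders), the priorities and the member currently acting as primary can be inspected like this:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.net:27017/?replicaSet=rs0")

# Member priorities are ordinary fields in the replica set configuration.
cfg = client.admin.command("replSetGetConfig")["config"]
for member in cfg["members"]:
    print(member["host"], "priority:", member.get("priority", 1))

# The "hello" command reports which member is currently acting as primary.
hello = client.admin.command("hello")
print("current primary:", hello.get("primary"))
```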
Replica Set Secondary
A secondary maintains a copy of the primary’s data set. To replicate data, a secondary applies operations from the primary’s oplog to its own data set in an asynchronous process. A replica set can have one or more secondaries.
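Because replication is asynchronous, secondaries can trail the primary slightly. The following PyMongo sketch (placeholder hostname) approximates that lag by comparing each member's optimeDate from replSetGetStatus; it assumes a primary is currently available.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.net:27017/?replicaSet=rs0")

status = client.admin.command("replSetGetStatus")
members = status["members"]

# optimeDate is the wall-clock time of the last oplog entry each member has
# applied; the difference against the primary approximates replication lag.
primary = next((m for m in members if m["stateStr"] == "PRIMARY"), None)
if primary is not None:
    for m in members:
        if m["stateStr"] == "SECONDARY":
            lag = (primary["optimeDate"] - m["optimeDate"]).total_seconds()
            print(m["name"], f"approximate lag: {lag:.0f}s")
```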
The following three-member replica set has two secondary members. The secondaries replicate the primary’s oplog and apply the operations to their data sets.
Although clients cannot write data to secondaries, clients can read data from secondary members.
A secondary can become a primary. If the current primary becomes unavailable, the replica set holds an election to choose which of the secondaries becomes the new primary.
In the following three-member replica set, the primary becomes unavailable. This triggers an election where one of the remaining secondaries becomes the new primary.
You can configure a secondary member for a specific purpose, as illustrated by the configuration sketch after this list. You can configure a secondary to:
- Prevent it from becoming a primary in an election, which allows it to reside in a secondary data center or to serve as a cold standby.
- Prevent applications from reading from it, which allows it to run applications that require separation from normal traffic.
- Keep a running 'historical' snapshot for use in recovery from certain errors, such as unintentionally deleted databases.
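Each of these behaviours maps to fields in a member's entry in the replica set configuration. The member document below is an illustrative sketch only: the hostname is a placeholder, and secondaryDelaySecs is the field name used by recent MongoDB versions (older releases used slaveDelay).

```python
# Illustrative member document, of the kind found in the "members" array of
# the replica set configuration returned by replSetGetConfig.
delayed_hidden_member = {
    "_id": 2,
    "host": "db3.example.net:27017",  # placeholder hostname
    "priority": 0,                    # can never become primary in an election
    "hidden": True,                   # invisible to applications, so no reads
    "secondaryDelaySecs": 3600,       # keeps a rolling one-hour historical copy
    "votes": 1,
}
```

A member like this is normally added or modified with rs.reconfig() in mongosh (or the replSetReconfig command) against the current configuration.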
Replica Set Arbiter
An arbiter does not have a copy of the data set and cannot become a primary. Replica sets may have arbiters to add a vote in elections for primary. Arbiters always have exactly one election vote, and thus allow replica sets to have an uneven number of voting members without the overhead of an additional member that replicates data.
For example, in the following replica set, an arbiter allows the set to have an odd number of votes for elections:
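As a hedged PyMongo sketch (hostnames and the set name rs0 are placeholders), a set with two data-bearing members and one arbiter, giving three votes in total, could be initiated with a configuration like this; in mongosh the same effect is usually achieved with rs.initiate() followed by rs.addArb().

```python
from pymongo import MongoClient

# Connect directly to the member that will run the initiation; the set does
# not exist yet, so a replica-set connection string cannot be used.
client = MongoClient("db1.example.net", 27017, directConnection=True)

config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "db1.example.net:27017"},
        {"_id": 1, "host": "db2.example.net:27017"},
        {"_id": 2, "host": "arb1.example.net:27017", "arbiterOnly": True},
    ],
}
client.admin.command("replSetInitiate", config)
```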
- Authentication: when running with authorisation enabled, arbiters exchange credentials with the other members of the set to authenticate via keyfiles. MongoDB encrypts the authentication process. The MongoDB authentication exchange is cryptographically secure.
- Communication: because arbiters do not store data, they do not possess the internal table of user and role mappings used for authentication. Thus, the only way to log on to an arbiter with authorisation active is to use the localhost exception. The only communications between arbiters and other set members are votes during elections, heartbeats, and configuration data. These exchanges are not encrypted.
Related information
- ClearWay™ System Design (Witness 4.0)
- AdvanceGuard® System Design (Witness 4.0)
- Notes (Witness 4.0)
- Database Configuration (Witness 4.0)
- Redundancy (Witness 4.0)
- Database Replication (Witness 4.0)
- Sea360® System Design (Witness 4.0)
- Launching Trebuchet (Witness 4.0)
- Configuring and Launching Apps for Trebuchet (Witness 4.0)
- Trebuchet Configuration File (Witness 4.0)
- Trebuchet Radar Discovery (Witness 4.0)
- Trebuchet App Selection (Witness 4.0)
- Trebuchet Launcher (Witness 4.0)
- Database Authentication (Witness 4.0)
- Restoring A Backup From A Different System (Witness 4.0)