Database Replication
Introduction
This page discusses the MongoDB software and the configuration of database replication.
Overview
MongoDB stores data in flexible, JSON-like documents (see https://www.mongodb.com/what-is-mongodb), meaning fields can vary from document to document and the data structure can be changed over time. To back up this data and keep it accessible, a database replication system must be set up.
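As a brief illustration of this flexibility, the following sketch inserts two documents with different fields into the same collection. It assumes the PyMongo driver, a MongoDB instance on localhost, and hypothetical database and collection names:

```python
# A minimal sketch, assuming PyMongo, a MongoDB instance on localhost:27017,
# and hypothetical "witness"/"events" database and collection names.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["witness"]["events"]

# Documents in the same collection may carry different fields, and the
# structure can change over time without a schema migration.
events.insert_one({"type": "alarm", "zone": 3, "acknowledged": False})
events.insert_one({"type": "heartbeat", "source": "camera-12"})
```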
Database replication is the frequent electronic copying of data from a database on one computer or server to a database on another, so that all users share the same level of information. The result is a distributed database in which users can access data relevant to their tasks without interfering with the work of others. A distributed database management system ensures that changes, additions, and deletions performed on the data at any given location are automatically reflected in the data stored at all the other locations. Therefore, every user always sees data that is consistent with the data seen by all the other users.
In MongoDB, there are different configurations of database replication that can be used; it is up to the user to decide how their database system is configured.
"It is normal for replica set members to use different amounts of disk space Factors including: different oplog sizes, different levels of storage fragmentation, and MongoDB’s data file pre-allocation can lead to some variation in storage utilization between nodes. Storage use disparities will be most pronounced when you add members at different times."
Replication in MongoDB
A replica set is a group of MongoDB databases that maintain the same data set. A replica set contains several data-bearing databases and, optionally, one arbiter database. Of the data-bearing databases, one and only one member is deemed the primary, while the others are deemed secondaries. A replica set can contain up to 50 member databases.
Replication can occur across the internet and WAN connections.
The primary database receives all write operations. A replica set can have only one primary capable of confirming writes, although in some circumstances another database may temporarily believe itself to also be primary. The primary records all changes to its data sets in its operation log: a group of documents that keeps a rolling record of all operations that modify the data stored in your databases. The secondary members then copy and apply these operations in an asynchronous process. All replica set members contain a copy of the operation log, which allows them to maintain the current state of the database.
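The operation log is stored on each member in the capped collection local.oplog.rs. The following sketch, assuming the PyMongo driver and a hypothetical host name, reads the most recent entries from a member's operation log:

```python
# A minimal sketch, assuming PyMongo and a hypothetical replica set member host.
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.com:27017")
oplog = client["local"]["oplog.rs"]

# Show the five most recent operations; each entry carries a timestamp ("ts"),
# the operation type ("op"), and the namespace it modified ("ns").
for entry in oplog.find().sort("$natural", -1).limit(5):
    print(entry["ts"], entry.get("op"), entry.get("ns"))
```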
Replica set members send heartbeats (pings) to each other every two seconds. If a heartbeat does not return within 10 seconds, the other members mark the non-responsive member as inaccessible. Heartbeats facilitate replication: any secondary member can import operation log entries from any other member it receives a heartbeat from.
Choosing a new primary requires voting logic, and reliable voting logic requires an odd number of voting members (three or more). All voting members perform identical functions, and their outputs are compared by the voting logic. The voting logic establishes a majority when there is a disagreement, and the majority acts to override the member(s) that disagree. A single fault will therefore not interrupt normal operation.
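As a sketch of how such an odd-sized set might be initiated with the PyMongo driver (the host names and the set name rs0 are assumptions, not values from this system):

```python
# A minimal sketch, assuming PyMongo and hypothetical host names; run against
# one member before the replica set has been initiated.
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.com:27017", directConnection=True)

config = {
    "_id": "rs0",                  # replica set name (assumed)
    "members": [                   # an odd number of voting members
        {"_id": 0, "host": "db1.example.com:27017"},
        {"_id": 1, "host": "db2.example.com:27017"},
        {"_id": 2, "host": "db3.example.com:27017"},
    ],
}
client.admin.command("replSetInitiate", config)

# Report which member won the first election and became primary.
for member in client.admin.command("replSetGetStatus")["members"]:
    print(member["name"], member["stateStr"])
```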
Replica Set Primary
The primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations on the primary’s operation log. Secondary members replicate this log and apply the operations to their data sets.
In the following three-member replica set, the primary accepts all write operations. Then the secondaries replicate the operation log data to apply to their data sets.
All members of the replica set can accept read operations. However, by default, an application directs its read operations to the primary member.
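A sketch of how a driver directs reads, assuming PyMongo, hypothetical host names, and the set name rs0: by default reads go to the primary, and a read preference can route them to secondaries instead.

```python
# A minimal sketch, assuming PyMongo, hypothetical hosts, and set name "rs0".
from pymongo import MongoClient, ReadPreference

client = MongoClient(
    "mongodb://db1.example.com,db2.example.com,db3.example.com/?replicaSet=rs0"
)

# Default behaviour: read operations are directed to the primary.
primary_db = client["witness"]

# Explicitly allow reads from secondary members instead.
secondary_db = client.get_database(
    "witness", read_preference=ReadPreference.SECONDARY_PREFERRED
)

print(primary_db["events"].estimated_document_count())
print(secondary_db["events"].estimated_document_count())
```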
The replica set can have at most one primary. If the current primary becomes unavailable, an election determines the new primary.
In the following three-member replica set, the primary becomes unavailable. This triggers an election which selects one of the remaining secondaries as the new primary.
After a replica set has a stable primary, the election algorithm will make the secondary with the highest priority available call an election. Member priority affects both the timing and the outcome of elections; secondaries with higher priority call elections relatively sooner than secondaries with lower priority, and are also more likely to win. However, a lower priority instance can be elected as primary for brief periods, even if a higher priority secondary is available. Replica set members continue to call elections until the highest priority member available becomes primary.
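A sketch of raising one member's election priority through a reconfiguration, assuming PyMongo, a hypothetical host name, and the set name rs0; the command is submitted to the current primary:

```python
# A minimal sketch, assuming PyMongo, a hypothetical host, and set name "rs0".
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.com:27017/?replicaSet=rs0")

# Fetch the current configuration, raise one member's priority, bump the
# configuration version, and submit the new configuration.
config = client.admin.command("replSetGetConfig")["config"]
config["members"][1]["priority"] = 2   # favour this member in future elections
config["version"] += 1
client.admin.command("replSetReconfig", config)
```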
Replica Set Secondary
A secondary maintains a copy of the primary’s data set. To replicate data, a secondary applies operations from the primary’s operation log to its own data set in an asynchronous process. A replica set can have one or more secondaries.
The following three-member replica set has two secondary members. The secondaries replicate the primary’s operation log and apply the operations to their data sets.
Although clients cannot write data to secondaries, clients can read data from secondary members.
A secondary can become a primary. If the current primary becomes unavailable, the replica set holds an election to choose which of the secondaries becomes the new primary.
In the following three-member replica set, the primary becomes unavailable. This triggers an election where one of the remaining secondaries becomes the new primary.
You can configure a secondary member for a specific purpose. You can configure a secondary to:
Prevent it from becoming a primary in an election, which allows it to reside in a secondary data center or to serve as a cold standby.
Prevent applications from reading from it, which allows it to run applications that require separation from normal traffic.
Keep a running 'historical' snapshot for use in recovery from certain errors, such as unintentionally deleted databases.
Replica Set Hidden
A hidden member maintains a copy of the primary’s data set but is invisible to client applications. Hidden members are good for workloads with different usage patterns from the other members in the replica set. Hidden members must always be low priority members and so cannot become primary. Hidden members, however, may vote in elections. Hidden members receive no traffic other than basic replication. Use hidden members for dedicated tasks such as reporting and backups.
In the following five-member replica set, all four secondary members have copies of the primary’s data set, but one of the secondary members is hidden.
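A hidden member is described by its entry in the replica set configuration document. The following sketch shows what such an entry might look like (the host name and member _id are assumptions); it would be appended to the configuration's members array and applied with a reconfiguration as shown earlier:

```python
# A minimal sketch of a hidden member's configuration entry; the host name
# and _id are assumptions. Hidden members must have priority 0 and may still vote.
hidden_member = {
    "_id": 4,
    "host": "db5.example.com:27017",
    "priority": 0,   # can never become primary
    "hidden": True,  # invisible to client applications
    "votes": 1,      # still participates in elections
}
```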
Replica Set Delayed
Delayed members contain copies of a replica set’s data set. However, a delayed member’s data set reflects an earlier, or delayed, state of the set. For example, if the current time is 09:52 and a member has a delay of an hour, the delayed member has no operation more recent than 08:52.
Because delayed members are a “rolling backup” or a running “historical” snapshot of the data set, they may help you recover from various kinds of human error. For example, a delayed member can make it possible to recover from unsuccessful application upgrades and operator errors including dropped databases and collections.
Delayed members:
Must be low priority members. Set the priority to 0 to prevent a delayed member from becoming primary.
Should be hidden members. Always prevent applications from seeing and querying delayed members.
Can vote in elections for primary.
Must have a delay equal to or greater than your expected maintenance window durations.
Must have a delay smaller than the capacity of the operation log.
In the following five-member replica set, the primary and all secondaries have copies of the data set. One member applies operations with a delay of 3600 seconds (one hour). This delayed member is also hidden and is a priority 0 member:
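A delayed member's configuration entry might look like the following sketch (the host name and member _id are assumptions; the delay field is named secondaryDelaySecs in MongoDB 5.0 and later, and slaveDelay in earlier releases):

```python
# A minimal sketch of a delayed member's configuration entry; host name and
# _id are assumptions. The field is "slaveDelay" on MongoDB releases before 5.0.
delayed_member = {
    "_id": 3,
    "host": "db4.example.com:27017",
    "priority": 0,               # must never become primary
    "hidden": True,              # keep applications from querying it
    "secondaryDelaySecs": 3600,  # apply operations one hour behind the primary
}
```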
Replica Set Arbiter
An arbiter does not have a copy of the data set and cannot become a primary. Replica sets may have arbiters to add a vote in elections for primary. Arbiters always have exactly 1 election vote, and thus allow replica sets to have an odd number of voting members without the overhead of an additional member that replicates data.
For example, in the following replica set, an arbiter allows the set to have an odd number of votes for elections:
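A sketch of adding an arbiter to an existing set through a reconfiguration, assuming PyMongo, hypothetical host names, and the set name rs0:

```python
# A minimal sketch, assuming PyMongo, hypothetical hosts, and set name "rs0".
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.com:27017/?replicaSet=rs0")

config = client.admin.command("replSetGetConfig")["config"]
config["members"].append({
    "_id": 3,                             # must be unique within the set
    "host": "arbiter.example.com:27017",
    "arbiterOnly": True,                  # vote-only member, stores no data
})
config["version"] += 1
client.admin.command("replSetReconfig", config)
```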
Authentication: when running with authorisation, arbiters exchange credentials with the other members of the set to authenticate via keyfiles. MongoDB encrypts the authentication process, and the MongoDB authentication exchange is cryptographically secure.
Communication: because arbiters do not store data, they do not possess the internal table of user and role mappings used for authentication. Thus, the only way to log on to an arbiter with authorisation active is to use the localhost exception. The only communications between arbiters and other set members are votes during elections, heartbeats, and configuration data. These exchanges are not encrypted.
MongoDB Requirements
| Storage Engine | Default Operations Log Size | Lower Bound | Upper Bound |
|---|---|---|---|
| WiredTiger Storage Engine | 5% of free disk space | 990 MB | 50 GB |
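On the WiredTiger storage engine with MongoDB 3.6 or later, the operation log size can be inspected and changed at runtime. The following sketch assumes PyMongo and a hypothetical host name, and resizes the oplog to 16 GB:

```python
# A minimal sketch, assuming PyMongo, MongoDB 3.6+ with WiredTiger, and a
# hypothetical host; run against each member whose oplog should be resized.
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.com:27017", directConnection=True)

# Current oplog size in bytes (local.oplog.rs is a capped collection).
stats = client["local"].command("collStats", "oplog.rs")
print("current oplog size:", stats["maxSize"], "bytes")

# Resize the oplog to 16 GB; the "size" argument is given in megabytes.
client.admin.command({"replSetResizeOplog": 1, "size": 16000.0})
```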
MongoDB no longer supports 32-bit x86 platforms.
Related Information
- ClearWay™ System Design (Witness 4.0)
- Assigning a Camera (Witness 4.0)
- AdvanceGuard® System Design (Witness 4.0)
- Camera Controllers (Witness 4.0)
- Management Server Configuration (Witness 4.0)
- Track Engine Configuration (Witness 4.0)
- Notes (Witness 4.0)
- Database Configuration (Witness 4.0)
- Navigating AdvanceGuard® (Witness 4.0)
- Redundancy (Witness 4.0)