Replica set architecture mental model in MongoDB - Time & Space Complexity
When working with MongoDB replica sets, it's important to understand how the system handles data replication and failover. Here we analyze the time complexity of data replication in a replica set: how does the time to replicate a write grow as the number of nodes increases?
```javascript
// Simplified replica set write operation
const writeData = (data) => {
  primary.insert(data); // write to primary
  secondaryNodes.forEach((node) => {
    node.syncFrom(primary); // replicate data
  });
};
```
This code writes data to the primary node and then replicates it to all secondary nodes.
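To see the cost of that fan-out concretely, here is a small runnable sketch using mock objects. The `primary`, `secondaryNodes`, and their methods are illustrative stand-ins, not the MongoDB driver API; the counter just tallies one replication step per secondary.

```javascript
// Runnable mock of the write path above; all names are illustrative.
let replicationCalls = 0;

const primary = {
  oplog: [],
  insert(data) { this.oplog.push(data); }, // write to primary's log
};

const makeSecondary = () => ({
  oplog: [],
  syncFrom(source) {
    this.oplog = [...source.oplog]; // copy the primary's log
    replicationCalls += 1;          // count one replication step
  },
});

const secondaryNodes = [makeSecondary(), makeSecondary(), makeSecondary()];

const writeData = (data) => {
  primary.insert(data);
  secondaryNodes.forEach((node) => node.syncFrom(primary));
};

writeData({ _id: 1, name: "doc" });
console.log(replicationCalls); // one syncFrom call per secondary
```

With three secondaries, a single write triggers three replication calls, which is exactly the pattern the analysis below counts.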
Look for repeated actions that affect performance.
- Dominant operation: the loop that replicates the write to every secondary node.
- How many times it runs: once per secondary node, for each write.
As the number of secondary nodes grows, the replication work grows too.
| Input Size (number of secondaries) | Approx. Operations |
|---|---|
| 3 | 3 replication calls |
| 10 | 10 replication calls |
| 100 | 100 replication calls |
Pattern observation: The replication work increases directly with the number of secondary nodes.
Time Complexity: O(n)
This means the time to replicate a write grows linearly with the number of secondary nodes, at least in this simplified synchronous model.
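The linear pattern in the table can be checked directly. The helper below is a made-up name for illustration; it builds `n` mock secondaries and returns how many replication calls one write produces.

```javascript
// Hypothetical helper: count replication calls for n secondaries.
const replicationCallsFor = (numSecondaries) => {
  let calls = 0;
  const secondaries = Array.from({ length: numSecondaries }, () => ({
    syncFrom() { calls += 1; }, // one call per secondary
  }));
  secondaries.forEach((node) => node.syncFrom());
  return calls; // grows linearly with numSecondaries: O(n)
};

console.log(replicationCallsFor(3));   // 3
console.log(replicationCallsFor(100)); // 100
```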
[X] Wrong: "Replication time stays the same no matter how many secondaries there are."
[OK] Correct: Each secondary node must receive the data, so more nodes mean more replication steps.
Understanding how replication scales helps you explain system behavior and design choices clearly in interviews.
Follow-up question to consider: "What if we used asynchronous replication instead of synchronous? How would the time complexity change?"
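One way to reason about that question: with asynchronous replication, the client's acknowledgment no longer waits on the secondaries, so perceived write latency becomes O(1), while the cluster still performs O(n) replication work in the background. The sketch below contrasts the two, assuming the same mock-object style as before (none of these names are driver APIs).

```javascript
// Synchronous: the caller waits for all n secondaries -> O(n) ack latency.
const writeSync = (data, primary, secondaries) => {
  primary.insert(data);
  secondaries.forEach((node) => node.syncFrom(primary)); // done before ack
  return "acknowledged";
};

// Asynchronous: the caller returns after the primary write -> O(1) ack
// latency, but total replication work across the set is still O(n).
const writeAsync = (data, primary, secondaries) => {
  primary.insert(data);
  queueMicrotask(() => {
    secondaries.forEach((node) => node.syncFrom(primary)); // deferred
  });
  return "acknowledged"; // returns before replication completes
};
```

So asynchrony changes *who pays* the O(n) cost (background replication rather than the waiting client), not the total amount of replication work.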