# Read concern and write concern in transactions in MongoDB - Time & Space Complexity
When using multi-document transactions in MongoDB, the read concern controls the consistency of the data you read, while the write concern controls how durably writes must be acknowledged before the commit succeeds.
We want to understand how the time to complete a transaction changes as data size grows.
Analyze the time complexity of this transaction with read and write concerns.
```javascript
const session = client.startSession();
session.startTransaction({
  readConcern: { level: 'snapshot' },
  writeConcern: { w: 'majority' }
});

const docs = await collection.find({ status: 'active' }, { session }).toArray();
await collection.updateMany(
  { status: 'active' },
  { $set: { processed: true } },
  { session }
);

await session.commitTransaction();
session.endSession();
```
This code reads active documents and updates them inside a transaction with specific read and write guarantees.
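The snippet above omits error handling: if the commit fails (for example, on a transient network error), the transaction should be aborted so none of its changes take effect. A minimal sketch of a safer wrapper, assuming `client` and `collection` come from a connected MongoDB Node.js driver instance (names chosen here for illustration):

```javascript
// Sketch only: `client` and `collection` are assumed to come from a
// connected MongoDB Node.js driver instance (connection code not shown).
async function markActiveProcessed(client, collection) {
  const session = client.startSession();
  try {
    session.startTransaction({
      readConcern: { level: 'snapshot' },
      writeConcern: { w: 'majority' }
    });

    const docs = await collection
      .find({ status: 'active' }, { session })
      .toArray();

    await collection.updateMany(
      { status: 'active' },
      { $set: { processed: true } },
      { session }
    );

    await session.commitTransaction();
    return docs.length;
  } catch (err) {
    // Roll back every read and write in the transaction on any failure.
    await session.abortTransaction();
    throw err;
  } finally {
    session.endSession();
  }
}
```

The `try`/`catch`/`finally` shape guarantees the session is always cleaned up, whether the transaction commits or aborts.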
Look for repeated work that grows with input size.
- Primary operation: scanning the documents that match `{ status: 'active' }` and updating each of them.
- How many times: once per matching document, for both the `find` and the `updateMany`.
As the number of active documents grows, the time to read and update them grows too.
| Matching documents (n) | Approx. work inside the transaction |
|---|---|
| 10 | ~10 document reads + ~10 document writes (one `updateMany` command) |
| 100 | ~100 document reads + ~100 document writes |
| 1000 | ~1000 document reads + ~1000 document writes |
Pattern observation: The work grows roughly in direct proportion to the number of matching documents.
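The growth pattern in the table can be checked with a small counting sketch in plain JavaScript. No database is involved; an in-memory array stands in for the collection:

```javascript
// Count the per-document work for a simulated collection of size n.
function transactionWork(n) {
  // Stand-in collection: every document matches { status: 'active' }.
  const collection = Array.from({ length: n }, () => ({ status: 'active' }));

  let reads = 0;
  let writes = 0;
  for (const doc of collection) {
    reads++;              // find() touches each matching document
    doc.processed = true; // updateMany() modifies each matching document
    writes++;
  }
  return { reads, writes };
}

console.log(transactionWork(10));   // { reads: 10, writes: 10 }
console.log(transactionWork(1000)); // { reads: 1000, writes: 1000 }
```

Doubling the number of matching documents doubles both counts, which is exactly the linear (O(n)) behavior described above.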
Time Complexity: O(n)
This means the time to complete the transaction grows linearly with the number of documents processed.
[X] Wrong: "Transactions always take the same time regardless of data size because they run as one unit."
[OK] Correct: The transaction time depends on how many documents are read and written; more data means more work and longer time.
Understanding how transaction time grows helps you design better database operations and explain your choices clearly in discussions.
What if we changed the read concern level from 'snapshot' to 'local'? How would the time complexity change?
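As a starting point for that question, only the transaction options need to change; a sketch of the relaxed options object (the rest of the transaction stays the same):

```javascript
// Transaction options with the read concern relaxed from 'snapshot' to
// 'local'. This changes which version of the data the reads may observe,
// not how many documents the find and updateMany have to touch.
const localTxnOptions = {
  readConcern: { level: 'local' },
  writeConcern: { w: 'majority' }
};

console.log(localTxnOptions.readConcern.level); // 'local'
```

Since the same documents are still scanned and updated, the time complexity would be expected to remain O(n); the read concern level trades isolation guarantees, not per-document work.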