Atlas triggers overview in MongoDB - Time & Space Complexity
When using Atlas triggers, it's important to understand how your trigger's running time changes as your data grows. The real question is how the trigger's work scales as more documents exist and more change events fire.
Analyze the time complexity of this Atlas trigger function:

```javascript
exports = async function(changeEvent) {
  // _id of the document that fired this change event
  const docId = changeEvent.documentKey._id;
  const collection = context.services
    .get("mongodb-atlas")
    .db("myDB")
    .collection("myCollection");

  // One read targeted by _id (a single-document, indexed lookup) ...
  const doc = await collection.findOne({ _id: docId });
  if (doc) {
    // ... and at most one update, also targeted by _id
    await collection.updateOne({ _id: docId }, { $set: { processed: true } });
  }
};
```
This trigger runs when a document changes, reads that document, and updates a field.
Look for repeated actions inside the trigger.
- Primary operation: Reading and updating one document per trigger event.
- How many times: Once per trigger invocation, no loops inside the function.
The trigger handles one document change at a time, so the work per event stays the same no matter how many documents exist.
| Trigger Events (n) | Approx. Operations |
|---|---|
| 10 | 10 reads + 10 updates |
| 100 | 100 reads + 100 updates |
| 1000 | 1000 reads + 1000 updates |
Pattern observation: Each trigger event costs the same work, so total work grows linearly with number of events.
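The constant-per-event, linear-in-total pattern can be sketched with a small simulation in plain Node.js. This is an illustrative model, not Atlas internals: the `Map` stands in for an `_id`-indexed collection, and `stats` simply tallies operations.

```javascript
// Simulate trigger invocations: each event does exactly one read and at most
// one update, regardless of how many documents the collection holds.
function runTrigger(collection, docId, stats) {
  stats.reads += 1;                   // findOne by _id
  const doc = collection.get(docId);  // O(1) keyed lookup, like an _id index hit
  if (doc !== undefined) {
    stats.updates += 1;               // updateOne by _id
    collection.set(docId, { ...doc, processed: true });
  }
}

// A "collection" of 1,000 documents keyed by _id.
const collection = new Map();
for (let i = 0; i < 1000; i++) collection.set(i, { _id: i, processed: false });

// Fire n change events and count the operations performed.
for (const n of [10, 100, 1000]) {
  const stats = { reads: 0, updates: 0 };
  for (let i = 0; i < n; i++) runTrigger(collection, i % 1000, stats);
  console.log(`${n} events -> ${stats.reads} reads, ${stats.updates} updates`);
}
```

The operation counts track the number of events, not the size of the collection, which is exactly the table above.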
Time Complexity: O(1) per trigger event
This means each invocation runs in constant time: a fixed amount of work (one targeted read and at most one targeted update) regardless of how large the collection is.
[X] Wrong: "The trigger will slow down as the database grows because it scans all documents."
[OK] Correct: The trigger only reads and updates the changed document, not the whole collection, so its work stays constant per event.
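To see why the misconception fails, compare how many documents each access pattern examines. This is a minimal sketch in plain Node.js under the assumption that an `_id` lookup touches one document while an unindexed scan may touch all of them; `findById` and `findByScan` are hypothetical names for the two patterns, not MongoDB APIs.

```javascript
// Count the documents examined by a keyed lookup versus a full scan.
function findById(collection, docId) {
  return { doc: collection.get(docId), examined: 1 };  // index-style: one document
}

function findByScan(collection, docId) {
  let examined = 0;
  for (const [, doc] of collection) {
    examined += 1;
    if (doc._id === docId) return { doc, examined };   // scan-style: up to n documents
  }
  return { doc: undefined, examined };
}

const collection = new Map();
for (let i = 0; i < 1000; i++) collection.set(i, { _id: i });

console.log(findById(collection, 999).examined);   // 1: unaffected by collection size
console.log(findByScan(collection, 999).examined); // 1000: grows with the collection
```

The trigger above behaves like `findById`: its cost per event does not depend on how many documents exist.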
Understanding how triggers work helps you design efficient event-driven systems that scale well as data grows.
"What if the trigger updated multiple documents instead of one? How would the time complexity change?"