Alerting and notifications in Elasticsearch - Time & Space Complexity
When using alerting and notifications in Elasticsearch, it's important to understand how the time to check conditions and send alerts grows as data increases. In other words: how does the system's work change when there are more documents or alerts to process?
Analyze the time complexity of the following alerting query and notification process.
```json
POST /_watcher/watch/_execute
{
  "watch": {
    "trigger": { "schedule": { "interval": "1m" } },
    "input": {
      "search": {
        "request": {
          "indices": ["logs"],
          "body": { "query": { "range": { "timestamp": { "gte": "now-1m" } } } }
        }
      }
    },
    "condition": { "compare": { "ctx.payload.hits.total": { "gt": 100 } } },
    "actions": { "notify": { "email": { "to": "admin@example.com" } } }
  }
}
```
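Note that `_execute` runs the watch a single time, which is useful for testing. For the schedule to actually fire every minute, the watch must be registered under an ID. A minimal sketch, assuming a placeholder ID of `log_volume_alert` (the body is not wrapped in a `watch` object when registering):

```json
PUT /_watcher/watch/log_volume_alert
{
  "trigger": { "schedule": { "interval": "1m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["logs"],
        "body": { "query": { "range": { "timestamp": { "gte": "now-1m" } } } }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 100 } } },
  "actions": { "notify": { "email": { "to": "admin@example.com" } } }
}
```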
Once registered, this watch runs every minute: it searches the last minute of logs, checks whether the hit count exceeds 100, and sends an email alert if it does.
Look at what repeats when this alert runs.
- Primary operation: Searching documents in the "logs" index within the last minute.
- How many times: the watch fires once per minute, but each run's search must examine every document whose timestamp falls within the one-minute window.
As the number of documents in the last minute grows, the search takes longer.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 document checks |
| 100 | 100 document checks |
| 1000 | 1000 document checks |
Pattern observation: The work grows roughly in direct proportion to the number of documents in the time range.
Time Complexity: O(n)
This means the time to run the alert grows linearly with the number of documents checked.
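Because the condition compares only the hit count, the search never needs to return document contents. A common constant-factor optimization (the matching work is still O(n)) is to set `size` to 0 in the search body so Elasticsearch counts matches without fetching any hits. A sketch of just the modified input section:

```json
"input": {
  "search": {
    "request": {
      "indices": ["logs"],
      "body": {
        "size": 0,
        "query": { "range": { "timestamp": { "gte": "now-1m" } } }
      }
    }
  }
}
```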
[X] Wrong: "The alert runs instantly no matter how many documents there are."
[OK] Correct: The search must look at each relevant document, so more data means more work and longer time.
Understanding how alerting scales helps you design efficient monitoring systems and shows you can think about performance in real-world data tasks.
"What if we changed the time range from 1 minute to 1 hour? How would the time complexity change?"