Constant score query in Elasticsearch - Time & Space Complexity
We want to understand how the time to run a constant score query changes as the data grows.
Specifically, how does the query's work increase when there are more documents?
Analyze the time complexity of the following query:
```json
{
  "constant_score": {
    "filter": {
      "term": { "status": "active" }
    },
    "boost": 1.2
  }
}
```
This query finds documents where the status is "active" and assigns a fixed score to each match.
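To make the behaviour concrete, here is a minimal in-memory simulation of what the query does: filter documents by a field value and give every match the same fixed score. This is only a sketch of the filter-then-fixed-score idea, not Elasticsearch's actual implementation (which consults an inverted index).

```python
def constant_score_query(docs, field, value, boost):
    """Return (doc, score) pairs for docs whose field equals value.

    Every match gets the same fixed score (the boost), mirroring how
    constant_score ignores relevance scoring.
    """
    return [(doc, boost) for doc in docs if doc.get(field) == value]

docs = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "inactive"},
    {"id": 3, "status": "active"},
]

results = constant_score_query(docs, "status", "active", 1.2)
# Every matching document receives the same score, 1.2.
```

Note that the fixed score does not remove the need to examine documents: the list comprehension still visits every entry to decide whether it matches.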
Identify the repeated operations: the loops, recursion, or index traversals that run more than once.
- Primary operation: looking up the term in the inverted index, then iterating its posting list of matching documents.
- How many times: once per matching document; in the worst case, every document matches.
As the number of documents grows, the posting list for a term can grow with it, so the query touches more entries to collect all matches.
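A toy inverted index makes this scanning cost visible. In this sketch (a simplification of the structure Elasticsearch uses), the term lookup itself is a hash lookup, and the remaining work scales with the length of the posting list, i.e. the number of matching documents.

```python
from collections import defaultdict

def build_inverted_index(docs, field):
    """Map each field value to the ids of documents containing it."""
    index = defaultdict(list)
    for doc_id, doc in enumerate(docs):
        index[doc[field]].append(doc_id)
    return index

docs = [{"status": "active"}, {"status": "inactive"}, {"status": "active"}]
index = build_inverted_index(docs, "status")

# Posting list for the term "active": the ids of all matching documents.
matches = index["active"]
# Iterating this list costs O(number of matches), which is O(n) in the
# worst case where every document matches.
```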
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: The work grows roughly in direct proportion to the number of documents.
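The table above can be reproduced with a simple counter. This sketch scans n synthetic documents and counts one comparison per document, so the number of checks equals n exactly:

```python
def count_checks(n):
    """Linearly scan n synthetic documents, counting one check per doc."""
    docs = [{"status": "active" if i % 2 == 0 else "inactive"}
            for i in range(n)]
    checks = 0
    for doc in docs:
        checks += 1                      # one comparison per document
        _ = doc["status"] == "active"    # the filter check itself
    return checks

counts = [count_checks(n) for n in (10, 100, 1000)]
# counts grows in direct proportion to n: [10, 100, 1000]
```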
Time Complexity: O(n)
This means the time to run the query grows linearly as the number of documents increases.
[X] Wrong: "The constant score query runs instantly no matter how big the data is."
[OK] Correct: Even though the score is fixed, the query still needs to check documents to find matches, so time grows with data size.
Understanding how queries scale helps you write efficient searches and explain your choices clearly in real projects.
"What if we replaced the term filter with a match_all filter? How would the time complexity change?"