Point-in-time API in Elasticsearch - Time & Space Complexity
When using the Point-in-time API in Elasticsearch, it's important to understand how the time to get results changes as your data grows. Specifically: how does the cost of searching with a point-in-time snapshot grow as we request more results or store more documents?
Analyze the time complexity of this Elasticsearch Point-in-time search snippet.
```
POST /_search
{
  "pit": { "id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAA" },
  "size": 100,
  "query": { "match_all": {} },
  "sort": ["_shard_doc"]
}
```
Note that a point-in-time search must not specify an index in the request path; the target index is encoded in the PIT id, and Elasticsearch rejects requests that combine `pit` with an index name.
This code searches using a point-in-time snapshot to get consistent results across pages.
To analyze the complexity, look at which operation repeats when paginating with the Point-in-time API.
- Primary operation: Scanning documents in shards using the point-in-time snapshot.
- How many times: Each search request reads a batch of documents (size), repeating until all results are fetched.
As you ask for more results, the number of operations grows roughly in proportion to how many documents you want.
| Results requested (n) | Approx. batch reads (batch size = 100) |
|---|---|
| 10 | 1 |
| 100 | 1 |
| 1000 | 10 |
Pattern observation: More results mean more batches to read, so execution grows linearly with requested results.
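The batch arithmetic behind this pattern is just a ceiling division, which can be sketched with a tiny helper (assuming every page except possibly the last comes back full):

```python
import math

def batch_reads(total_docs: int, batch_size: int = 100) -> int:
    """Number of paginated search requests needed to fetch total_docs results."""
    return math.ceil(total_docs / batch_size)
```

For example, `batch_reads(10)` and `batch_reads(100)` both need 1 request, while `batch_reads(1000)` needs 10, matching the table above: the request count grows linearly in n for a fixed batch size.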
Time Complexity: O(n)
This means the time to get results grows roughly in direct proportion to how many documents you want to retrieve.
[X] Wrong: "Using point-in-time means the search time stays the same no matter how many results I ask for."
[OK] Correct: Even with point-in-time, Elasticsearch must read through documents to return results, so asking for more results takes more time.
Understanding how point-in-time searches scale helps you explain how Elasticsearch handles consistent snapshots and pagination efficiently in real projects.
What if we increased the batch size (size parameter) in the search? How would the time complexity change?