Why documents are the unit of data in Elasticsearch - Performance Analysis
When working with Elasticsearch, documents are the basic units of data we store and search.
A natural question is how processing time grows as we add more documents.
Let's analyze the time complexity of indexing multiple documents.
```json
POST /my_index/_bulk
{ "index": { "_id": "1" } }
{ "name": "Alice", "age": 30 }
{ "index": { "_id": "2" } }
{ "name": "Bob", "age": 25 }
{ "index": { "_id": "3" } }
{ "name": "Carol", "age": 27 }
```
This bulk request indexes several documents in a single call: each action line is followed by the document source, and each document is one unit of data.
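The same bulk body can be assembled programmatically. Here is a minimal Python sketch (using the index name and documents from the example above) that builds the newline-delimited body the `_bulk` endpoint expects; it only constructs the request text, it does not send it:

```python
import json

# Documents from the example above; ids are assigned explicitly.
docs = [
    ("1", {"name": "Alice", "age": 30}),
    ("2", {"name": "Bob", "age": 25}),
    ("3", {"name": "Carol", "age": 27}),
]

def build_bulk_body(docs):
    """Build the newline-delimited body of a _bulk request:
    one action line, then one source line, per document."""
    lines = []
    for doc_id, source in docs:
        lines.append(json.dumps({"index": {"_id": doc_id}}))
        lines.append(json.dumps(source))
    # The bulk API requires the body to end with a newline.
    return "\n".join(lines) + "\n"

body = build_bulk_body(docs)
print(body)
```

Note that the loop runs once per document, which is exactly the linear pattern analyzed below.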
Consider what repeats as documents are added:
- Primary operation: Indexing each document one by one.
- How many times: Once per document added.
As you add more documents, the work grows with the number of documents.
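A quick way to see this growth is to count the per-document work directly. This is an illustrative Python sketch, not the Elasticsearch internals: it simply counts one indexing operation per document, which is enough to show the pattern in the table below.

```python
def count_indexing_operations(num_docs):
    """Simulate bulk indexing: one primary operation per document."""
    operations = 0
    for _ in range(num_docs):
        operations += 1  # stand-in for parsing, routing, and writing one document
    return operations

# Operations scale directly with input size.
for n in (10, 100, 1000):
    print(n, count_indexing_operations(n))
```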
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 indexing operations |
| 100 | 100 indexing operations |
| 1000 | 1000 indexing operations |
Pattern observation: The time grows directly with the number of documents added.
Time Complexity: O(n)
This means indexing time grows linearly: doubling the number of documents roughly doubles the time.
[X] Wrong: "Adding more documents takes the same time no matter how many there are."
[OK] Correct: Each document needs its own processing, so more documents mean more work and more time.
Understanding how data size affects processing time helps you reason about how Elasticsearch scales in real projects.
"What if we indexed documents in parallel instead of one by one? How would the time complexity change?"
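One way to explore that question is with a sketch: split the documents across several workers and process the batches concurrently. The total work is still O(n), since every document must be handled once, but with p workers the wall-clock time approaches O(n/p). The Python sketch below uses a thread pool with a simulated per-document cost; the worker count and batching scheme are illustrative choices, not Elasticsearch settings:

```python
from concurrent.futures import ThreadPoolExecutor

def index_batch(batch):
    """Simulate indexing a batch; returns the number of docs processed."""
    processed = 0
    for _doc in batch:
        processed += 1  # stand-in for per-document indexing work
    return processed

def parallel_index(docs, workers=4):
    """Split docs into one batch per worker and index them concurrently.
    Total operations remain O(n); wall-clock time approaches O(n / workers)."""
    batches = [docs[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(index_batch, batches))

docs = [{"id": i} for i in range(1000)]
print(parallel_index(docs))  # 1000 documents processed in total
```

The key takeaway: parallelism shortens elapsed time but does not change the O(n) total amount of work.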