Kibana setup and connection in Elasticsearch - Time & Space Complexity
When setting up Kibana and connecting it to Elasticsearch, it's important to understand how the time to establish this connection grows as the system scales.
We want to know how the connection process behaves when more data or nodes are involved.
Analyze the time complexity of the following Elasticsearch connection setup in Kibana.
```
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}

GET /_cluster/health?wait_for_status=green&timeout=30s
```
This snippet enables shard allocation across the cluster, then waits for the cluster to report green health before Kibana connects.
Look for repeated checks or polling during connection setup.
- Primary operation: Polling the cluster health status repeatedly until green.
- How many times: Depends on cluster size and health; each poll checks all nodes.
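The polling pattern can be sketched in a few lines. This is a minimal simulation, not the real Kibana client: `make_health_checker` is a hypothetical stub standing in for repeated `GET /_cluster/health` calls, so the loop structure (not the API) is the point.

```python
def make_health_checker(statuses):
    """Hypothetical stub: yields a scripted sequence of health statuses,
    standing in for real calls to GET /_cluster/health."""
    it = iter(statuses)
    return lambda: next(it)

def wait_for_green(check_health, max_polls=30):
    """Poll cluster health until it reports 'green', mirroring
    GET /_cluster/health?wait_for_status=green&timeout=30s.
    Returns the number of polls it took."""
    for polls in range(1, max_polls + 1):
        if check_health() == "green":
            return polls
    raise TimeoutError("cluster never reached green status")

# A cluster that reports "yellow" twice before turning "green":
check = make_health_checker(["yellow", "yellow", "green"])
print(wait_for_green(check))  # → 3
```

The total wait is the number of polls times the cost of each poll, which is why the per-poll cost matters as the cluster grows.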
As the number of nodes or shards increases, each health check inspects more components.
| Input Size (nodes/shards) | Approx. Operations per Poll |
|---|---|
| 10 | ~10 checks |
| 100 | ~100 checks |
| 1000 | ~1000 checks |
Pattern observation: The work grows roughly in direct proportion to the number of nodes or shards.
Time Complexity: O(n)
This means the time for a single health check grows linearly with the number of nodes or shards; the total connection wait is that per-poll cost multiplied by the number of polls needed to reach green.
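The linear per-poll cost can be made concrete. This is a simplified model, assuming a health check reduces to inspecting every shard's status once; real Elasticsearch health aggregation is more involved, but the O(n) scan is the same idea.

```python
def health_check(shard_statuses):
    """One simulated health poll: inspect every shard, so the work is
    O(n) in the number of shards. Returns (worst status, shards inspected)."""
    rank = {"green": 0, "yellow": 1, "red": 2}
    worst = "green"
    inspected = 0
    for status in shard_statuses:
        inspected += 1          # one unit of work per shard
        if rank[status] > rank[worst]:
            worst = status      # overall health is the worst shard's status
    return worst, inspected

# 1000 shards → 1000 inspections in a single poll:
status, ops = health_check(["green"] * 1000)
print(status, ops)  # → green 1000
```

Doubling the shard count doubles `ops`, matching the linear pattern in the table above.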
[X] Wrong: "The connection time stays the same no matter how big the cluster is."
[OK] Correct: Each health check must look at every node or shard, so more nodes mean more work and longer checks.
Understanding how connection and health checks scale helps you explain system responsiveness and reliability in real projects.
What if Kibana used cached health info instead of polling every time? How would the time complexity change?
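One way to reason about that question: with a time-to-live cache, most lookups become O(1), and the O(n) scan runs only when the cache expires. The sketch below is a hypothetical wrapper (not how Kibana is actually implemented), assuming a `fetch` callable that performs the expensive check.

```python
import time

class CachedHealth:
    """Cache the last health result for ttl seconds: within the TTL a
    lookup is O(1); the O(n) check runs only on cache expiry."""
    def __init__(self, fetch, ttl=5.0):
        self.fetch = fetch        # the expensive O(n) health check
        self.ttl = ttl
        self._value = None
        self._expires = 0.0

    def status(self):
        now = time.monotonic()
        if now >= self._expires:
            self._value = self.fetch()      # O(n) refresh on expiry
            self._expires = now + self.ttl
        return self._value                  # O(1) cache hit otherwise

# Count how often the expensive check actually runs:
calls = []
def expensive_check():
    calls.append(1)
    return "green"

cached = CachedHealth(expensive_check, ttl=60.0)
cached.status(); cached.status(); cached.status()
print(len(calls))  # → 1
```

The trade-off is staleness: a cached "green" may hide a node that just failed, which is why health endpoints typically poll rather than cache.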