Why probes keep applications healthy in Kubernetes - Performance Analysis
We want to understand how the time it takes to check application health grows as the number of pods increases.
How does Kubernetes manage many health checks efficiently?
Analyze the time complexity of the following Kubernetes readiness probe configuration.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app-container
    image: myapp:latest
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```
This snippet configures a readiness probe that sends an HTTP GET to the /health endpoint on port 8080 every 10 seconds, starting after an initial 5-second delay.
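To see what the schedule implied by `initialDelaySeconds` and `periodSeconds` looks like, here is a minimal sketch (the helper `probe_times` is hypothetical, not part of any Kubernetes API) that lists when probes fire during a given time horizon:

```python
def probe_times(initial_delay=5, period=10, horizon=60):
    """Return the times (in seconds) at which the kubelet would fire
    a probe, given initialDelaySeconds, periodSeconds, and a horizon."""
    t = initial_delay
    times = []
    while t <= horizon:
        times.append(t)
        t += period  # periodSeconds between consecutive checks
    return times

# With the configuration above, probes in the first minute fire at:
# 5, 15, 25, 35, 45, 55 seconds
print(probe_times())
```

The key point is that the check never terminates on its own: for as long as the pod runs, a new probe fires every `periodSeconds`.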
Identify the operations that repeat, the configuration's analogue of loops, recursion, or array traversals in code.
- Primary operation: the kubelet sends an HTTP GET request to each pod's /health endpoint.
- How many times: once per pod every 10 seconds, for as long as the pod runs.
As the number of pods increases, the total number of health checks grows proportionally.
| Input Size (pods) | Approx. Health Checks per 10 seconds |
|---|---|
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
Pattern observation: The total checks increase linearly as more pods are added.
Time Complexity: O(n)
This means the total health check operations grow directly in proportion to the number of pods.
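The linear relationship in the table can be written as a one-line model. This is an illustrative sketch (the function name `checks_per_window` is an assumption, not a Kubernetes API), reproducing the table's numbers:

```python
def checks_per_window(n_pods, period=10, window=10):
    """Total readiness checks issued across n_pods in a window of
    `window` seconds, with one check per pod every `period` seconds."""
    return n_pods * (window // period)

# Reproduces the table: 10 -> 10, 100 -> 100, 1000 -> 1000
for n in (10, 100, 1000):
    print(n, checks_per_window(n))
```

Doubling the number of pods doubles the number of checks, which is exactly what O(n) growth means.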
[X] Wrong: "Health checks run once and then stop, so adding more pods doesn't increase checks."
[OK] Correct: Health probes run repeatedly to keep checking pod status, so more pods mean more checks over time.
Understanding how repeated health checks scale helps you design systems that stay reliable as they grow.
"What if the probe interval is halved? How would the time complexity change?"