Kubernetes architecture (control plane and nodes) - Time & Space Complexity
We want to understand how the work done by Kubernetes grows as we add more nodes and workloads.
How does the system handle more tasks and keep everything running smoothly?
Analyze the time complexity of the control plane managing nodes.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx
  nodeSelector:
    disktype: ssd
```
This manifest constrains the pod to nodes labeled `disktype: ssd`, showing how the control plane filters candidate nodes when placing a pod.
Look at what repeats when scheduling pods.
- Primary operation: The scheduler checks all nodes to find a match for the pod's requirements.
- How many times: It does this for each pod and each node available.
As the number of nodes or pods grows, the scheduler's work grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 nodes, 10 pods | ~100 checks (each of 10 pods checks 10 nodes) |
| 100 nodes, 100 pods | ~10,000 checks |
| 1000 nodes, 1000 pods | ~1,000,000 checks |
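The growth in the table can be verified with a short simulation. This is an illustrative sketch, not the real kube-scheduler: it models placement as a single label match and counts every pod-to-node comparison. The names (`find_node`, `schedule_all`) are hypothetical and not Kubernetes API calls.

```python
# Sketch: count pod-to-node checks in a naive scheduling loop.
# Names are hypothetical; the real scheduler filters and scores many factors.

def find_node(pod_selector, nodes):
    """Scan every node; return the first whose labels satisfy the selector."""
    checks = 0
    match = None
    for node in nodes:
        checks += 1  # one comparison per node, even after a match is found
        if match is None and all(node.get(k) == v for k, v in pod_selector.items()):
            match = node
    return match, checks

def schedule_all(num_pods, num_nodes):
    """Place num_pods pods across num_nodes nodes; return total checks."""
    nodes = [{"name": f"node-{i}", "disktype": "ssd"} for i in range(num_nodes)]
    total_checks = 0
    for _ in range(num_pods):
        _, checks = find_node({"disktype": "ssd"}, nodes)
        total_checks += checks
    return total_checks

for n in (10, 100, 1000):
    print(f"{n} pods x {n} nodes -> {schedule_all(n, n)} checks")
```

Running this prints 100, 10,000, and 1,000,000 checks for the three table rows, matching the O(p × n) pattern: each pod triggers a full scan of all nodes.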
Pattern observation: The work grows quickly as both pods and nodes increase.
Time Complexity: O(p × n)
This means the scheduler's work grows proportionally to the number of pods times the number of nodes.
[X] Wrong: "The scheduler only checks one node per pod, so it's always fast."
[OK] Correct: The scheduler must consider all nodes to find the best fit, so work grows with nodes and pods.
Understanding how Kubernetes scales helps you explain system behavior and design choices clearly.
"What if the scheduler used caching to remember node states? How would that affect the time complexity?"
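One way to explore that question: if node states were indexed by label ahead of time, each placement could become a dictionary lookup instead of a full scan. The sketch below is a deliberate simplification of "caching node states" (the real scheduler scores many factors, not one label), and the names (`build_index`, `schedule_with_index`) are hypothetical.

```python
from collections import defaultdict

# Sketch: precompute an index of nodes keyed by a label's value.
# Building the index costs O(n) once; each lookup is O(1) on average.

def build_index(nodes, label):
    """Group nodes by the value of one label."""
    index = defaultdict(list)
    for node in nodes:
        index[node.get(label)].append(node)
    return index

def schedule_with_index(selector_value, index):
    """Pick the first cached node matching the selector value, if any."""
    candidates = index.get(selector_value, [])
    return candidates[0] if candidates else None

nodes = [{"name": f"node-{i}", "disktype": "ssd" if i % 2 else "hdd"}
         for i in range(1000)]
index = build_index(nodes, "disktype")
print(schedule_with_index("ssd", index)["name"])
```

Under this assumption, scheduling p pods drops from O(p × n) comparisons to roughly O(n + p): one pass to build the index, then a constant-time lookup per pod. The trade-off is keeping the cache consistent as node states change, which is why real schedulers still re-evaluate node fitness.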