Resource requests and limits in Kubernetes - Time & Space Complexity
We want to understand how setting resource requests and limits affects scheduling and running containers in Kubernetes.
How does the system's work grow as we add more containers with these settings?
Analyze the time complexity of scheduling pods with resource requests and limits.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "100m"
        memory: "200Mi"
      limits:
        cpu: "200m"
        memory: "400Mi"
```
This pod declares CPU and memory `requests`, which the scheduler uses to find a node with enough free capacity, and `limits`, which are enforced at runtime (CPU is throttled at the limit; exceeding the memory limit can get the container killed).
- Primary operation: Scheduler checks each node's available resources against pod requests.
- How many times: For each pod, the scheduler loops through all nodes to find a fit.
As the number of pods grows, the scheduler repeats this node-by-node feasibility check for every new pod.
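The loop structure above can be sketched in a few lines of Python. This is an illustrative model, not the real kube-scheduler: the names (`fits`, `schedule`) and the first-fit placement strategy are simplifications, but the nested loop — every pod compared against nodes until one fits — is the shape that drives the complexity.

```python
def fits(node, requests):
    """A node fits if it has enough free CPU (millicores) and memory (Mi)."""
    return (node["free_cpu_m"] >= requests["cpu_m"]
            and node["free_mem_mi"] >= requests["mem_mi"])

def schedule(pods, nodes):
    """Place each pod on the first node with room: up to p * n checks."""
    placements = {}
    for pod_name, requests in pods.items():     # p iterations
        for node in nodes:                      # up to n checks per pod
            if fits(node, requests):
                # Reserve the requested resources on the chosen node.
                node["free_cpu_m"] -= requests["cpu_m"]
                node["free_mem_mi"] -= requests["mem_mi"]
                placements[pod_name] = node["name"]
                break
    return placements

nodes = [
    {"name": "node-a", "free_cpu_m": 150, "free_mem_mi": 300},
    {"name": "node-b", "free_cpu_m": 1000, "free_mem_mi": 2000},
]
pods = {"example-pod": {"cpu_m": 100, "mem_mi": 200}}
print(schedule(pods, nodes))  # -> {'example-pod': 'node-a'}
```

Note that the inner loop is the "primary operation" from the bullet list: one request-vs-capacity comparison per node visited.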
| Input size (pods, p) | Approx. operations (node checks, with n nodes) |
|---|---|
| 10 | 10 × n |
| 100 | 100 × n |
| 1000 | 1000 × n |
Pattern observation: The work grows linearly with the number of pods times the number of nodes.
Time Complexity: O(p × n)
Space Complexity: O(n), since the scheduler keeps track of the available resources on each node.
This means the scheduler's work grows proportionally to the number of pods times the number of nodes: doubling either the pod count or the node count roughly doubles the number of checks.
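The growth pattern in the table can be confirmed with a small counting experiment. This is a toy worst case, not the real scheduler: no node ever fits, so every pod is compared against every node and the count comes out to exactly p × n.

```python
def count_checks(num_pods, num_nodes):
    """Count feasibility checks when no node fits (worst case)."""
    checks = 0
    nodes = [{"free_cpu_m": 0} for _ in range(num_nodes)]  # zero capacity everywhere
    for _ in range(num_pods):
        for node in nodes:
            checks += 1                      # one request-vs-capacity comparison
            if node["free_cpu_m"] >= 100:    # never true in this setup
                break
    return checks

for p in (10, 100, 1000):
    print(p, count_checks(p, num_nodes=50))  # 500, 5000, 50000: linear in p
```

Scaling p by 10 scales the check count by 10, matching the O(p × n) pattern.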
[X] Wrong: "Scheduling time depends only on the number of pods."
[OK] Correct: The scheduler must check each pod against all nodes, so nodes count matters too.
Understanding how resource requests and limits affect scheduling helps you explain system behavior clearly and shows you grasp real cluster operations.
"What if the scheduler used a cache to track node resources instead of checking all nodes each time? How would the time complexity change?"