Memory requests and limits in Kubernetes - Time & Space Complexity
We want to understand how the time to schedule and run pods changes as we increase memory requests and limits in Kubernetes.
How does setting memory requests and limits affect the system's work as more pods are added?
Analyze the time complexity of the following Kubernetes pod spec snippet.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"
```
This snippet sets memory requests and limits for a container inside a pod, guiding how the scheduler allocates resources.
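A single pod-node feasibility check can be sketched in Python (a simplification with hypothetical helper names; the real kube-scheduler is written in Go and considers many more factors than memory):

```python
def parse_mi(quantity: str) -> int:
    """Parse a memory quantity like '256Mi' into MiB (sketch; handles only 'Mi')."""
    assert quantity.endswith("Mi")
    return int(quantity[:-2])

def node_fits(pod_request_mi: int, node_allocatable_mi: int, node_requested_mi: int) -> bool:
    """One pod-node check: is the node's unreserved memory >= the pod's request?"""
    return node_allocatable_mi - node_requested_mi >= pod_request_mi

# The example pod requests 256Mi. A node with 4096Mi allocatable but 3900Mi
# already requested by other pods cannot fit it.
print(node_fits(parse_mi("256Mi"), 4096, 3900))  # False
```

Note that the check compares the pod's *request* against memory already *requested* on the node, not actual usage; the limit plays no part here.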
Identify the loops, recursion, or repeated traversals.
- Primary operation: The Kubernetes scheduler checks each pod's memory requests against available nodes.
- How many times: once per (pod, node) pair — for each pod being scheduled, the scheduler compares its memory request against each candidate node, and this repeats for every pod.
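The repeated operation above can be sketched as a nested loop (a simplification: the real scheduler runs a filter-and-score cycle per pod rather than a literal double loop, but the count of memory comparisons is the same):

```python
def count_checks(pod_requests_mi: list[int], node_allocatable_mi: list[int]) -> int:
    """Count pod-node feasibility checks: one per (pod, node) pair."""
    checks = 0
    for request in pod_requests_mi:              # n pods
        for allocatable in node_allocatable_mi:  # m nodes
            checks += 1
            _fits = allocatable >= request       # the memory comparison itself
    return checks

# 100 pods requesting 256Mi across 5 nodes -> 500 checks: linear in the pod count.
print(count_checks([256] * 100, [4096] * 5))  # 500
```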
As the number of pods increases, the scheduler must check more memory requests against nodes.
| Input Size (n pods) | Approx. Operations (pod-node checks) |
|---|---|
| 10 | ~10 x number_of_nodes |
| 100 | ~100 x number_of_nodes |
| 1000 | ~1000 x number_of_nodes |
Pattern observation: The work grows linearly with the number of pods, as each pod's memory request is checked against nodes.
Time Complexity: O(n), treating the number of nodes as a constant (more precisely, O(n × m) for n pods and m nodes).
This means the scheduling time grows roughly in direct proportion to the number of pods requesting memory.
[X] Wrong: "Setting higher memory limits makes scheduling time grow exponentially."
[OK] Correct: The scheduler's checks grow linearly with the number of pods; limits are enforced at runtime and do not change the number of scheduling checks.
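A quick way to see this in the nested-loop model (a hypothetical simplification): raising every pod's limit changes nothing about the number of scheduling checks, because only the request is consulted.

```python
def scheduling_checks(pods: list[dict], node_allocatable_mi: list[int]) -> int:
    """Count checks; only each pod's request is consulted, never its limit."""
    checks = 0
    for pod in pods:
        for allocatable in node_allocatable_mi:
            checks += 1
            _fits = allocatable >= pod["request_mi"]  # limit_mi is ignored here
    return checks

low_limit  = [{"request_mi": 256, "limit_mi": 512}  for _ in range(50)]
high_limit = [{"request_mi": 256, "limit_mi": 4096} for _ in range(50)]
nodes = [4096] * 3

print(scheduling_checks(low_limit, nodes))   # 150
print(scheduling_checks(high_limit, nodes))  # 150: same count despite an 8x limit
```

The limit matters later, at runtime: a container exceeding its memory limit is OOM-killed, but that enforcement never enters the scheduler's loop.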
Understanding how resource requests affect scheduling helps you explain system behavior clearly and shows you grasp how Kubernetes manages resources efficiently.
"What if we added node selectors to pods? How would that change the time complexity of scheduling with memory requests?"