OOMKilled containers in Kubernetes - Time & Space Complexity
When Kubernetes kills containers due to out-of-memory (OOM) errors, it's important to understand how the system checks and reacts as workload size changes.
We want to see how the time to detect and handle OOMKilled containers grows as the number of containers increases.
Analyze the time complexity of the following Kubernetes event watcher snippet.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      limits:
        memory: "100Mi"
```
This pod runs a container with a 100Mi memory limit. If the container's memory usage exceeds that limit, the Linux kernel's OOM killer terminates its process, and Kubernetes reports the container's last state as OOMKilled.
Identify the loops, recursion, or traversals that repeat as the input grows.
- Primary operation: the kubelet (the Kubernetes node agent) periodically checks the memory usage of each container on its node.
- How many times: once per container per check interval.
As the number of containers grows, the system checks each container's memory usage one by one.
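This per-container scan can be sketched in a few lines of Python. The function name and the dict shape below are illustrative assumptions, not real kubelet code (the kubelet actually reads per-container cgroup memory statistics), but the loop structure is the point: one check per container per interval.

```python
def check_containers(containers):
    """Return names of containers whose memory usage exceeds their limit.

    Each container is modeled as a dict: {"name": str, "usage": int, "limit": int}
    (usage and limit in bytes). One check per container -> O(n) per interval.
    """
    oom_candidates = []
    for c in containers:  # n iterations: the loop that drives O(n)
        if c["usage"] > c["limit"]:
            oom_candidates.append(c["name"])
    return oom_candidates
```

Doubling the list of containers doubles the number of loop iterations, which is exactly the linear growth shown in the table below.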
| Input Size (n) | Approx. Operations |
|---|---|
| 10 containers | 10 memory checks |
| 100 containers | 100 memory checks |
| 1000 containers | 1000 memory checks |
Pattern observation: The number of checks grows directly with the number of containers.
Time Complexity: O(n)
This means the time to detect OOMKilled containers grows linearly with the number of containers running.
[X] Wrong: "Kubernetes checks all containers at once instantly, so time doesn't grow with more containers."
[OK] Correct: Each container's memory usage is checked individually in a loop, so more containers mean more checks and more time.
Understanding how Kubernetes monitors container resources helps you explain system behavior and scaling in real environments.
"What if Kubernetes used parallel checks for container memory usage? How would the time complexity change?"
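As a starting point for that question: with k parallel workers, total work is still O(n), but wall-clock time drops to roughly O(n/k), which is still O(n) when k is a fixed constant. A hypothetical sketch using a thread pool (not how the kubelet is actually implemented):

```python
from concurrent.futures import ThreadPoolExecutor

def check_containers_parallel(containers, workers=4):
    """Check all containers' memory usage with a pool of worker threads.

    Total work remains O(n); wall-clock time is roughly O(n / workers),
    ignoring scheduling overhead. With a constant worker count, the
    asymptotic time complexity is still O(n).
    """
    def over_limit(c):
        # Same per-container check as the sequential version.
        return c["name"] if c["usage"] > c["limit"] else None

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(over_limit, containers)  # preserves input order
    return [name for name in results if name is not None]
```

Parallelism changes the constant factor (and latency), not the asymptotic growth rate: the system still performs one check per container.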