Kubernetes · DevOps · ~10 mins

Memory requests and limits in Kubernetes - Step-by-Step Execution

Process Flow - Memory requests and limits
Pod Spec Created → Set Memory Request → Set Memory Limit → Scheduler Checks Requests → Pod Scheduled on Node → Container Runs → Memory Usage Monitored
If usage > limit → container is OOM-killed and restarted
If usage < request → container runs normally (requests do not cap usage)
The pod spec defines memory requests and limits. The scheduler uses requests to place pods. The runtime enforces limits during execution.
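The scheduler's request-based placement can be sketched as a filter over node free memory. This is a minimal illustration, not the real kube-scheduler algorithm; the node names and free-memory figures are hypothetical.

```python
def schedulable_nodes(free_memory_mi, request_mi):
    """Return the nodes whose free memory covers the pod's memory request."""
    return [node for node, free in free_memory_mi.items() if free >= request_mi]

# Hypothetical cluster: free memory per node, in Mi.
nodes = {"node-a": 150, "node-b": 300, "node-c": 1024}

# A pod requesting 200Mi cannot land on node-a (only 150Mi free).
print(schedulable_nodes(nodes, 200))  # ['node-b', 'node-c']
```

The limit plays no part here: scheduling looks only at requests, which is why a pod can later grow past its request on a node that never had 500Mi to spare.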
Execution Sample
YAML
apiVersion: v1
kind: Pod
metadata:
  name: memory-pod
spec:
  containers:
  - name: app
    image: nginx  # placeholder image; the image field is required in a Pod spec
    resources:
      requests:
        memory: "200Mi"
      limits:
        memory: "500Mi"
This pod requests 200Mi of memory and limits usage to 500Mi.
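Kubernetes rejects a spec whose request exceeds its limit. A minimal sketch of that validation (the `parse_mi` helper and inline dict are illustrative and handle only the `Mi` suffix used above; real quantities also allow `Ki`, `Gi`, `M`, `G`, and so on):

```python
def parse_mi(quantity: str) -> int:
    """Parse a memory quantity like '200Mi' into an integer number of MiB."""
    if not quantity.endswith("Mi"):
        raise ValueError(f"unsupported quantity in this sketch: {quantity}")
    return int(quantity[:-2])

resources = {"requests": {"memory": "200Mi"}, "limits": {"memory": "500Mi"}}
request = parse_mi(resources["requests"]["memory"])
limit = parse_mi(resources["limits"]["memory"])
assert request <= limit  # the API server rejects specs where request > limit
print(request, limit)  # 200 500
```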
Process Table
| Step | Action | Memory Request | Memory Limit | Scheduler Decision | Container State |
|------|--------|----------------|--------------|--------------------|-----------------|
| 1 | Pod spec created | 200Mi | 500Mi | Pending scheduling | Not running |
| 2 | Scheduler checks node capacity | 200Mi | 500Mi | Node with ≥200Mi free selected | Pending |
| 3 | Pod scheduled on node | 200Mi | 500Mi | Scheduled | Starting container |
| 4 | Container starts running | 200Mi | 500Mi | Scheduled | Running, memory usage 150Mi |
| 5 | Memory usage rises to 450Mi | 200Mi | 500Mi | Scheduled | Running, memory usage 450Mi |
| 6 | Memory usage exceeds 500Mi | 200Mi | 500Mi | Scheduled | Container killed by OOM |
| 7 | Container restarted | 200Mi | 500Mi | Scheduled | Running, memory usage 180Mi |
💡 The container is killed when memory usage exceeds the 500Mi limit
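The enforcement in steps 4 through 7 can be sketched as a loop over sampled usage: the container keeps running while usage stays at or below the limit, and is OOM-killed at the first sample above it. This is an illustrative model, not the kernel's actual OOM-killer logic, and the sample values mirror the table.

```python
def watch_container(usage_samples_mi, limit_mi):
    """Return ('oom_killed', usage) at the first sample above the limit,
    otherwise ('running', last_sample)."""
    for usage in usage_samples_mi:
        if usage > limit_mi:
            return ("oom_killed", usage)
    return ("running", usage_samples_mi[-1])

# First run mirrors steps 4-6: 150Mi, 450Mi, then a spike past the 500Mi limit.
print(watch_container([150, 450, 501], limit_mi=500))  # ('oom_killed', 501)

# After the restart (step 7) usage stays at 180Mi, so the container keeps running.
print(watch_container([180], limit_mi=500))            # ('running', 180)
```

Note that the 200Mi request never appears in this loop: only the limit is enforced at runtime, which is exactly why step 5 (450Mi, well above the request) is still a healthy running state.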
Status Tracker
| Variable | Start | After Step 4 | After Step 5 | After Step 6 | After Step 7 |
|----------|-------|--------------|--------------|--------------|--------------|
| Memory Usage | 0Mi | 150Mi | 450Mi | Killed (OOM) | 180Mi |
| Container State | Not running | Running | Running | Killed | Running |
Key Moments - 3 Insights
Why does the container get killed when it requested only 200Mi but was allowed to use more?
The 200Mi request is used only for scheduling; the container may use up to the 500Mi limit. Once usage exceeds that limit, the container is OOM-killed, as shown in step 6.
Can the scheduler place the pod on a node with only 150Mi free memory?
No, because the pod requests 200Mi of memory. The scheduler must find a node with at least 200Mi free (step 2).
What happens if the container uses less memory than requested?
The container runs normally. Requests are the minimum guaranteed memory used for scheduling, not a cap on usage (step 4).
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the container state at step 5, when memory usage is 450Mi?
A. Pending
B. Running
C. Killed
D. Not running
💡 Hint
Check the 'Container State' column at step 5 in the execution table.
At which step does the container get killed due to exceeding memory limit?
A. Step 4
B. Step 5
C. Step 6
D. Step 7
💡 Hint
Look for 'Container killed by OOM' in the 'Container State' column.
If the memory request was increased to 400Mi (still below the 500Mi limit, since requests must not exceed limits), what would happen during scheduling?
A. The scheduler would look for a node with at least 400Mi free memory
B. The scheduler would ignore the request and schedule anyway
C. The container would be killed immediately
D. The memory limit would automatically increase
💡 Hint
Memory requests affect scheduler decisions as shown in step 2.
Concept Snapshot
Memory requests reserve minimum memory for scheduling.
Memory limits cap maximum memory usage.
The scheduler uses requests to place pods.
A container is killed if its usage exceeds the limit.
Requests must always be ≤ limits.
Requests do not cap usage; limits do.
Full Transcript
In Kubernetes, memory requests and limits control how much memory a container reserves and how much it can use. The pod spec sets both values. The scheduler uses the request to find a node with enough free memory, while the container may use memory up to the limit. If usage goes beyond the limit, the container is killed by the system. This trace shows a pod requesting 200Mi with a 500Mi limit. The scheduler places it on a node with enough free memory. The container runs normally until usage exceeds 500Mi; it is then killed and restarted. Requests ensure scheduling success; limits protect node stability.