
Resource requests and limits in Kubernetes - Step-by-Step Execution

Process Flow - Resource requests and limits
Pod Spec Created
Define Resource Requests
Define Resource Limits
Pod Scheduled on Node
Kubelet Enforces Limits
Container Runs with Guaranteed Resources
Monitor Usage and Adjust if Needed
This flow shows how Kubernetes uses resource requests to schedule pods onto nodes with enough capacity, and resource limits to cap the CPU and memory a container can consume at runtime.
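The scheduling step in the flow above can be sketched as a simple fit check: the scheduler compares the pod's requests against each node's remaining allocatable capacity and picks a node that fits. A minimal illustrative sketch (the function and field names are hypothetical, not the real scheduler API; units are MiB and millicores):

```python
# Minimal sketch of the scheduler's fit check: a pod is placed on the
# first node whose free capacity (allocatable minus already-requested)
# covers the pod's requests. Units: memory in MiB, CPU in millicores.
def fits(node, pod_requests):
    free_mem = node["allocatable_mem"] - node["requested_mem"]
    free_cpu = node["allocatable_cpu"] - node["requested_cpu"]
    return pod_requests["mem"] <= free_mem and pod_requests["cpu"] <= free_cpu

def schedule(nodes, pod_requests):
    for name, node in nodes.items():
        if fits(node, pod_requests):
            return name  # node selected: pod leaves Pending
    return None          # no node fits: pod stays Pending

nodes = {
    "node-a": {"allocatable_mem": 100,  "allocatable_cpu": 300,
               "requested_mem": 80,    "requested_cpu": 200},   # too full
    "node-b": {"allocatable_mem": 4096, "allocatable_cpu": 2000,
               "requested_mem": 1024,  "requested_cpu": 500},
}
print(schedule(nodes, {"mem": 64, "cpu": 250}))  # node-b
```

If no node has enough free capacity for the requests, `schedule` returns `None`, which corresponds to the pod remaining in Pending.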
Execution Sample
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
This pod requests 64Mi memory and 250m CPU, with limits set to 128Mi memory and 500m CPU.
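The quantities in the manifest use Kubernetes suffix notation: `Mi` is a power-of-two memory unit (mebibytes) and `m` means millicores (thousandths of a CPU). A minimal sketch of converting them to base units, covering only the suffixes used in this example rather than the full Kubernetes quantity grammar:

```python
# Convert the example's Kubernetes resource quantities into base units:
# memory strings like "64Mi" to bytes, CPU strings like "250m" to cores.
# Only the suffixes used in this example are handled.
MEM_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_memory(q):
    for suffix, factor in MEM_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # no suffix: plain bytes

def parse_cpu(q):
    if q.endswith("m"):                # millicores
        return int(q[:-1]) / 1000
    return float(q)                    # whole cores

print(parse_memory("64Mi"))  # 67108864 bytes
print(parse_cpu("250m"))     # 0.25 cores
```

So the example pod is guaranteed a quarter of a CPU core and 64 MiB of memory, and capped at half a core and 128 MiB.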
Process Table
Step | Action | Resource Requests | Resource Limits | Scheduler Decision | Kubelet Enforcement
1 | Pod Spec Created | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Pending | Not applied yet
2 | Scheduler Checks Node Capacity | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Node selected with enough resources | Not applied yet
3 | Pod Scheduled on Node | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Scheduled | Not applied yet
4 | Kubelet Starts Container | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Running | Limits enforced: max 128Mi memory, 500m CPU
5 | Container Runs | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Running | CPU throttled if >500m; OOM kill if >128Mi memory
6 | Resource Usage Monitored | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Running | Adjust requests/limits if needed
7 | Pod Deleted or Completed | memory=64Mi, cpu=250m | memory=128Mi, cpu=500m | Terminated | Resources freed
💡 Pod lifecycle ends when container stops or pod is deleted
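The enforcement behaviour in steps 4–5 follows a key asymmetry: CPU is compressible (usage above the limit is throttled down), while memory is not (usage above the limit gets the container OOM-killed). A small illustrative sketch of that decision, with the example's limits as assumed defaults; in a real cluster this is done by the kernel via cgroups that the kubelet and container runtime configure:

```python
# Sketch of limit enforcement in steps 4-5: CPU above the limit is
# throttled back to the limit; memory above the limit triggers an OOM
# kill. Units: CPU in millicores, memory in MiB.
def enforce(usage_cpu_m, usage_mem_mi, limit_cpu_m=500, limit_mem_mi=128):
    if usage_mem_mi > limit_mem_mi:
        return "OOMKilled"  # memory is not compressible: container is killed
    effective_cpu = min(usage_cpu_m, limit_cpu_m)  # CPU is throttled, not killed
    return f"running (cpu={effective_cpu}m)"

print(enforce(700, 100))  # running (cpu=500m)  -> CPU throttled to the limit
print(enforce(300, 200))  # OOMKilled           -> memory limit exceeded
```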
Status Tracker
Variable | Start | After Step 2 | After Step 4 | After Step 5 | Final
memory_request | undefined | 64Mi | 64Mi | 64Mi | 64Mi
cpu_request | undefined | 250m | 250m | 250m | 250m
memory_limit | undefined | 128Mi | 128Mi | 128Mi | 128Mi
cpu_limit | undefined | 500m | 500m | 500m | 500m
pod_status | Pending | Pending | Running | Running | Terminated
Key Moments - 3 Insights
Why does the pod stay in Pending state even if the resource requests are low?
The pod stays Pending until the scheduler finds a node with enough available resources to meet the requests, as shown in step 2 of the Process Table.
What happens if the container tries to use more CPU than its limit?
The container's CPU usage is throttled at the limit (500m in this example), so it cannot exceed it; unlike memory, exceeding the CPU limit does not kill the container, as shown in step 5.
Can a container use less than its requested resources?
Yes. Requests are the guaranteed minimum reserved for scheduling, but a container is free to use less; requests only influence where the pod is placed, while limits are what get enforced at runtime.
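That last insight has a scheduling consequence worth making concrete: capacity accounting uses requests, not actual usage. With the example pod's requests, a node with (say) 2000m CPU and 4096Mi memory allocatable can hold at most eight such pods, even if each one sits idle. The node sizes here are assumed for illustration:

```python
# Capacity accounting uses requests, not actual usage: allocatable
# capacity divided by per-pod requests bounds how many copies of the
# example pod fit on a node, even if each pod idles.
node_cpu_m, node_mem_mi = 2000, 4096   # assumed node allocatable capacity
req_cpu_m, req_mem_mi = 250, 64        # the example pod's requests

max_pods = min(node_cpu_m // req_cpu_m, node_mem_mi // req_mem_mi)
print(max_pods)  # 8 (CPU is the binding constraint: 2000m / 250m)
```

This is why oversized requests waste cluster capacity: the reserved resources are unavailable to other pods whether or not they are used.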
Visual Quiz - 3 Questions
Test your understanding
Look at the Process Table at step 3. What is the pod's status?
A) Scheduled
B) Running
C) Pending
D) Terminated
💡 Hint
Check the 'Scheduler Decision' column at step 3 in the Process Table.
At which step does the kubelet start enforcing resource limits?
A) Step 2
B) Step 3
C) Step 4
D) Step 5
💡 Hint
Look at the 'Kubelet Enforcement' column to see when limits are applied.
If the memory limit were lowered to 64Mi, what would happen at step 5?
A) Container could use up to 128Mi memory
B) Container could be OOM killed if it uses more than 64Mi memory
C) Scheduler would reject the pod
D) CPU limits would be ignored
💡 Hint
Refer to step 5's 'Kubelet Enforcement' about memory limits and OOM kill.
Concept Snapshot
Resource requests specify minimum CPU and memory guaranteed for a pod.
Resource limits specify maximum CPU and memory a pod can use.
Scheduler uses requests to place pods on nodes with enough resources.
Kubelet enforces limits during container runtime.
Exceeding limits can cause throttling (CPU) or termination (memory).
Requests and limits help ensure stable cluster resource usage.
Full Transcript
In Kubernetes, resource requests and limits control how much CPU and memory a pod can use. The pod spec defines requests as the minimum resources needed, and limits as the maximum allowed. When a pod is created, the scheduler looks for a node with enough free resources to meet the requests. Once scheduled, the kubelet enforces the limits during container runtime. If a container tries to use more CPU than its limit, it is throttled. If it uses more memory than its limit, it can be terminated by the system. This mechanism helps keep the cluster stable and fair for all workloads.