
OOMKilled containers in Kubernetes - Commands & Configuration

Introduction
Sometimes a container in Kubernetes stops working because it uses too much memory. Kubernetes reports this as OOMKilled (Out Of Memory Killed): the kernel terminated the container's process to protect the node. Understanding why this happens helps keep your apps running smoothly.
  • When a container suddenly stops and shows OOMKilled status in Kubernetes.
  • When you want to prevent your app from using too much memory and crashing.
  • When you need to check if your memory limits are set correctly for your containers.
  • When troubleshooting why a pod restarts frequently without clear errors.
  • When optimizing resource use to avoid wasting server memory.
Config File - pod-memory-limit.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"

This YAML file creates a pod named memory-demo with one container.

The resources section sets memory limits and requests:

  • requests.memory: The amount of memory Kubernetes reserves for the container.
  • limits.memory: The maximum memory the container can use before it is killed.

If the container uses more than 100Mi of memory, Kubernetes will kill it with OOMKilled.
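The Mi suffix denotes mebibytes (powers of 1024), not megabytes. A quick sketch makes the actual byte values concrete; `parse_quantity` here is a hypothetical helper for illustration, not part of any kubectl tooling:

```python
# Sketch: convert Kubernetes memory quantities like "100Mi" to bytes.
# parse_quantity is a hypothetical helper, not a real Kubernetes API.
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "M": 10**6, "G": 10**9}

def parse_quantity(q: str) -> int:
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # no suffix means plain bytes

print(parse_quantity("100Mi"))  # 104857600 bytes -- the hard ceiling (limit)
print(parse_quantity("50Mi"))   # 52428800 bytes -- the scheduler reservation (request)
```

So the container is guaranteed about 52 MB and killed if it crosses about 105 MB.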

Commands
This command creates the pod with memory limits set. It tells Kubernetes to start the container with the specified memory rules.
Terminal
kubectl apply -f pod-memory-limit.yaml
Expected Output
pod/memory-demo created
Check the status of the pod to see if it is running or if it was killed due to memory issues.
Terminal
kubectl get pods
Expected Output
NAME          READY   STATUS    RESTARTS   AGE
memory-demo   1/1     Running   0          10s
Run a command inside the container that allocates more memory than the limit, triggering the OOM killer. Note that the stock busybox image does not ship the stress tool, so this step assumes an image that includes it (the official Kubernetes docs use polinux/stress for this kind of demo).
Terminal
kubectl exec memory-demo -- sh -c "stress --vm 1 --vm-bytes 150M --vm-hang 0"
Expected Output
command terminated with exit code 137 (137 = 128 + SIGKILL, the signal the OOM killer sends)
Check the reason why the container was terminated. It should show 'OOMKilled' if the container was killed due to memory overuse.
Terminal
kubectl get pod memory-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
Expected Output
OOMKilled
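The jsonpath expression is just a path lookup into the pod object. A sketch of the same lookup in Python makes the structure explicit; the pod dict below is illustrative sample data, not real cluster output:

```python
# Sketch: the same lookup the jsonpath expression performs, applied to a
# parsed pod object. The data below is illustrative, not real output.
pod = {
    "status": {
        "containerStatuses": [
            {
                "name": "memory-demo-container",
                "lastState": {
                    "terminated": {"reason": "OOMKilled", "exitCode": 137}
                },
            }
        ]
    }
}

# Equivalent of {.status.containerStatuses[0].lastState.terminated.reason}
reason = pod["status"]["containerStatuses"][0]["lastState"]["terminated"]["reason"]
print(reason)  # OOMKilled
```

The same path works against `kubectl get pod memory-demo -o json` piped through any JSON tool.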
Get detailed information about the pod, including events that show if the container was killed because it used too much memory.
Terminal
kubectl describe pod memory-demo
Expected Output
Name:         memory-demo
Namespace:    default
Containers:
  memory-demo-container:
    Container ID:   docker://abcdef123456
    Image:          busybox
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
    Ready:          False
    Restart Count:  3
Events:
  Type     Reason     Age   From     Message
  ----     ------     ----  ----     -------
  Warning  OOMKilled  2m    kubelet  Container memory limit exceeded
Key Concept

If a container uses more memory than its limit, Kubernetes kills it with OOMKilled to protect the system.

Common Mistakes
Not setting memory limits on containers.
Without limits, containers can use unlimited memory, causing the node to become unstable or crash.
Always set memory requests and limits in your pod specs to control resource use.
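One way to guard a whole namespace against this mistake is a LimitRange, which fills in default memory requests and limits for any container that omits them. A minimal sketch (the name mem-defaults and the values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults
spec:
  limits:
  - type: Container
    default:
      memory: "100Mi"        # applied as the limit when a container sets none
    defaultRequest:
      memory: "50Mi"         # applied as the request when a container sets none

Apply it with kubectl apply -f, and new pods in that namespace can no longer run with unbounded memory.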
Setting memory limits too low for the app's needs.
The container will be killed frequently with OOMKilled, causing downtime.
Monitor your app's actual memory use and set limits high enough to avoid repeated kills but low enough to protect the node.
Ignoring OOMKilled status and not checking pod events.
You miss the root cause of crashes and cannot fix memory issues.
Use kubectl describe and check container termination reasons to diagnose OOMKilled.
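The per-pod check above can be looped over an entire pod list (the shape `kubectl get pods -o json` returns) to spot OOMKilled containers in bulk. A hedged sketch with illustrative sample data; `find_oomkilled` is a hypothetical helper, not a kubectl feature:

```python
# Sketch: scan a pod list (shape of `kubectl get pods -o json`) for
# containers whose last termination reason was OOMKilled.
# The sample data below is illustrative, not captured from a real cluster.
def find_oomkilled(pod_list: dict) -> list[tuple[str, str]]:
    hits = []
    for pod in pod_list.get("items", []):
        pod_name = pod["metadata"]["name"]
        for cs in pod.get("status", {}).get("containerStatuses", []):
            terminated = cs.get("lastState", {}).get("terminated") or {}
            if terminated.get("reason") == "OOMKilled":
                hits.append((pod_name, cs["name"]))
    return hits

sample = {
    "items": [
        {
            "metadata": {"name": "memory-demo"},
            "status": {
                "containerStatuses": [
                    {
                        "name": "memory-demo-container",
                        "lastState": {"terminated": {"reason": "OOMKilled"}},
                    }
                ]
            },
        }
    ]
}
print(find_oomkilled(sample))  # [('memory-demo', 'memory-demo-container')]
```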
Summary
Set memory requests and limits in pod specs to control container memory use.
Use kubectl commands to create pods, check status, and diagnose OOMKilled events.
If a container exceeds its memory limit, Kubernetes kills it to protect the system.