
How to Fix OOMKilled Error in Kubernetes Pods

The OOMKilled error in Kubernetes happens when a pod uses more memory than its limit. To fix it, increase the pod's memory limits and requests in its resource configuration to match its actual usage.
🔍

Why This Happens

The OOMKilled error occurs when a container inside a Kubernetes pod uses more memory than the limit set in its resource configuration. Kubernetes then stops the container to protect the node from running out of memory. This usually happens if the memory limits are too low or the application has a memory leak.

yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
Output
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
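To confirm that a container was OOM-killed, inspect the pod's last state with kubectl. A quick check, assuming the pod is named example-pod as in the manifest above:

```shell
# Show events and container state; look for "Reason: OOMKilled" and exit code 137
kubectl describe pod example-pod

# Print just the termination reason of the first container
kubectl get pod example-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```

Exit code 137 means the process was killed with SIGKILL (128 + 9), which is how the kernel's OOM killer terminates the container.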
🔧

The Fix

To fix OOMKilled, raise the memory request and limit in the pod's resource settings to values that cover the application's actual peak usage. Note that if the container keeps growing because of a memory leak, raising the limit only delays the next kill, so fix the leak in the application as well.

yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    resources:
      limits:
        memory: "500Mi"
      requests:
        memory: "300Mi"
Output
Pod runs successfully without OOMKilled error
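Container resources in a plain Pod spec are generally immutable, so apply the new limits by recreating the pod. A sketch, assuming the manifest is saved as example-pod.yaml (an assumed filename):

```shell
# Recreate the pod with the new memory limits
kubectl delete pod example-pod
kubectl apply -f example-pod.yaml   # example-pod.yaml is an assumed filename

# Watch the pod come up and stay Running
kubectl get pod example-pod --watch
```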
🛡️

Prevention

To avoid OOMKilled errors in the future, monitor your application's memory usage with tools like kubectl top or a monitoring dashboard, and set resource requests and limits accordingly. Set the memory limit slightly above observed peak usage, and test your app under realistic load before settling on values.
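For example, kubectl top (which requires the metrics-server add-on in the cluster) reports current usage that you can compare against the configured limit:

```shell
# Requires metrics-server to be installed in the cluster.
# Prints columns: NAME, CPU(cores), MEMORY(bytes)
kubectl top pod example-pod
```

If the reported memory sits close to the limit under normal load, the limit is too tight.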

Implement resource quotas and use liveness probes to restart unhealthy pods. Regularly review and update resource settings as your app evolves.
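As a sketch of the quota idea, a namespace-level ResourceQuota caps total memory so one runaway workload cannot starve the rest. The names and values here are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota        # illustrative name
  namespace: dev         # illustrative namespace
spec:
  hard:
    requests.memory: 1Gi   # total memory all pods in the namespace may request
    limits.memory: 2Gi     # total memory limit across all pods in the namespace
```

With this quota in place, Kubernetes rejects new pods in the namespace whose requests or limits would push the totals past these caps.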

⚠️

Related Errors

Other errors related to resource limits include:

  • CrashLoopBackOff: Often caused by repeated OOMKilled events.
  • Evicted: Pod removed due to node resource pressure.
  • FailedScheduling: Pod stays Pending if its resource requests exceed what any node can offer.

Fixes usually involve adjusting resource requests and limits or scaling your cluster.

Key Takeaways

OOMKilled means your pod used more memory than its limit and was stopped.
Increase memory limits and requests in pod specs to fix OOMKilled errors.
Monitor memory usage regularly to set accurate resource limits.
Use Kubernetes tools like kubectl top and resource quotas for prevention.
Related errors often point to resource misconfiguration or node pressure.