
Why resource management matters in Kubernetes

Introduction
When you run many applications on the same machines, they can slow each other down or crash if one consumes too much memory or CPU. Resource management keeps each app running smoothly by setting bounds on how much CPU and memory each container can use.
When you want to prevent one app from using all the CPU and slowing down others.
When you need to make sure an app does not crash because it runs out of memory.
When running multiple apps on the same server and you want to share resources fairly.
When you want to avoid unexpected costs by limiting resource use in cloud environments.
When you want to improve the stability and reliability of your applications.
Config File - pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo-pod
spec:
  containers:
  - name: demo-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

This file creates a pod with one container running nginx.

The requests section tells the Kubernetes scheduler how much CPU and memory to reserve for the container; the pod will only be placed on a node with at least that much available.

The limits section sets the maximum CPU and memory the container can use.

This helps Kubernetes schedule the pod on a node that has enough resources and prevents the container from using too much.
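The units in the file take a moment to get used to: "250m" means 250 millicores (a quarter of one CPU core), and "64Mi" means 64 mebibytes. The following Python sketch (an illustration, not Kubernetes code) shows how these quantity strings map to plain numbers:

```python
# Illustrative helpers for Kubernetes resource quantities.
# "250m" = 250 millicores = 0.25 CPU; "64Mi" = 64 mebibytes.

def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity like '500m' or '2' to a number of cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity like '128Mi' or '1Gi' to bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)  # plain bytes if no suffix

print(parse_cpu("250m"))     # 0.25 cores
print(parse_memory("64Mi"))  # 67108864 bytes
```

So the demo pod asks for a quarter of a core and 64 MiB up front, and may burst up to half a core and 128 MiB.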

Commands
This command creates the pod with resource requests and limits defined. Kubernetes will schedule it on a node that can provide the requested resources.
Terminal
kubectl apply -f pod.yaml
Expected Output
pod/resource-demo-pod created
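Behind the scenes, the scheduler compares the pod's requests against what each node can still provide. This simplified Python sketch (an illustration of the idea, not the real scheduler) shows the fit check, using made-up node numbers:

```python
# Simplified sketch of the scheduler's fit check: a pod fits on a node
# if the node's allocatable resources, minus the requests of pods already
# placed there, still cover the new pod's requests.

def fits(node_allocatable, placed_requests, pod_request):
    used_cpu = sum(r["cpu"] for r in placed_requests)
    used_mem = sum(r["memory_mi"] for r in placed_requests)
    return (used_cpu + pod_request["cpu"] <= node_allocatable["cpu"]
            and used_mem + pod_request["memory_mi"] <= node_allocatable["memory_mi"])

node = {"cpu": 2.0, "memory_mi": 2048}      # hypothetical: 2 cores, 2 GiB allocatable
placed = [{"cpu": 1.5, "memory_mi": 1024}]  # requests already placed on the node
demo_pod = {"cpu": 0.25, "memory_mi": 64}   # the requests from pod.yaml

print(fits(node, placed, demo_pod))  # True: 0.25 core and 64Mi still fit
```

If no node can cover the requests, the pod stays Pending instead of being scheduled.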
This command shows the status and node where the pod is running, confirming it was scheduled successfully with resource management.
Terminal
kubectl get pods resource-demo-pod -o wide
Expected Output
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
resource-demo-pod   1/1     Running   0          10s   10.244.1.5   worker-node-1   <none>           <none>
-o wide - Shows extra details like node name and IP address
This command shows detailed information about the pod, including the resource requests and limits set for the container.
Terminal
kubectl describe pod resource-demo-pod
Expected Output
Name:         resource-demo-pod
Namespace:    default
Node:         worker-node-1/192.168.1.10
Start Time:   Thu, 01 Jun 2023 10:00:00 +0000
Containers:
  demo-container:
    Image:  nginx
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     250m
      memory:  64Mi
    State:          Running
    Ready:          True
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Key Concept

If you remember nothing else from this pattern, remember: setting resource requests and limits helps keep your apps stable and your cluster healthy.

Common Mistakes
Not setting resource requests and limits in pod specs.
Without requests, the scheduler cannot place pods based on their actual needs; without limits, one pod can starve its neighbors of CPU and memory, destabilizing the whole node.
Always define resource requests and limits for your containers to ensure fair resource sharing and stability.
Setting requests higher than limits.
This is invalid and Kubernetes will reject the pod because requests must be less than or equal to limits.
Make sure resource requests are equal to or less than the limits.
Setting very low limits that cause the container to be killed frequently.
If the memory limit is too low, the container is OOM-killed and restarted repeatedly; if the CPU limit is too low, the container is throttled and slows down. Either way, your service degrades.
Set realistic limits based on your app's needs and monitor usage to adjust.
Summary
Define resource requests and limits in your pod configuration to control CPU and memory usage.
Apply the pod configuration with kubectl apply and verify the pod is running with kubectl get pods.
Use kubectl describe pod to check the resource settings and pod status for troubleshooting.