
Pod affinity and anti-affinity in Kubernetes - Commands & Configuration

Introduction
Sometimes you want certain pods to run close together or far apart on your cluster nodes. Pod affinity and anti-affinity help control where pods are placed based on other pods' locations to improve performance or reliability.
  • When you want two related services to run on the same node for faster communication.
  • When you want to spread replicas of a service across different nodes so they don't all go down at once.
  • When you want to avoid running resource-heavy pods on the same node to prevent overload.
  • When you want pods that share data scheduled near each other to reduce network latency.
  • When you want to keep pods of the same type apart to improve fault tolerance.
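The replica-spreading case above is most often expressed on a Deployment rather than a single Pod. A minimal sketch (the name web and its labels are placeholders, not from the original manifest): each replica prefers a node that does not already run another app: web pod.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # Soft rule: the scheduler scores nodes lower if they already
          # run an app: web pod, spreading replicas across hostnames.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: nginx:1.23.3
```

Because the rule is preferred rather than required, a three-node cluster gets one replica per node when possible, but all replicas still schedule even on a single node.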
Config File - pod-affinity.yaml
pod-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - my-app
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: my-app-container
    image: nginx:1.23.3
    ports:
    - containerPort: 80

This file defines a pod named my-app-pod running an Nginx container.

The affinity section has two parts:

  • podAffinity: Requires this pod to be scheduled on the same node as other pods labeled app: my-app.
  • podAntiAffinity: Prefers to avoid scheduling this pod in the same zone as other pods labeled app: my-app, spreading pods across zones.

Note that because the required affinity pins the pod to the same node (and therefore the same zone) as app: my-app pods, the zone-level anti-affinity preference cannot be satisfied at the same time; being a soft preference, it is simply ignored, so this manifest mainly illustrates the syntax of both rule types.

topologyKey controls the scope of the rule: kubernetes.io/hostname groups pods per node, while topology.kubernetes.io/zone groups them per availability zone.
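Required pod affinity is evaluated against pods that already exist, so in practice a matching pod should be running before this manifest is applied. A minimal companion pod might look like this (the name target-pod is a placeholder, not from the original):

```yaml
# Hypothetical target pod: carries the app: my-app label that the
# podAffinity labelSelector in pod-affinity.yaml matches, so my-app-pod
# is required to land on the same node as this pod.
apiVersion: v1
kind: Pod
metadata:
  name: target-pod
  labels:
    app: my-app
spec:
  containers:
  - name: target
    image: nginx:1.23.3
```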

Commands
This command creates the pod with the specified affinity and anti-affinity rules to control where it runs.
Terminal
kubectl apply -f pod-affinity.yaml
Expected Output
pod/my-app-pod created
This shows the pods with their node assignments so you can verify if the pod affinity and anti-affinity rules worked.
Terminal
kubectl get pods -o wide
Expected Output
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
my-app-pod   1/1     Running   0          10s   10.244.1.5   worker-node1   <none>           <none>
-o wide - Shows extra details including the node where each pod is running
This command shows detailed information about the pod, including the affinity rules and scheduling decisions.
Terminal
kubectl describe pod my-app-pod
Expected Output
Name:         my-app-pod
Namespace:    default
Node:         worker-node1/192.168.1.10
Start Time:   Thu, 01 Jun 2023 12:00:00 +0000
Labels:       app=my-app
Annotations:  <none>
Status:       Running
IP:           10.244.1.5
Containers:
  my-app-container:
    Image:          nginx:1.23.3
    Port:           80/TCP
    State:          Running
    Ready:          True
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/my-app-pod to worker-node1
  Normal  Pulled     9s    kubelet            Container image "nginx:1.23.3" already present on machine
  Normal  Created    9s    kubelet            Created container my-app-container
  Normal  Started    9s    kubelet            Started container my-app-container
Key Concept

If you remember nothing else from this pattern, remember: pod affinity and anti-affinity let you control pod placement based on other pods to improve performance and reliability.

Common Mistakes
Using pod affinity without matching labels on other pods
If no running pod matches the label selector, a required affinity rule cannot be satisfied and the pod stays Pending, while a preferred rule is silently ignored and the pod may schedule anywhere.
Ensure the label selectors in affinity rules match labels on existing pods you want to be near or avoid.
Confusing requiredDuringScheduling with preferredDuringScheduling
Required rules must be met or the pod won't schedule at all; preferred rules are soft preferences the scheduler may ignore when they cannot be satisfied.
Use requiredDuringScheduling for strict placement needs and preferredDuringScheduling for flexible preferences.
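The difference also shows up in the YAML shape. Two alternative fragments (a sketch, not one combined document): required terms take a labelSelector and topologyKey directly, while preferred terms wrap them in a podAffinityTerm alongside a weight.

```yaml
# Hard rule: the pod stays Pending unless some node already
# runs a pod labeled app: my-app.
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: my-app
    topologyKey: kubernetes.io/hostname
```

```yaml
# Soft rule: the scheduler scores candidate nodes by weight (1-100)
# but still places the pod somewhere if no node satisfies it.
podAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname
```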
Using the wrong topologyKey, such as a label that doesn't exist on your nodes
The scheduler cannot group nodes into topology domains, so the affinity or anti-affinity rule fails or behaves unexpectedly.
Use labels that actually exist on your nodes, such as kubernetes.io/hostname for per-node rules or topology.kubernetes.io/zone for per-zone rules (the older failure-domain.beta.kubernetes.io/zone label has been deprecated since Kubernetes 1.17).
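As a quick reference, a sketch of an anti-affinity fragment using the current well-known labels (the choice of zone-level spreading here is illustrative):

```yaml
# Common topology domains and the well-known node labels that define them:
#   per node:   kubernetes.io/hostname
#   per zone:   topology.kubernetes.io/zone
#   per region: topology.kubernetes.io/region
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: my-app
        topologyKey: topology.kubernetes.io/zone  # stable zone label
```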
Summary
Create a pod manifest with affinity and anti-affinity rules to control pod placement.
Apply the manifest with kubectl apply to create the pod with these rules.
Use kubectl get pods -o wide and kubectl describe pod to verify pod placement and affinity details.