
Node affinity and anti-affinity in Kubernetes - Commands & Configuration

Introduction
Sometimes you want your app to run on specific nodes in your cluster, or to avoid certain ones. Node affinity and anti-affinity let you tell Kubernetes where to place your pods based on labels on the nodes.

Common use cases:

  • When you want your app to run only on nodes with special hardware like GPUs.
  • When you want to keep your app pods away from nodes that run other critical apps, to avoid resource conflicts.
  • When you want to spread your app pods across different nodes for better availability.
  • When you want to schedule pods on nodes in a specific data center or zone.
  • When you want to avoid placing multiple replicas of the same app on the same node, to reduce risk.
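Affinity rules can only match labels that actually exist on your nodes, so labeling comes first. As a sketch (the node name node-ssd-01 is a placeholder for this example; substitute a name from kubectl get nodes), you could apply the labels used below like this:

```shell
# Label a node so that pods requiring disktype=ssd can schedule onto it
kubectl label nodes node-ssd-01 disktype=ssd

# Add the zone label used by the preferred affinity rule
kubectl label nodes node-ssd-01 zone=us-east-1a

# Verify the labels were applied
kubectl get nodes --show-labels
```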
Config File - pod-node-affinity.yaml
pod-node-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.23
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - us-east-1a
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - example-app
        topologyKey: kubernetes.io/hostname

This file creates a pod named example-pod running nginx.

The nodeAffinity section has two parts:

  • requiredDuringSchedulingIgnoredDuringExecution: The pod must run on nodes labeled disktype=ssd.
  • preferredDuringSchedulingIgnoredDuringExecution: It prefers nodes in the us-east-1a zone but can run elsewhere if needed.

The podAntiAffinity section prevents this pod from being scheduled onto a node that already runs a pod labeled app=example-app. When your replicas themselves carry the app=example-app label (for example via a Deployment's pod template), this rule spreads them across nodes.
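Required pod anti-affinity is strict: if no node satisfies it, pods stay Pending. A softer alternative, sketched here with the same app=example-app label, is preferredDuringSchedulingIgnoredDuringExecution, which lets the scheduler co-locate pods when it has no other choice:

```yaml
# Soft anti-affinity: the scheduler tries to avoid nodes that already run
# pods labeled app=example-app, but will still schedule there if necessary.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - example-app
        topologyKey: kubernetes.io/hostname
```

Note the extra nesting: preferred rules wrap the selector in a podAffinityTerm with a weight (1–100) that the scheduler uses to score candidate nodes.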

Commands
This command creates the pod with the node affinity and anti-affinity rules defined in the YAML file.
Terminal
kubectl apply -f pod-node-affinity.yaml
Expected Output
pod/example-pod created
This command shows the pods with details including which node they are running on, so you can verify the pod placement.
Terminal
kubectl get pods -o wide
Expected Output
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
example-pod   1/1     Running   0          10s   10.244.1.5   node-ssd-01   <none>           <none>
-o wide - Shows extra details including the node name where the pod is running
This command shows detailed information about the pod, including the node affinity and anti-affinity rules applied and scheduling decisions.
Terminal
kubectl describe pod example-pod
Expected Output
Name:         example-pod
Namespace:    default
Node:         node-ssd-01/192.168.1.10
Start Time:   Thu, 01 Jun 2023 12:00:00 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.1.5
Containers:
  nginx:
    Image:          nginx:1.23
    State:          Running
    Ready:          True
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/example-pod to node-ssd-01
Key Concept

If you remember nothing else from this pattern, remember: node affinity lets you choose nodes for your pods, and anti-affinity helps you avoid placing pods together on the same node.

Common Mistakes
Using node labels that do not exist on any node
The pod will stay pending because Kubernetes cannot find a node matching the affinity rules.
Check node labels with 'kubectl get nodes --show-labels' and use existing labels in your affinity rules.
Mixing required and preferred affinity without understanding their difference
Required rules must be met or the pod won't schedule; preferred rules are just preferences and may be ignored.
Use requiredDuringSchedulingIgnoredDuringExecution for must-have conditions and preferredDuringSchedulingIgnoredDuringExecution for nice-to-have conditions.
Not specifying topologyKey correctly in pod anti-affinity
Without a proper topologyKey, Kubernetes cannot spread pods as intended, leading to pods possibly running on the same node.
Use a valid topologyKey like 'kubernetes.io/hostname' to spread pods across nodes.
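A related point of confusion: there is no dedicated "node anti-affinity" field. You express it with nodeAffinity and a negative operator such as NotIn or DoesNotExist. A sketch that keeps a pod off nodes labeled disktype=hdd (the hdd label value is an assumption for this example):

```yaml
# Node "anti-affinity": negate the match instead of using a separate field.
# This pod will never schedule onto a node labeled disktype=hdd.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: NotIn
          values:
          - hdd
```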
Summary
Create a pod YAML with nodeAffinity to specify which nodes the pod should run on.
Add podAntiAffinity to prevent pods from running on the same node as certain other pods.
Apply the YAML with kubectl and verify pod placement with 'kubectl get pods -o wide' and 'kubectl describe pod'.
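In practice these rules usually live in a Deployment's pod template rather than a bare Pod, so every replica automatically carries the label the anti-affinity rule matches on. A sketch tying the pieces together (the Deployment name and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app   # the label the anti-affinity rule matches on
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: example-app
            topologyKey: kubernetes.io/hostname
```

With required anti-affinity and kubernetes.io/hostname as the topologyKey, each of the three replicas lands on a different node; a fourth replica would stay Pending on a three-node cluster.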