Which of the following best describes the difference between requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution in Kubernetes node affinity?
Think about what happens if no nodes match the affinity rules.
requiredDuringSchedulingIgnoredDuringExecution enforces a hard rule: pods will only schedule on nodes that match the criteria. preferredDuringSchedulingIgnoredDuringExecution is a soft preference: the scheduler tries to place pods on matching nodes but will schedule elsewhere if needed.
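As an illustrative sketch of the two forms side by side (the labels disktype=ssd, zone=us-east-1a, the pod name, image, and weight are assumptions, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      # Hard rule: the pod schedules only on nodes labeled disktype=ssd.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      # Soft preference: among eligible nodes, favor zone us-east-1a.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - us-east-1a
  containers:
  - name: app
    image: nginx
```

Note that preferred terms carry a weight (1-100); the scheduler sums the weights of matching terms per node and favors higher-scoring nodes, but still schedules elsewhere if none match.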
Given a Kubernetes cluster with nodes labeled zone=us-east-1a and zone=us-east-1b, and a pod spec with this node affinity:
{
  "requiredDuringSchedulingIgnoredDuringExecution": {
    "nodeSelectorTerms": [
      {"matchExpressions": [{"key": "zone", "operator": "In", "values": ["us-east-1a"]}]}
    ]
  }
}
What will the output of kubectl get pods -o wide show for the node the pod is scheduled on?
Check the meaning of requiredDuringSchedulingIgnoredDuringExecution.
The requiredDuringSchedulingIgnoredDuringExecution affinity restricts scheduling to nodes carrying the specified label, so the pod will only run on nodes labeled zone=us-east-1a; the NODE column of kubectl get pods -o wide will show such a node.
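For reference, a minimal pod manifest carrying this rule in YAML (the pod name and image are illustrative) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod
spec:
  affinity:
    nodeAffinity:
      # Hard rule: only nodes labeled zone=us-east-1a are eligible.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - us-east-1a
  containers:
  - name: app
    image: nginx
```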
Which of the following YAML snippets correctly configures a pod to avoid scheduling on nodes that already run pods labeled app=frontend, using requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity?
Remember that avoiding co-location with other pods uses podAntiAffinity, not nodeAffinity or podAffinity.
Steering a pod away from nodes that run pods with certain labels uses podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution and a topologyKey. Option D correctly uses this syntax.
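A sketch of the kind of snippet option D describes (the choice of topologyKey: kubernetes.io/hostname is an assumption; it makes "same node" the unit of separation):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    # Do not schedule onto any node (hostname topology) that already
    # runs a pod labeled app=frontend.
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - frontend
      topologyKey: kubernetes.io/hostname
```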
A pod with this node affinity fails to schedule:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disk
          operator: In
          values:
          - ssd
What is the most likely reason?
Check if any nodes have the required label.
If no node carries the label disk=ssd, the requiredDuringSchedulingIgnoredDuringExecution rule cannot be satisfied, so the scheduler leaves the pod Pending with a FailedScheduling event.
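If the rule should be a preference rather than a strict requirement, it can be rewritten with preferredDuringSchedulingIgnoredDuringExecution so the pod still schedules when no node matches (the weight value is illustrative):

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    # Prefer nodes labeled disk=ssd, but fall back to any node
    # rather than staying Pending.
    - weight: 100
      preference:
        matchExpressions:
        - key: disk
          operator: In
          values:
          - ssd
```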
You want to deploy multiple replicas of a critical service to ensure high availability. Which pod anti-affinity configuration best helps spread the replicas across different nodes to avoid a single point of failure?
Think about how to guarantee pods do not land on the same node.
Using requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity with topologyKey: kubernetes.io/hostname guarantees that no two replicas land on the same node, so the failure of a single node cannot take down every replica.
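A sketch of this pattern in a Deployment (the name, label, replica count, and image are illustrative; the anti-affinity selector matches the Deployment's own pod label so replicas repel each other):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: critical-service
  template:
    metadata:
      labels:
        app: critical-service
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never place two app=critical-service pods on
          # the same node (hostname topology).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: critical-service
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx
```

One caveat worth knowing: with the required form, a cluster with fewer eligible nodes than replicas leaves the surplus replicas Pending, so the preferred form is sometimes chosen when best-effort spreading is acceptable.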