Kubernetes · DevOps · ~10 min read

Taints and tolerations in Kubernetes - Step-by-Step Execution

Process Flow - Taints and tolerations
Add Taint to Node → Pod Scheduled? → Pod Allowed
This flow shows how a node with a taint affects pod scheduling. Pods must have matching tolerations to be allowed on tainted nodes.
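Before applying pods, you can check which taints a node currently carries. A quick way (assuming a node named node1, as in this walkthrough) is:

```shell
# Print the taints recorded on node1's spec; empty output means no taints.
kubectl get node node1 -o jsonpath='{.spec.taints}'
```

This requires a running cluster with a node named node1.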
Execution Sample
Kubernetes
kubectl taint nodes node1 key=value:NoSchedule
# pod.yaml includes a toleration for key=value:NoSchedule
kubectl apply -f pod.yaml
kubectl get pods
kubectl describe pod pod1
This sequence adds a taint to a node, applies a pod with matching tolerations, then checks pod status and details.
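The walkthrough does not show the contents of pod.yaml. A minimal sketch that would tolerate the taint above (the pod name and container image are assumptions) looks like:

```yaml
# Hypothetical pod.yaml: a pod that tolerates key=value:NoSchedule.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: app
      image: nginx   # placeholder image
  tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
```

The toleration must match the taint's key, value, and effect for the scheduler to place this pod on the tainted node.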
Process Table
| Step | Action | Node State | Pod Tolerations | Scheduling Result | Notes |
|---|---|---|---|---|---|
| 1 | Add taint key=value:NoSchedule to node1 | node1 tainted with key=value:NoSchedule | N/A | N/A | Node now rejects pods without a matching toleration |
| 2 | Apply pod1 with toleration key=value:NoSchedule | node1 tainted | tolerates key=value:NoSchedule | Pod scheduled | Pod allowed on tainted node due to toleration |
| 3 | Apply pod2 without tolerations | node1 tainted | none | Pod pending | Pod rejected by scheduler due to taint |
| 4 | Describe pod1 | node1 tainted | tolerates key=value:NoSchedule | Running | Pod is running on the tainted node |
| 5 | Describe pod2 | node1 tainted | none | Pending | Pod waiting for a suitable node |
| 6 | Remove taint from node1 | no taints | N/A | N/A | Node now accepts all pods |
| 7 | pod2 scheduled after taint removal | no taints | none | Pod scheduled | pod2 now runs on node1 |
💡 Pods without matching tolerations cannot schedule on tainted nodes; removing taint allows scheduling.
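Step 6's removal uses kubectl's trailing-dash syntax: the same taint specification used when adding, with a `-` appended.

```shell
# Remove the taint added in step 1 (note the trailing dash).
kubectl taint nodes node1 key=value:NoSchedule-
```

This requires a running cluster with the taint present on node1.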
Status Tracker
| Variable | Start | After Step 1 | After Step 2 | After Step 3 | After Step 6 | Final |
|---|---|---|---|---|---|---|
| node1_taints | none | key=value:NoSchedule | key=value:NoSchedule | key=value:NoSchedule | none | none |
| pod1_tolerations | none | none | key=value:NoSchedule | key=value:NoSchedule | key=value:NoSchedule | key=value:NoSchedule |
| pod2_tolerations | none | none | none | none | none | none |
| pod1_status | N/A | N/A | Scheduled | Scheduled | Scheduled | Running |
| pod2_status | N/A | N/A | Pending | Pending | Scheduled | Running |
Key Moments - 3 Insights
Why does pod2 stay pending even though node1 exists?
Node1 carries the taint key=value:NoSchedule and pod2 has no matching toleration, so the scheduler blocks pod2 from running on node1 (see Process Table, step 3).
What allows pod1 to run on the tainted node?
Pod1 has a toleration matching the node's key=value:NoSchedule taint, so the scheduler permits it to run on node1 despite the taint (see Process Table, step 2).
What happens when the taint is removed from node1?
All pods, including those without tolerations like pod2, can schedule on node1 because the taint no longer blocks them (see Process Table, steps 6 and 7).
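The matching rule behind these three answers can be sketched as a toy model in Python. This is a deliberate simplification, not the real scheduler: it handles only the Equal operator and the NoSchedule effect.

```python
# Toy model of NoSchedule taint/toleration matching.
# Simplified: only the "Equal" operator and the NoSchedule effect.

def tolerates(taint, toleration):
    """Return True if a single toleration matches a single taint."""
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint["value"]
            and toleration.get("effect") == taint["effect"])

def can_schedule(node_taints, pod_tolerations):
    """A pod schedules only if every NoSchedule taint is tolerated."""
    return all(
        any(tolerates(taint, tol) for tol in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

node1 = [{"key": "key", "value": "value", "effect": "NoSchedule"}]
pod1 = [{"key": "key", "value": "value", "effect": "NoSchedule"}]  # matching toleration
pod2 = []  # no tolerations

print(can_schedule(node1, pod1))  # True  -> pod1 is scheduled
print(can_schedule(node1, pod2))  # False -> pod2 stays Pending
print(can_schedule([], pod2))     # True  -> taint removed, pod2 schedules
```

The empty-taints case at the end mirrors step 6 of the table: with no taints left, there is nothing to tolerate.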
Visual Quiz - 3 Questions
Test your understanding
Look at the execution table at step 3. What is the scheduling result for pod2?
A. Pod running
B. Pod scheduled
C. Pod pending
D. Pod deleted
💡 Hint
Check the 'Scheduling Result' column for step 3 in the Process Table.
At which step is the taint removed from node1?
A. Step 2
B. Step 6
C. Step 4
D. Step 7
💡 Hint
Look for the action mentioning 'Remove taint' in the Process Table.
If pod1 did not have the toleration, what would happen at step 2?
A. Pod1 would be pending
B. Pod1 would be scheduled
C. Pod1 would be deleted
D. Pod1 would run on another node
💡 Hint
Refer to the Process Flow logic and the scheduling results for pods without tolerations.
Concept Snapshot
Taints mark nodes to repel pods.
Pods must have matching tolerations to run on tainted nodes.
Taint format: key=value:effect (e.g., NoSchedule).
Tolerations in pod spec allow ignoring taints.
Without toleration, pods stay pending on tainted nodes.
Removing taint allows all pods to schedule.
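One nuance worth noting alongside the snapshot: a toleration does not have to match a specific value. With operator: Exists it matches any taint with the given key, as in this illustrative fragment:

```yaml
# Tolerate any taint with this key, regardless of its value.
tolerations:
  - key: "key"
    operator: "Exists"
    effect: "NoSchedule"
```

This is useful when a node's taint value varies but pods should tolerate the whole key.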
Full Transcript
Taints and tolerations control pod scheduling on Kubernetes nodes. When a node is tainted, it repels pods unless they have a matching toleration. The flow starts by adding a taint to a node. Then, when pods are scheduled, the scheduler checks if the pod tolerates the node's taint. If yes, the pod runs on the node; if not, the pod remains pending. Removing the taint allows all pods to schedule on the node. This mechanism helps control workload placement by marking nodes with special conditions and letting only certain pods run there.