In Kubernetes, the scheduler decides which node will run a new Pod. What is the main factor the scheduler uses to make this decision?
Think about how the scheduler ensures Pods have enough CPU and memory.
The scheduler checks each node's available resources and places the Pod on a node whose free capacity can satisfy the Pod's resource requests and other scheduling constraints.
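The resource requests the scheduler evaluates are declared per container. A minimal sketch (the Pod name and values are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"         # scheduler only considers nodes with this much free CPU
        memory: "128Mi"     # and this much free memory
      limits:
        cpu: "500m"
        memory: "256Mi"
```

Note that scheduling decisions are based on requests, not limits: limits only cap what the running container may consume.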
You create a Pod with a nodeSelector that requires label disktype=ssd. What will happen if no nodes have this label?
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: nginx
  nodeSelector:
    disktype: ssd
EOF
Consider what happens when no nodes match the Pod's scheduling constraints.
If no nodes have the required label, the scheduler cannot place the Pod, so it stays Pending until a matching node is available.
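A nodeSelector is a hard requirement. If you would rather express a preference that still allows scheduling when no node matches, node affinity has a "preferred" form; a sketch (the label key is carried over from the question):

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
```

With this rule the scheduler favors ssd-labeled nodes but will still place the Pod elsewhere if none exist, avoiding the Pending state described above.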
Put these steps in the correct order that the Kubernetes scheduler uses to place a Pod on a node.
Think about filtering before scoring.
The scheduler first filters nodes that can't run the Pod, then scores the remaining nodes, selects the best one, and finally binds the Pod to it.
A Pod is stuck in the Pending state even though the cluster has nodes with sufficient resources. What scheduling-related cause could explain this?
Focus on scheduling constraints, not container runtime issues.
If the Pod's nodeSelector does not match any node, the scheduler cannot place it, so it stays Pending even if nodes have resources.
You want to deploy multiple replicas of a Pod to ensure high availability. Which scheduling feature helps spread Pods across nodes to avoid single points of failure?
Think about spreading Pods to avoid failure impact.
Pod anti-affinity rules tell the scheduler to avoid placing matching Pods on the same node, so the failure of a single node does not take down all replicas.
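Anti-affinity is declared in the Pod template so every replica repels its peers. A minimal sketch (the Deployment name and `app: web` label are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: never co-locate two "app: web" Pods on one node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: nginx
```

With the required form, a fourth replica on a three-node cluster would stay Pending; the preferred form (`preferredDuringSchedulingIgnoredDuringExecution`) spreads Pods where possible without blocking scheduling.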