Choose the best description of what a Container Network Interface (CNI) plugin does in a Kubernetes cluster.
Think about how pods get their IP addresses and talk to each other.
The CNI plugin is responsible for setting up networking for containers, including IP address assignment and routing, enabling pod-to-pod communication.
What output would you expect from kubectl get pods -o wide if the CNI plugin is correctly installed and working?
Look for the presence of an IP address assigned to the pod.
When the CNI plugin is working, each pod has an address shown in the IP column. Without it, the IP column may be empty and pods may be stuck in ContainerCreating or Pending.
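As an illustration only (pod names, nodes, and IPs below are made up), a healthy cluster might show something like:

```shell
# Illustrative output; names, nodes, and addresses are hypothetical.
kubectl get pods -o wide
# NAME    READY   STATUS    RESTARTS   AGE   IP            NODE
# web-1   1/1     Running   0          2m    10.244.1.12   node-a
# db-1    1/1     Running   0          2m    10.244.2.7    node-b
```

The key signal is a populated IP column with addresses drawn from the pod CIDR.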
Given this CNI configuration JSON snippet, which option correctly identifies the error?
{
  "cniVersion": "0.3.1",
  "name": "my-cni",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "rangeStart": "10.244.0.10",
    "rangeEnd": "10.244.0.5",
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
Check the IP range values for logical order.
The IP range start must be less than or equal to the range end. Here, rangeStart is 10.244.0.10 but rangeEnd is 10.244.0.5, which is invalid.
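One way to fix it is to swap the two values so the range is ordered; the addresses here simply mirror the snippet above, and any ordered pair inside the subnet would work:

```json
"ipam": {
  "type": "host-local",
  "subnet": "10.244.0.0/16",
  "rangeStart": "10.244.0.5",
  "rangeEnd": "10.244.0.10",
  "routes": [{"dst": "0.0.0.0/0"}]
}
```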
A pod in your Kubernetes cluster cannot reach other pods on different nodes. You suspect a CNI plugin issue. Which command helps you check if the CNI plugin is installed and running on a node?
CNI plugins often run as daemonsets in the kube-system namespace.
CNI plugins are usually deployed as daemonsets in the kube-system namespace so that one pod runs on every node. Running kubectl get daemonsets -n kube-system shows whether the CNI daemonset is deployed and whether its ready count matches the node count.
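A minimal check might look like the following; the daemonset's name depends on which CNI plugin (Calico, Flannel, Cilium, etc.) is installed, and the on-node paths shown are the conventional defaults, so treat the specifics as placeholders:

```shell
# List daemonsets in kube-system; the CNI plugin should appear here
# with READY matching the number of nodes.
kubectl get daemonsets -n kube-system

# On the node itself, the standard CNI locations should be populated
# (default paths; your distribution may configure others).
ls /etc/cni/net.d    # CNI network configuration files
ls /opt/cni/bin      # CNI plugin binaries
```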
What is the safest approach to upgrade a CNI plugin in a live Kubernetes cluster to avoid network downtime?
Consider how to keep network available while upgrading nodes.
Draining nodes one at a time ensures pods are safely evicted and rescheduled while the rest of the cluster keeps its network. Upgrading the CNI daemonset node by node with a rolling update avoids cluster-wide downtime.
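A rough per-node sequence is sketched below, assuming the CNI plugin ships as a daemonset; <node> and the manifest filename are placeholders:

```shell
# Evict workloads from one node; daemonset pods (including the CNI
# pod) are not evicted, hence --ignore-daemonsets.
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data

# Apply the new CNI daemonset manifest (hypothetical filename); with
# a RollingUpdate strategy and maxUnavailable=1 it rolls out one
# node's pod at a time.
kubectl apply -f cni-upgrade.yaml

# Verify the CNI pod on that node is Running again, then readmit it.
kubectl uncordon <node>
```

Repeating this per node keeps most of the cluster's pod network up throughout the upgrade.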