What is the default behavior of pod-to-pod traffic in a Kubernetes cluster when no NetworkPolicy is applied?
Think about the default Kubernetes networking model before any policies are applied.
By default, Kubernetes allows all pod-to-pod traffic without restrictions unless NetworkPolicies are applied to restrict it.
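To opt out of this allow-all default, a deny-all ingress policy can be applied per namespace. A minimal sketch (the policy and namespace names are illustrative):

```yaml
# An empty podSelector selects every pod in the namespace,
# and declaring Ingress in policyTypes with no ingress rules
# means no inbound traffic is allowed to those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace   # illustrative namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```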
Given the following NetworkPolicy YAML applied to namespace dev, what will kubectl get pods show, and will a pod in dev receive traffic from a pod in prod?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: dev
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
Look at the from field and the podSelector scope.
kubectl get pods shows all pods Running as usual; NetworkPolicies never change pod status or listing. For connectivity, the empty spec.podSelector selects every pod in dev, and the podSelector inside from is scoped to the policy's own namespace, so ingress is allowed only from other pods in dev. Traffic from pods in other namespaces, such as prod, is blocked.
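If traffic from prod were actually desired, the from clause would need a namespaceSelector. A hedged sketch, assuming namespaces carry the standard kubernetes.io/metadata.name label (added automatically by Kubernetes since v1.21):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-prod   # illustrative name
  namespace: dev
spec:
  podSelector: {}
  ingress:
  - from:
    # namespaceSelector (not podSelector) is what crosses
    # namespace boundaries in a NetworkPolicy rule.
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: prod
```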
Which NetworkPolicy YAML correctly restricts egress traffic from all pods in the default namespace to only the IP block 10.0.0.0/24?
Check the policyTypes and the IP block CIDR carefully.
Option C correctly sets policyTypes to Egress and restricts egress to the IP block 10.0.0.0/24. Other options either use wrong policyTypes or wrong CIDR.
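The options themselves are not reproduced here, but a policy matching the correct answer's description would plausibly look like the following (the policy name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress   # illustrative name
  namespace: default
spec:
  podSelector: {}          # applies to all pods in default
  policyTypes:
  - Egress                 # egress is now restricted to the rules below
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24  # only this range is reachable
```

Note that a policy like this also blocks DNS lookups unless egress to the cluster DNS service on port 53 is explicitly allowed as an additional rule.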
You applied a NetworkPolicy to block all ingress traffic to pods in namespace test, but pods still receive traffic from other namespaces. What is the most likely reason?
Check the namespace where the NetworkPolicy is applied versus where the pods are.
NetworkPolicies are namespace-scoped. If applied in a different namespace than the pods, they do not affect those pods.
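A correctly scoped deny-all ingress policy must name the pods' namespace in metadata.namespace. A minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: test    # must match the namespace of the target pods
spec:
  podSelector: {}    # selects all pods in namespace test
  policyTypes:
  - Ingress          # no ingress rules listed, so all ingress is denied
```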
You want to restrict traffic so that pods in namespace frontend can only receive traffic from pods in namespace backend. Which sequence of steps correctly achieves this?
Think about both ingress and egress controls and where NetworkPolicies apply.
Ingress is enforced on the receiving side: create an ingress NetworkPolicy in frontend whose podSelector selects all pods and whose from clause uses a namespaceSelector matching backend. That policy alone blocks ingress from every other source. A matching egress policy in backend is only required if backend already has a default-deny egress policy in place.
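The key step can be sketched as an ingress policy in frontend, assuming namespaces carry the standard kubernetes.io/metadata.name label (added automatically by Kubernetes since v1.21):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-backend   # illustrative name
  namespace: frontend        # enforced where traffic is received
spec:
  podSelector: {}            # selects all pods in frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: backend
```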