Ingress and egress rules in Kubernetes - Time Complexity
We want to understand how the time to evaluate traffic grows as more rules are added: how does adding ingress and egress rules affect the time Kubernetes takes to enforce them?
Analyze the time complexity of the following Kubernetes NetworkPolicy snippet.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 3306
```
This policy controls which pods can send traffic to or receive traffic from the selected pods.
Identify the repeating operations: loops, recursion, and array traversals.
- Primary operation: the network plugin (CNI) that enforces NetworkPolicy checks each packet against the ingress and egress rules that apply to the selected pods.
- How many times: for each packet, it scans through all n rules until it finds a match or exhausts the list.
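The scan described above can be sketched in Python. This is a simplified model, not how any real CNI plugin is implemented; the `Rule` type and packet fields are illustrative assumptions.

```python
# Hypothetical sketch of per-packet rule matching.
# Rule fields (direction, source, port) are a simplification of the
# real NetworkPolicy spec, not the Kubernetes API.
from dataclasses import dataclass

@dataclass
class Rule:
    direction: str   # "ingress" or "egress"
    source: str      # allowed peer label or CIDR, flattened to a string
    port: int

def packet_allowed(packet: dict, rules: list) -> bool:
    """Check one packet against every rule: O(n) in the number of rules."""
    for rule in rules:                      # the linear scan
        if (rule.direction == packet["direction"]
                and rule.source == packet["source"]
                and rule.port == packet["port"]):
            return True                     # first match allows the packet
    return False                            # no match: default deny

rules = [Rule("ingress", "frontend", 3306)]
print(packet_allowed({"direction": "ingress", "source": "frontend", "port": 3306}, rules))  # True
print(packet_allowed({"direction": "ingress", "source": "backend", "port": 3306}, rules))   # False
```

In the worst case (a packet that matches no rule) every rule is examined, which is where the O(n) bound comes from.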
As the number of rules increases, the time to check each packet grows roughly in proportion.
| Input Size (n rules) | Approx. Operations per Packet |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
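The table above can be reproduced with a short count of comparisons, assuming the worst case where a packet matches no rule and every rule must be examined:

```python
# Count rule comparisons for one worst-case packet against n rules.
def checks_per_packet(n_rules: int) -> int:
    rules = list(range(n_rules))   # stand-ins for real rules
    comparisons = 0
    for _ in rules:                # the per-packet scan
        comparisons += 1           # one comparison per rule
    return comparisons

for n in (10, 100, 1000):
    print(n, checks_per_packet(n))  # 10 10, 100 100, 1000 1000
```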
Pattern observation: The time grows linearly as more rules are added.
Time Complexity: O(n) per packet
This means the time to evaluate each packet grows directly with the number of rules.
[X] Wrong: "Adding more rules won't affect performance much because checks happen instantly."
[OK] Correct: Each packet must be checked against every rule, so more rules mean more checks and more time.
Understanding how rule checks scale helps you design efficient network policies and troubleshoot performance issues confidently.
"What if Kubernetes used a hash map to organize rules by pod labels? How would the time complexity change?"