Ingress controllers (Nginx, Traefik) in Kubernetes - Time & Space Complexity
We want to understand how the work done by an Ingress controller grows as the number of incoming requests increases.
Specifically, how does processing time change when more requests or routes are handled?
Analyze the time complexity of this simplified Ingress controller request handling snippet.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
```
This Ingress defines two paths that the controller must check to route requests correctly.
Identify the repeated operations: loops, recursion, or array traversals.
- Primary operation: Checking each incoming request against the list of defined paths.
- How many times: For each request, the controller checks paths one by one until it finds a match.
As the number of requests or paths grows, the total work grows with both: more paths mean more checks per request, and more requests mean more scans overall.
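The linear scan described above can be sketched in Python. This is a simplified model of path matching, not the actual controller implementation; the path list and request paths are illustrative:

```python
def route(request_path, paths):
    """Linear scan: check each configured path prefix in order."""
    for entry in paths:
        if request_path.startswith(entry["path"]):
            return entry["service"]
    return None  # no matching rule: typically falls through to a default backend

# Paths from the example Ingress above.
paths = [
    {"path": "/app1", "service": "app1-service"},
    {"path": "/app2", "service": "app2-service"},
]

# Each request triggers up to len(paths) prefix checks,
# so r requests against p paths cost O(r * p) checks in total.
print(route("/app2/login", paths))  # -> app2-service
```

A request for `/app2/login` only matches after the `/app1` check fails, which is exactly why the number of configured paths shows up in the per-request cost.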
| Input Size (r requests, p paths) | Approx. Path Checks (worst case) |
|---|---|
| 10 requests, 2 paths | About 20 path checks |
| 100 requests, 10 paths | About 1,000 path checks |
| 1000 requests, 100 paths | About 100,000 path checks |
Pattern observation: The total work grows roughly as the product of the number of requests and the number of paths checked.
Time Complexity: O(r * p), where r is the number of requests and p is the number of configured paths.
This means the total processing time grows proportionally to the number of requests times the number of paths checked per request.
[X] Wrong: "The controller checks only one path per request, so time grows only with requests."
[OK] Correct: The controller must check paths in order until it finds a match, so more paths mean more checks per request.
Understanding how request routing scales helps you explain system behavior clearly and reason about performance in real deployments.
"What if the Ingress controller used a hash map to find paths instead of checking each one? How would the time complexity change?"