# Host-based routing in Kubernetes - Time & Space Complexity
We want to understand how request-routing time grows as more hosts are added to a Kubernetes Ingress: how does the system handle additional hosts, and how does that affect routing speed?
Analyze the time complexity of the following Kubernetes Ingress configuration snippet.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: host1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: host2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```
This snippet routes requests to different services based on the Host header of the incoming request.
Identify the repeated work: loops, recursion, or list traversals.
- Primary operation: Checking the host name against each rule in the Ingress rules list.
- How many times: Once per incoming request, the system compares the request host to each host rule until it finds a match.
As the number of host rules increases, the system checks more hosts to find the right one.
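A minimal sketch of this matching behavior, in Python rather than actual ingress-controller code (the rule list and service names are illustrative, taken from the snippet above):

```python
def route(host, rules):
    """Return the backend service for `host`, or None if no rule matches."""
    for rule_host, service in rules:  # one comparison per rule: O(n) worst case
        if host == rule_host:
            return service
    return None

# Rules mirroring the Ingress snippet above.
rules = [
    ("host1.example.com", "service1"),
    ("host2.example.com", "service2"),
]

print(route("host2.example.com", rules))  # → service2
print(route("unknown.example.com", rules))  # → None
```

A request for `host2.example.com` only matches on the second comparison; a host with no matching rule forces a scan of the entire list.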
| Input Size (n) | Approx. Operations |
|---|---|
| 10 hosts | Up to 10 host checks per request |
| 100 hosts | Up to 100 host checks per request |
| 1000 hosts | Up to 1000 host checks per request |
Pattern observation: The number of checks grows linearly with the number of hosts.
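The table above can be reproduced by counting comparisons directly. This is an illustrative Python sketch (not controller code); the synthetic host names are made up for the experiment:

```python
def count_checks(host, rule_hosts):
    """Count how many host comparisons a linear scan performs."""
    checks = 0
    for rule_host in rule_hosts:
        checks += 1
        if host == rule_host:
            break
    return checks

for n in (10, 100, 1000):
    rule_hosts = [f"host{i}.example.com" for i in range(n)]
    # Worst case: the request host matches no rule, so every rule is checked.
    worst = count_checks("nomatch.example.com", rule_hosts)
    print(f"{n} hosts -> {worst} checks")  # → 10, 100, 1000 checks
```

The worst-case check count equals the number of rules, which is exactly the linear growth the table shows.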
Time Complexity: O(n)
This means the routing time grows directly in proportion to the number of host rules.
[X] Wrong: "Routing time stays the same no matter how many hosts are added."
[OK] Correct: Each new host adds another check, so more hosts mean more work to find the right route.
Understanding how routing scales helps you design systems that stay fast as they grow. This skill shows you can think about real-world system behavior.
What if the routing used a hash map to find hosts instead of checking each one? How would the time complexity change?
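One way to explore that question is to store the rules in a dictionary keyed by host, so lookup becomes an average O(1) hash lookup instead of an O(n) scan. This is an illustrative sketch, not real controller code:

```python
# Rules keyed by host: lookup cost no longer grows with the number of rules.
rules_by_host = {
    "host1.example.com": "service1",
    "host2.example.com": "service2",
}

def route_by_map(host):
    """Return the backend service for `host` via a hash lookup, or None."""
    # A dict lookup hashes the key once; it does not iterate over every rule.
    return rules_by_host.get(host)  # average O(1)

print(route_by_map("host1.example.com"))  # → service1
print(route_by_map("unknown.example.com"))  # → None
```

With exact host names this reduces the average lookup to constant time; production proxies use a similar idea (for example, NGINX stores exact server names in a hash table). Wildcard hosts such as `*.example.com` complicate this, since they cannot be matched by a single exact-key lookup.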