Comparing Load Balancer Types in GCP: Time Complexity
When choosing among GCP load balancer types, it's important to understand how the work they do grows as traffic increases. The goal is to see how the number of routing operations changes as more requests arrive. Below, we analyze the time complexity of handling incoming requests with different GCP load balancers.
```
// Example: HTTP(S) Load Balancer (layer 7)
for each incoming request {
    check URL and headers
    route to backend service
    backend processes request
}
```
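The layer-7 routing step above can be sketched in Python. This is a minimal illustration of URL-based backend selection; the URL map and backend service names are hypothetical, not real GCP resources or API calls:

```python
# Illustrative URL map: path prefix -> backend service name.
URL_MAP = {
    "/api": "api-backend-service",
    "/static": "static-backend-service",
}
DEFAULT_BACKEND = "web-backend-service"

def route_request(path: str) -> str:
    """Check the URL and pick a backend service: one routing
    decision per incoming request."""
    for prefix, backend in URL_MAP.items():
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND
```

For example, `route_request("/api/users")` selects the API backend, while an unmatched path falls through to the default backend. The key point is that this decision runs once for every request that arrives.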
```
// Example: TCP Proxy Load Balancer (layer 4)
for each incoming TCP connection {
    route connection to backend instance
    backend handles connection
}
```
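The layer-4 case can be sketched the same way. Here a new connection is simply assigned to a backend instance; round-robin selection and the instance names are illustrative assumptions, since the pseudocode doesn't specify a balancing policy:

```python
import itertools

# Hypothetical backend instances; round robin is one simple policy.
backends = ["instance-1", "instance-2", "instance-3"]
_next_backend = itertools.cycle(backends)

def route_connection() -> str:
    """One routing decision per incoming TCP connection."""
    return next(_next_backend)
```

Note that the load balancer never inspects the payload here: unlike the layer-7 example, the decision is per connection, not per request, but the cost still scales with how many connections arrive.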
These sequences show how requests or connections flow through each load balancer type. To find the complexity, look at what happens repeatedly as traffic grows.
- Primary operation: Routing each incoming request or connection to a backend.
- How many times: Once per request or connection received.
As the number of requests increases, the load balancer must handle more routing operations.
| Input Size (n) | Approx. Routing Operations |
|---|---|
| 10 | 10 routing operations |
| 100 | 100 routing operations |
| 1000 | 1000 routing operations |
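The table above can be reproduced with a small simulation. This sketch simply counts one routing decision per request, which is exactly the linear pattern shown:

```python
def routing_operations(num_requests: int) -> int:
    """Count routing work: one route-to-backend decision per request."""
    ops = 0
    for _ in range(num_requests):
        ops += 1  # one routing operation
    return ops

# Matches the table: 10 -> 10, 100 -> 100, 1000 -> 1000.
```

Doubling the traffic doubles the count, with no fixed upper bound, which is what O(n) growth means in practice.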
Pattern observation: The number of routing operations grows directly with the number of requests.
Time Complexity: O(n)
This means the work grows linearly with the number of incoming requests or connections.
[X] Wrong: "Load balancers handle all requests instantly, so time does not grow with more traffic."
[OK] Correct: Each request still needs routing, so more requests mean more work for the load balancer.
Understanding how load balancers scale with traffic helps you design systems that stay reliable as they grow.
What if the load balancer caches routing decisions? How would that affect the time complexity?
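One way to reason about that question: caching can make each individual routing decision cheaper (an O(1) dictionary lookup on a cache hit instead of re-evaluating URL rules), but every request still needs one lookup, so the total work over n requests remains O(n). A minimal sketch, with a hypothetical `compute_route` standing in for the full URL-inspection step:

```python
route_cache: dict = {}

def compute_route(path: str) -> str:
    # Hypothetical stand-in for full URL/header inspection.
    return "api-backend" if path.startswith("/api") else "web-backend"

def cached_route(path: str) -> str:
    if path not in route_cache:            # cache miss: compute once per path
        route_cache[path] = compute_route(path)
    return route_cache[path]               # cache hit: O(1) dict lookup
```

Caching lowers the constant factor per request, not the growth rate: n requests still cost n lookups.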