TCP/UDP Load Balancer (Layer 4) in GCP - Time & Space Complexity
When using a TCP/UDP Load Balancer, it's important to understand how the number of connections affects processing time.
We want to understand how the load balancer handles incoming network requests as their number grows. Specifically, we will analyze the time complexity of the load balancer forwarding connections.
```hcl
# Create a TCP/UDP (Layer 4) external passthrough load balancer
resource "google_compute_forwarding_rule" "tcp_udp_lb" {
  name                  = "tcp-udp-lb"
  region                = "us-central1" # replace with your region
  load_balancing_scheme = "EXTERNAL"
  port_range            = "80-90"
  target                = google_compute_target_pool.tp.id
  ip_protocol           = "TCP"
}

resource "google_compute_target_pool" "tp" {
  name   = "target-pool"
  region = "us-central1"
  # Target pool members are referenced as "zone/instance-name"
  instances = ["us-central1-a/instance-1", "us-central1-b/instance-2"]
}
```
This setup forwards TCP connections from clients to a pool of backend instances.
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: Forwarding each incoming TCP/UDP connection to a backend instance.
- How many times: Once per incoming connection.
As the number of incoming connections grows, the load balancer forwards each one individually.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 forwarding operations |
| 100 | 100 forwarding operations |
| 1000 | 1000 forwarding operations |
Pattern observation: The number of forwarding operations grows directly with the number of connections.
Time Complexity: O(n)
This means the load balancer handles each connection one by one, so more connections mean proportionally more work.
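The per-connection pattern above can be sketched with a small simulation. This is illustrative Python, not a GCP API: the round-robin backend choice and the operation counter are assumptions made for the sketch.

```python
from itertools import cycle

def forward_connections(n_connections, backends):
    """Simulate a Layer 4 load balancer: each incoming connection
    is forwarded to one backend (round-robin here), costing one
    forwarding operation. Total operations therefore grow as O(n)."""
    assignments = []
    backend_cycle = cycle(backends)
    for conn_id in range(n_connections):
        # One forwarding operation per connection
        assignments.append((conn_id, next(backend_cycle)))
    return assignments

backends = ["instance-1", "instance-2"]
for n in (10, 100, 1000):
    ops = len(forward_connections(n, backends))
    print(f"{n} connections -> {ops} forwarding operations")
```

Running the loop reproduces the table: the operation count matches the connection count exactly, which is the signature of linear growth.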
[X] Wrong: "The load balancer processes all connections at once, so time stays the same no matter how many connections arrive."
[OK] Correct: Each connection requires separate handling and forwarding, so total work grows with the number of connections.
Understanding how load balancers scale with connections helps you design systems that handle traffic smoothly and predict performance.
"What if the load balancer used connection multiplexing to handle multiple connections together? How would the time complexity change?"
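One way to reason about that question: if the load balancer could multiplex up to k connections into a single forwarding operation, the operation count would drop to roughly n/k. That is a constant-factor saving, so the time complexity remains O(n). A hypothetical sketch (the batch size and function name are assumptions, not GCP behavior):

```python
import math

def forwarding_ops(n_connections, multiplex_factor=1):
    """Operations needed when up to `multiplex_factor` connections
    can share one forwarding operation (ceiling division)."""
    return math.ceil(n_connections / multiplex_factor)

for n in (10, 100, 1000):
    plain = forwarding_ops(n)                      # one op per connection
    muxed = forwarding_ops(n, multiplex_factor=8)  # 8 connections per op
    # muxed is about n/8, but still grows linearly as n grows
    print(n, plain, muxed)
```

Multiplexing makes each step cheaper, but doubling the connections still doubles the work, so the growth rate is unchanged.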