Least Connections in nginx - Time & Space Complexity
We want to understand how the time to pick a server grows as the number of upstream servers increases in nginx's least-connections load balancing.
How does nginx efficiently find the server with the fewest active connections?
Analyze the time complexity of this nginx configuration snippet, which uses least connections:
```nginx
upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```
This config tells nginx to send each request to the server with the fewest active connections. (When server weights are configured, nginx compares connection counts relative to each server's weight.)
Identify the repeated work: the loops, recursion, or array traversals that run per request.
- Primary operation: nginx checks the active connection count of each server in the upstream list.
- How many times: once per incoming request, nginx compares all servers to find the one with the fewest connections.
As the number of servers (n) grows, nginx must check each server's connection count to pick the least busy one.
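The per-request scan described above can be sketched as follows. This is a minimal Python model, not nginx's actual C implementation, and the server names and connection counts are made up for illustration:

```python
# Hypothetical model of least-connections selection: scan every backend
# once, keeping the one with the fewest active connections (O(n)).
def pick_least_conn(servers):
    """servers: list of dicts like {"name": ..., "active": ...}.
    Returns the server with the fewest active connections."""
    best = servers[0]
    for s in servers[1:]:          # one comparison per remaining server
        if s["active"] < best["active"]:
            best = s
    return best

backends = [
    {"name": "backend1.example.com", "active": 4},
    {"name": "backend2.example.com", "active": 1},
    {"name": "backend3.example.com", "active": 7},
]
print(pick_least_conn(backends)["name"])  # backend2.example.com
```

Because there is no shortcut, every server's count must be examined: the scan cannot stop early, since a later server might have fewer connections.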
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 comparisons |
| 100 | 100 comparisons |
| 1000 | 1000 comparisons |
Pattern observation: The number of operations grows directly with the number of servers.
Time Complexity: O(n) per request - the time to pick a server grows linearly with the number of servers.
Space Complexity: O(n) to store the server list itself; the scan needs only O(1) extra space to track the current best candidate.
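The pattern in the table can be checked with a small counting sketch (plain Python, not nginx code): a linear scan over n servers performs n-1 comparisons, which matches the table's "approx. n operations".

```python
# Count the comparisons a linear least-connections scan performs
# for n servers (connection counts are arbitrary; the count is the same).
def comparisons_for(n):
    conns = list(range(n, 0, -1))  # simulated connection counts
    best, count = conns[0], 0
    for c in conns[1:]:
        count += 1                 # one comparison per remaining server
        if c < best:
            best = c
    return count

for n in (10, 100, 1000):
    print(n, comparisons_for(n))   # 9, 99, 999 -- i.e. n-1, approx. n
```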
[X] Wrong: "nginx instantly knows the least connected server without checking all servers."
[OK] Correct: nginx must compare every server's connection count on each request, so selecting a server takes longer as the number of servers grows.
Understanding how load balancers pick servers helps you explain real-world system behavior and shows you can think about efficiency in infrastructure.
"What if nginx used a priority queue to track servers by active connections? How would the time complexity change?"
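One possible answer, sketched in Python (nginx itself does not use a heap for least_conn; this is an illustrative alternative): keeping servers in a min-heap keyed on active connections makes selection O(log n) per request instead of O(n).

```python
# Hypothetical heap-based least-connections picker. Selection becomes
# two O(log n) heap operations (pop the least-loaded server, push it
# back with its count incremented) instead of an O(n) scan.
import heapq

class LeastConnHeap:
    def __init__(self, names):
        # heap entries: (active_connections, server_name)
        self.heap = [(0, name) for name in names]
        heapq.heapify(self.heap)

    def acquire(self):
        active, name = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (active + 1, name))
        return name

lb = LeastConnHeap(["backend1", "backend2", "backend3"])
print([lb.acquire() for _ in range(4)])
# ['backend1', 'backend2', 'backend3', 'backend1']
```

The trade-off: when a connection closes, its server's count must be decremented, and a plain binary heap cannot locate an arbitrary entry efficiently, so real implementations would need an index into the heap or lazy deletion. With typical upstream pools of a handful to a few dozen servers, the simple O(n) scan is both fast and easier to keep correct.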