Upstream blocks in Nginx - Time & Space Complexity
When using nginx upstream blocks, it's important to understand how the number of backend servers affects request handling time.
We want to know how the processing time grows as we add more servers to the upstream group.
Analyze the time complexity of this nginx upstream configuration snippet.
```nginx
upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```
This configuration defines a group of backend servers and proxies requests to them using the `least_conn` load-balancing method. For each incoming request, nginx scans the upstream list and selects the server with the fewest active connections.
- Primary operation: Selecting a backend server from the upstream list.
- How many times: Once per incoming request.
As the number of backend servers grows, each request's selection pass must examine more servers before it can choose one.
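The selection step can be sketched in Python. This is a simplified model of least-connections selection, not nginx's actual implementation; the server names and connection counts are made up for illustration.

```python
# Simplified model of least_conn: scan every server and pick the one
# with the fewest active connections. One full pass = n checks.
def least_conn(servers):
    best = None
    for server in servers:  # one check per server in the upstream list
        if best is None or server["active"] < best["active"]:
            best = server
    return best

servers = [
    {"name": "backend1.example.com", "active": 4},
    {"name": "backend2.example.com", "active": 1},
    {"name": "backend3.example.com", "active": 7},
]
print(least_conn(servers)["name"])  # backend2.example.com
```

Because the loop must visit every entry to be sure it found the minimum, the work per request scales with the length of the server list.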
| Number of Servers (n) | Approx. Operations per Request |
|---|---|
| 3 | 3 checks (to pick a server) |
| 10 | 10 checks |
| 100 | 100 checks |
Pattern observation: The selection work grows linearly with the number of servers.
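A small instrumented sketch (assuming one full scan per request, as modeled above) reproduces the counts in the table:

```python
# Count how many servers a single least-connections scan touches
# for upstream groups of different sizes.
def checks_to_pick(n):
    servers = [{"active": i % 5} for i in range(n)]  # dummy connection counts
    checks = 0
    best = None
    for server in servers:
        checks += 1  # every server is examined exactly once
        if best is None or server["active"] < best["active"]:
            best = server
    return checks

for n in (3, 10, 100):
    print(n, checks_to_pick(n))  # 3 -> 3, 10 -> 10, 100 -> 100
```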
Time Complexity: O(n)
This means the time to select a backend server grows directly with the number of servers in the upstream block.
[X] Wrong: "Adding more servers won't affect request handling time because nginx picks instantly."
[OK] Correct: Nginx checks each server to decide where to send the request, so more servers mean more checks and longer selection time.
Understanding how nginx handles upstream servers helps you explain load balancing efficiency and scaling in real systems.
"What if nginx used a hash-based method to select servers instead of checking each one? How would the time complexity change?"
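One way to explore that question is a toy hash-based selector. The sketch below is not nginx's `hash` or `ip_hash` implementation; it only shows the core idea that hashing a request key maps directly to a server index with no per-server scan, making selection O(1) regardless of list size. (A production scheme would typically use consistent hashing so that adding or removing a server remaps only a fraction of keys.)

```python
import hashlib

# Hash-based selection: hash a request key (e.g. the client IP) and
# take it modulo the server count. No loop over servers, so O(1).
def hash_pick(servers, key):
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = [
    "backend1.example.com",
    "backend2.example.com",
    "backend3.example.com",
]
# The same key always maps to the same server.
print(hash_pick(servers, "203.0.113.7"))
```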