Keepalive connections in Nginx - Time & Space Complexity
We want to understand how the time cost changes when nginx handles multiple requests using keepalive connections.
How does keeping connections open affect the work nginx does as requests grow?
Analyze the time complexity of the following nginx configuration snippet.
```nginx
upstream backend {
    server backend1.example.com;
    server backend2.example.com;

    # Keep up to 16 idle connections to the upstream servers open (per worker)
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;

        # Upstream keepalive requires HTTP/1.1 and clearing the Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```
This config defines an upstream pool of two backend servers and keeps up to 16 idle connections to them open in each worker process, so later client requests can reuse an existing backend connection instead of opening a new one. The `proxy_http_version 1.1` and empty `Connection` header directives are required for upstream keepalive to take effect.
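The mechanism is easy to see outside nginx as well. Below is a minimal sketch using only Python's standard library (the local server is a hypothetical stand-in for a backend): an HTTP/1.1 server and a client that sends several requests over a single TCP connection, which is the same reuse idea nginx applies to its upstream pool.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # HTTP/1.1 keeps the connection open between requests by default,
    # mirroring what proxy_http_version 1.1 enables toward upstreams.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Stand-in backend on a free local port (port 0 = let the OS choose).
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, several requests: keepalive in miniature.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # drain the body before reusing the connection
    statuses.append(resp.status)
conn.close()
server.shutdown()
```

All three requests travel over the same socket; only the first pays the TCP connection setup cost.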
Identify the repeated work. The config itself contains no loops or recursion; the repetition comes from runtime request handling.
- Primary operation: handling each incoming client request, reusing an idle backend connection when one is available and opening a new one otherwise.
- How many times: once per client request, so n times for n requests.
As the number of client requests increases, each nginx worker maintains a pool of up to 16 idle backend connections and reuses them rather than opening a new connection for every request.
| Requests (n) | Approx. Operations |
|---|---|
| 10 | 10 requests handled; up to 10 new connections on first use |
| 100 | 100 requests handled; up to 16 new connections, 84 reuse pooled ones |
| 1000 | 1000 requests handled; up to 16 new connections, 984 reuse pooled ones |
Pattern observation: Connection reuse helps reduce work per request but total work still grows roughly linearly with requests.
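The table's counts can be reproduced with a toy model. This sketch assumes requests arrive in concurrent batches of 16 and that the pool simply caps idle connections at 16; real nginx behavior is per worker and more nuanced, so treat it as an illustration, not a spec.

```python
def simulate(n_requests, pool_cap=16, concurrency=16):
    """Toy model: count new vs. reused backend connections."""
    idle_pool = 0        # idle keepalive connections currently available
    opened = reused = 0
    remaining = n_requests
    while remaining > 0:
        batch = min(concurrency, remaining)
        for _ in range(batch):
            if idle_pool > 0:
                idle_pool -= 1   # take an idle connection from the pool
                reused += 1
            else:
                opened += 1      # no idle connection: open a new one
        remaining -= batch
        # Batch finishes; connections go back to the pool, capped at pool_cap.
        idle_pool = min(pool_cap, idle_pool + batch)
    return opened, reused

for n in (10, 100, 1000):
    opened, reused = simulate(n)
    print(f"n={n}: opened={opened}, reused={reused}")
```

Under these assumptions, 100 requests open 16 connections and reuse 84, and 1000 requests open 16 and reuse 984, matching the table; total operations still scale with n.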
Time Complexity: O(n)
Space Complexity: O(1)
The work nginx does grows linearly with the number of requests even with keepalive reuse, while the idle-connection pool stays capped at 16 per worker regardless of load.
[X] Wrong: "Keepalive connections make nginx handle any number of requests instantly without extra work."
[OK] Correct: Even with reuse, each request still needs processing; keepalive just reduces connection setup time, not total work.
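To make the correct statement concrete, here is a back-of-the-envelope cost model. The cost constants are made-up illustrative numbers, and the "only the first pool_cap requests pay setup" rule is the same toy assumption as above: keepalive shrinks the coefficient, but both configurations remain O(n).

```python
def total_cost(n, setup_cost=5.0, request_cost=1.0,
               keepalive=True, pool_cap=16):
    """Toy cost model: per-request work plus setup for new connections."""
    # Without keepalive every request pays connection setup;
    # with keepalive only the first pool_cap requests do (toy assumption).
    new_connections = min(n, pool_cap) if keepalive else n
    return n * request_cost + new_connections * setup_cost

# Keepalive removes the per-request setup term, but growth stays linear:
print(total_cost(1000, keepalive=False))  # 1000*1.0 + 1000*5.0 = 6000.0
print(total_cost(1000, keepalive=True))   # 1000*1.0 +   16*5.0 = 1080.0
```

Doubling n roughly doubles the total in both cases; reuse changes the constant factor, not the asymptotic shape.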
Understanding how connection reuse affects request handling shows you can think about real server efficiency, a useful skill in many roles.
"What if we increased the keepalive value from 16 to 100? How would the time complexity change?"