Proxy timeouts in Nginx - Time & Space Complexity
When nginx acts as a proxy, it waits for responses from upstream servers. Analyzing time complexity tells us how the total time nginx spends waiting changes as the number of requests increases.
Analyze the time complexity of the following nginx proxy timeout settings.
```nginx
proxy_connect_timeout 10s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;

location /api/ {
    proxy_pass http://backend_server;
}
```
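For context, these directives can also be scoped to a single `location`. A minimal sketch of where they might sit in a full configuration (the upstream address, port, and `server_name` below are illustrative placeholders, not from the original snippet):

```nginx
# Hypothetical full config; 127.0.0.1:8080 and example.com are placeholders.
upstream backend_server {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name example.com;

    location /api/ {
        proxy_pass http://backend_server;
        # Timeouts applied only to this location:
        proxy_connect_timeout 10s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
    }
}
```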
This snippet sets how long nginx waits to connect, send, and read from the backend server before giving up.
Identify the repeating operations (the configuration equivalent of loops, recursion, or array traversals in code).
- Primary operation: Handling each incoming request and waiting for backend response within timeout limits.
- How many times: Once per request, for every request nginx receives.
As the number of requests increases, nginx handles each request independently, waiting up to its timeouts for each one. (Timeouts apply per connection, so concurrent requests wait in parallel; the total time spent waiting still scales with the request count.)
| Input Size (n) | Approx. Total Waiting |
|---|---|
| 10 | Each request can wait up to ~70 s in the worst case (10 s connect + 30 s send + 30 s read); total wait scales with 10 requests. |
| 100 | Total wait scales with 100 requests, since each request waits independently. |
| 1000 | Total wait scales with 1000 requests, each handled separately. |
Pattern observation: Total waiting time grows linearly with the number of requests.
Time Complexity: O(n)
This means the total waiting time grows directly in proportion to the number of requests nginx handles.
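The linear relationship above can be sketched numerically. This is a simplified worst-case model, not nginx internals: it assumes every request exhausts all three timeout phases, and the constant names simply mirror the directive values from the config.

```python
# Sketch: total worst-case wait grows linearly with request count.
# Values mirror the config above; the model is illustrative only.
CONNECT_TIMEOUT = 10  # proxy_connect_timeout, seconds
SEND_TIMEOUT = 30     # proxy_send_timeout, seconds
READ_TIMEOUT = 30     # proxy_read_timeout, seconds

def worst_case_wait(num_requests: int) -> int:
    """Upper bound on total seconds spent waiting, assuming every
    request exhausts every phase (connect, then send, then read)."""
    per_request = CONNECT_TIMEOUT + SEND_TIMEOUT + READ_TIMEOUT
    return num_requests * per_request

for n in (10, 100, 1000):
    print(n, worst_case_wait(n))  # 700, 7000, 70000: 10x requests -> 10x wait
```

Doubling the request count doubles the bound, which is exactly what O(n) describes.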
[X] Wrong: "Timeouts cause nginx to wait only once, no matter how many requests come in."
[OK] Correct: Each request is handled separately, so nginx waits for each one up to the timeout, making total wait time grow with request count.
Understanding how proxy timeouts affect request handling helps you explain server behavior under load, a useful skill for real-world system design and troubleshooting.
What if we changed proxy_read_timeout to a much smaller value? How would the time complexity change?
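One way to reason about it, using the same illustrative worst-case model as above: shrinking `proxy_read_timeout` reduces the per-request constant, but each request still waits once, so the growth remains O(n).

```python
# Hypothetical worst-case model; timeout parameters are illustrative.
def worst_case_wait(num_requests: int, connect: int = 10,
                    send: int = 30, read: int = 30) -> int:
    # Each request waits at most connect + send + read seconds.
    return num_requests * (connect + send + read)

# Smaller read timeout: smaller constant factor, same linear growth.
print(worst_case_wait(100, read=30))  # 7000
print(worst_case_wait(100, read=5))   # 4500
```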