Streaming and chunked transfer in Nginx - Time & Space Complexity
When nginx streams data using chunked transfer encoding, it sends pieces of the response as they become ready instead of buffering the whole payload first.
We want to understand how the time to send data grows as the data size increases.
Analyze the time complexity of the following nginx configuration snippet.
```nginx
location /stream {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    chunked_transfer_encoding on;
}
```
This configuration proxies requests to the backend over HTTP/1.1 (chunked encoding requires at least HTTP/1.1), clears the Connection header so upstream keepalive connections work, and keeps chunked transfer encoding enabled (it is on by default in nginx), so the response streams from the backend to the client in chunks.
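The per-chunk behavior behind this analysis can be modeled with a short Python sketch. The 8 KB chunk size here is an illustrative assumption, not nginx's actual buffer size, which depends on configuration:

```python
def stream_in_chunks(data: bytes, chunk_size: int = 8_192):
    """Model of chunked transfer: yield the payload piece by piece.

    Each iteration does O(chunk_size) work, and there are about
    len(data) / chunk_size iterations, so the total work is O(n).
    """
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

payload = b"x" * 100_000  # 100 KB of dummy data
chunks = list(stream_in_chunks(payload))
print(len(chunks))  # → 13  (ceil(100000 / 8192))
```

The generator mirrors what the server does conceptually: constant-bounded work per chunk, repeated once per chunk until the payload is exhausted.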
Identify the operations that repeat, such as loops, recursion, or array traversals.
- Primary operation: Sending each chunk of data as it arrives.
- How many times: once per chunk, i.e., roughly the total data size divided by the chunk size.
As the total data size grows, the number of chunks sent grows roughly in proportion.
| Input Size (n bytes) | Approx. Number of Chunks (fixed chunk size) |
|---|---|
| 10 KB | baseline number of chunks |
| 100 KB | about 10x the baseline |
| 1 MB | about 100x the baseline |
Pattern observation: The number of operations grows linearly with data size.
Time Complexity: O(n)
This means the time to send data grows directly in proportion to the data size.
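The linear pattern in the table above can be checked with a quick calculation. The 1 KB chunk size is chosen only to make the ratios come out cleanly; it is an assumption, not an nginx default:

```python
CHUNK = 1_000  # illustrative chunk size in bytes, not an nginx default

def chunks_needed(n: int) -> int:
    """Chunks required to send n bytes, using ceiling division."""
    return -(-n // CHUNK)

for size in (10_000, 100_000, 1_000_000):  # 10 KB, 100 KB, 1 MB
    print(size, chunks_needed(size))
# 10x more data → 10x more chunks: the chunk count grows linearly in n.
```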
[X] Wrong: "Chunked transfer sends all data instantly, so time does not grow with size."
[OK] Correct: Even though data is sent in chunks, each chunk takes time to send, so total time grows with total data size.
Understanding how streaming scales helps you explain real-world server behavior clearly and confidently.
"What if chunk sizes were doubled? How would the time complexity change?"