HTTP Requests with curl in Bash Scripts: Time & Space Complexity
When a script issues HTTP requests, it is important to understand how its total running time grows as the number of curl requests increases.
Analyze the time complexity of the following code snippet.
```bash
for url in "$@"; do
    curl -s "$url" > /dev/null   # fetch silently, discard the response body
    echo "Fetched $url"
done
```
This script takes a list of URLs and fetches each one using curl, printing a message after each fetch.
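For example, saved as `fetch.sh` (a hypothetical filename) and made executable, the loop could be invoked with any number of URLs as arguments:

```bash
#!/usr/bin/env bash
# Save the loop from above as fetch.sh (illustrative name) and mark it executable.
cat > fetch.sh <<'EOF'
#!/usr/bin/env bash
for url in "$@"; do
    curl -s "$url" > /dev/null   # fetch silently, discard the response body
    echo "Fetched $url"
done
EOF
chmod +x fetch.sh

# ./fetch.sh https://example.com https://example.org
# Fetched https://example.com
# Fetched https://example.org
```

Note that the `echo` runs whether or not curl succeeds, since the snippet does not check curl's exit status.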
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: The for-loop that runs curl for each URL.
- How many times: Once for each URL passed to the script.
Each additional URL adds one more curl request, so the total time grows directly with the number of URLs.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 curl requests |
| 100 | 100 curl requests |
| 1000 | 1000 curl requests |
Pattern observation: The time grows linearly as you add more URLs.
Time Complexity: O(n)
This means the total time increases in direct proportion to the number of URLs you fetch.
[X] Wrong: "Running multiple curl commands in a loop happens instantly or all at once."
[OK] Correct: Each curl command waits for the server response before moving on, so the time adds up with each request.
Understanding how loops with network calls grow helps you write efficient scripts and explain your reasoning clearly in interviews.
"What if we ran multiple curl requests in parallel instead of one after another? How would the time complexity change?"