curl for HTTP requests in Linux CLI - Time & Space Complexity
When using curl to make HTTP requests, it helps to understand how the total running time grows as you make more requests or handle larger responses. The question to ask is: how does the total work change when the number or size of requests changes?
Analyze the time complexity of the following curl command used in a script.
```bash
for url in $(cat urls.txt); do
  curl -s "$url" -o /dev/null
  echo "Fetched $url"
done
```
This script reads a list of URLs from a file and fetches each one silently, discarding the output.
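As a side note, `for url in $(cat urls.txt)` splits on any whitespace and expands glob characters, so a `while read` loop is the more robust way to iterate lines. Here is a minimal sketch; `fetch` is a hypothetical stand-in for the real `curl -s "$url" -o /dev/null` call so the example runs without network access:

```shell
# Generate a small urls.txt for the demo (assumed file name from the article).
printf '%s\n' "https://example.com/a" "https://example.com/b" > urls.txt

# Hypothetical stand-in for: curl -s "$1" -o /dev/null
fetch() { echo "Fetched $1"; }

count=0
# while read -r handles spaces in lines and avoids glob expansion,
# unlike the $(cat urls.txt) form.
while IFS= read -r url; do
  fetch "$url"
  count=$((count + 1))
done < urls.txt
echo "Total requests: $count"
```

Either loop performs one request per line, so the complexity analysis below applies to both.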
Look for repeated actions that take time.
- Primary operation: the `curl` command fetching each URL.
- How many times: once for each URL in the list.
As the number of URLs grows, the total time grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 HTTP requests |
| 100 | 100 HTTP requests |
| 1000 | 1000 HTTP requests |
Pattern observation: Doubling the number of URLs doubles the total requests and time.
Time Complexity: O(n)
This means the total time grows linearly with the number of URLs you fetch. Space usage, by contrast, stays O(1): each response is streamed to /dev/null, so memory does not grow with the number of URLs.
[X] Wrong: "Fetching multiple URLs with curl happens instantly or all at once."
[OK] Correct: Each curl call waits for the server response before moving on, so time adds up with each request.
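You can see this additive behavior without touching the network. In the sketch below, `sleep 0.2` is a stand-in for one curl call waiting roughly 0.2 s on a server; four sequential "requests" take about four times as long as one, which is exactly the O(n) pattern:

```shell
# sleep 0.2 stands in for: curl -s "$url" -o /dev/null
# (a request that waits ~0.2 s for the server to respond).
start=$(date +%s%N)            # GNU date, nanosecond timestamp
for i in 1 2 3 4; do
  sleep 0.2                    # each "request" blocks before the next starts
done
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "Sequential elapsed: ${elapsed_ms} ms"   # roughly 800 ms for 4 requests
```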
Understanding how repeated commands like curl scale helps you write efficient scripts and shows you can think about performance in real tasks.
What if we used parallel curl requests instead of one after another? How would the time complexity change?
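One way to explore that question: with k requests in flight at once, the total work is still O(n), but the wall-clock time drops to roughly O(n/k). The sketch below uses `xargs -P` to run four stand-in "requests" (again `sleep 0.2` in place of a curl call) concurrently; in real use you would pipe urls.txt into something like `xargs -n1 -P4 curl -s -o /dev/null`, and recent curl versions (7.66+) also offer a built-in `-Z`/`--parallel` mode:

```shell
# Four 0.2 s "requests" run 4 at a time: wall-clock ~0.2 s, not ~0.8 s.
# Real-use equivalent (assumed urls.txt): xargs -n1 -P4 curl -s -o /dev/null < urls.txt
start=$(date +%s%N)
printf '0.2\n0.2\n0.2\n0.2\n' | xargs -n1 -P4 sleep
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "Parallel elapsed: ${elapsed_ms} ms"
```

Note that parallelism improves latency, not total work: the network and the servers still handle n requests.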