API interaction scripts in Bash Scripting - Time & Space Complexity
When writing scripts that talk to APIs, it is important to understand how the total running time grows as you request more data or make more requests. In other words, we want to know how the script's work scales with the number of API calls or the size of the data.
Analyze the time complexity of the following code snippet.
```bash
#!/bin/bash
urls=("https://api.example.com/data1" "https://api.example.com/data2" "https://api.example.com/data3")

for url in "${urls[@]}"; do
  response=$(curl -s "$url")
  echo "$response" | jq '.items[]'
done
```
This script loops over a list of API URLs, fetches data from each using curl, and processes the returned JSON items.
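To see the loop structure without making real network requests, here is a minimal sketch of the same control flow with the `curl` call stubbed out by a hypothetical `fetch` function (the stub and its JSON payload are illustrative assumptions, not part of the original script):

```shell
#!/bin/bash
# Same loop shape as the script above, but the network fetch is stubbed
# so the control flow can be traced without API access.
urls=("https://api.example.com/data1" "https://api.example.com/data2" "https://api.example.com/data3")

fetch() {  # hypothetical stand-in for: curl -s "$1"
  printf '{"items":["from %s"]}' "$1"
}

calls=0
for url in "${urls[@]}"; do
  response=$(fetch "$url")   # one simulated API call per URL
  calls=$((calls + 1))
  echo "$response"
done
echo "total calls: $calls"   # equals the number of URLs
```

Counting the calls this way makes the key fact visible: the loop body runs exactly once per URL.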
Identify the loops, recursion, or array traversals that repeat work.
- Primary operation: Looping over the list of URLs and making an API call for each.
- How many times: Once per URL in the list.
As the number of URLs increases, the script makes more API calls, so the total work grows directly with the number of URLs.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 API calls and processing |
| 100 | 100 API calls and processing |
| 1000 | 1000 API calls and processing |
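The table above can be reproduced with a tiny counting sketch. No requests are made; each iteration just increments a counter standing in for one `curl` + `jq` pipeline (the `count_ops` helper is hypothetical, introduced only for this illustration):

```shell
#!/bin/bash
# Simulate the loop for several input sizes and count operations.
count_ops() {
  local n=$1 i ops=0
  for ((i = 0; i < n; i++)); do
    ops=$((ops + 1))   # stands in for one API call and its processing
  done
  echo "$ops"
}

for n in 10 100 1000; do
  echo "n=$n -> $(count_ops "$n") API calls and processing"
done
```

The operation count matches the input size exactly at every scale, which is what the table expresses.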
Pattern observation: The work grows linearly with the number of URLs.
Time Complexity: O(n)
This means the time to finish grows in direct proportion to the number of API requests made.
[X] Wrong: "The script runs in constant time because each API call is independent."
[OK] Correct: Even though calls are independent, the total time adds up with each call, so more URLs mean more total time.
Understanding how a script's running time grows with input size lets you write efficient automation and predict performance, a valuable skill in real projects.
"What if we changed the script to fetch data from each URL in parallel? How would the time complexity change?"
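One possible parallel rewrite is sketched below, using background jobs and `wait`. This is an illustration, not a definitive answer: the `fetch_one` function is a hypothetical stub (the real script would run `curl -s "$url" | jq '.items[]'` in its place), and real-world speedup depends on network and API rate limits.

```shell
#!/bin/bash
# Launch every fetch as a background job, then wait for all of them.
urls=("https://api.example.com/data1" "https://api.example.com/data2" "https://api.example.com/data3")

fetch_one() {
  # Stubbed so the control flow runs without network access;
  # the sleep stands in for one request's latency.
  sleep 0.1
  echo "fetched $1"
}

for url in "${urls[@]}"; do
  fetch_one "$url" &   # run each fetch in the background
done
wait                   # block until every background job finishes

# Wall-clock time is now roughly the slowest single call rather than the
# sum of all calls, but the machine still performs O(n) total work:
# n processes are spawned and n fetches are executed.
```

So parallelism changes the elapsed time, not the total amount of work, and the asymptotic complexity in terms of operations performed remains O(n).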