# Testing async code in Swift: Time & Space Complexity
When testing async code, we want to know how completion time grows as the number of tasks increases.
We ask: How does waiting for asynchronous tasks affect overall execution time?
Analyze the time complexity of the following async code snippet.
```swift
func fetchData(ids: [Int]) async {
    for id in ids {
        let data = await fetchFromNetwork(id: id)
        print(data)
    }
}

func fetchFromNetwork(id: Int) async -> String {
    // Simulates a network delay of about one second
    try? await Task.sleep(nanoseconds: 1_000_000_000)
    return "Data for \(id)"
}
```
This code fetches data for each id one after another, waiting for each network call to finish before starting the next.
Identify the loops, recursion, or array traversals that repeat work.
- Primary operation: the for-loop that calls `fetchFromNetwork` for each id.
- How many times: once for each element in the input array `ids`.
Each network call waits about 1 second before returning. Since calls happen one after another, the total time is roughly the sum of the individual delays: about n × 1 second for n ids.
| Input Size (n) | Approx. Total Time (seconds) |
|---|---|
| 10 | ~10 seconds |
| 100 | ~100 seconds |
| 1000 | ~1000 seconds |
Pattern observation: Time grows linearly as the number of ids increases because each call waits fully before the next starts.
Time Complexity: O(n)
This means the total time grows directly in proportion to the number of async calls made one after another.
[X] Wrong: "Async calls always run at the same time, so total time stays constant no matter how many calls."
[OK] Correct: In this code, calls happen one after another because of the `await` inside the loop, so the delays add up instead of overlapping.
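To see the serialization concretely, here is a minimal, self-contained timing sketch. It uses a 0.1-second delay instead of 1 second so it runs quickly, and `slowTask` and `timeSequential` are illustrative names, not part of the original code:

```swift
import Foundation

// Stand-in for a network call: pauses for about 0.1 s.
func slowTask() async {
    try? await Task.sleep(nanoseconds: 100_000_000)
}

// Awaits n tasks one after another and returns the elapsed wall-clock time.
func timeSequential(n: Int) async -> TimeInterval {
    let start = Date()
    for _ in 0..<n {
        await slowTask() // each await finishes before the next starts
    }
    return Date().timeIntervalSince(start)
}
```

With n = 5 the elapsed time comes out to roughly 0.5 seconds, i.e. n × delay, which is exactly the linear growth described above.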
Understanding how async code runs step-by-step helps you explain performance clearly and shows you can reason about real-world delays and waiting times.
What if we changed the code to start all fetches at once and then wait for all to finish? How would the time complexity change?
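One way to sketch that change is with a task group. The names `fetchQuick` and `fetchDataConcurrently` are illustrative (and the delay is shortened to 0.1 s for the demo); the key idea is that every fetch is started before any result is awaited, so the waits overlap. Wall-clock time becomes roughly one network delay regardless of n, while the total work performed is still O(n):

```swift
import Foundation

// Simulated network call, mirroring fetchFromNetwork above
// but with a shorter 0.1 s delay so the sketch runs quickly.
func fetchQuick(id: Int) async -> String {
    try? await Task.sleep(nanoseconds: 100_000_000)
    return "Data for \(id)"
}

func fetchDataConcurrently(ids: [Int]) async -> [String] {
    await withTaskGroup(of: String.self) { group in
        for id in ids {
            group.addTask { await fetchQuick(id: id) } // all start at once
        }
        var results: [String] = []
        for await data in group {
            results.append(data) // arrives in completion order, not input order
        }
        return results
    }
}
```

Because the child tasks run concurrently, fetching 5 ids takes about 0.1 seconds total instead of 0.5. If the results must come back in input order, `async let` or collecting `(id, data)` pairs and sorting are common alternatives.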