Structured concurrency model in Swift - Time & Space Complexity
We want to understand how running time changes under Swift's structured concurrency model.
Specifically: how does adding more concurrent tasks affect the total work done?
Analyze the time complexity of the following Swift code, which uses a task group.
```swift
func fetchData(n: Int) async {
    await withTaskGroup(of: Void.self) { group in
        for i in 1...n {
            group.addTask {
                await process(i)
            }
        }
    }
}

func process(_ value: Int) async {
    // Simulate ~1 ms of work
    try? await Task.sleep(nanoseconds: 1_000_000)
}
```
This code runs n tasks concurrently, each doing some work asynchronously.
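To see this in action, here is a minimal driver sketch. It assumes the code lives in a `main.swift` file, where top-level `await` is allowed; the function bodies are repeated so the sketch compiles on its own, and the `1_000` value and timing print are illustrative, not part of the original example.

```swift
// Copies of the functions above, repeated so this sketch is self-contained.
func process(_ value: Int) async {
    try? await Task.sleep(nanoseconds: 1_000_000)  // ~1 ms of simulated work
}

func fetchData(n: Int) async {
    await withTaskGroup(of: Void.self) { group in
        for i in 1...n {
            group.addTask { await process(i) }
        }
    }
}

// Time the whole group. Because the task group awaits all of its children
// before returning, `elapsed` covers every task.
let start = ContinuousClock.now
await fetchData(n: 1_000)
let elapsed = ContinuousClock.now - start
print("Launched 1_000 tasks in \(elapsed)")
```

Because the 1 ms sleeps overlap, the printed duration is typically far less than the 1 second that 1,000 sequential sleeps would take.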
Look for repeated actions that affect time.
- Primary operation: launching and running n concurrent tasks.
- How many times: exactly n times, once per task.
As n grows, the number of tasks grows linearly.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 tasks launched and processed |
| 100 | 100 tasks launched and processed |
| 1000 | 1000 tasks launched and processed |
Pattern observation: The total work grows directly with n, so doubling n doubles the work.
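The table's pattern can be checked directly. Below is a sketch (the `Counter` actor and `fetchDataCounted` are hypothetical names, not from the original) that counts how many times the work actually runs: it is always exactly n, no matter how the tasks interleave.

```swift
// Hypothetical counting variant: an actor serializes access to `count`,
// so concurrent increments from the task group cannot race.
actor Counter {
    private(set) var count = 0
    func increment() { count += 1 }
}

// Same shape as fetchData, but returns how many tasks actually ran.
func fetchDataCounted(n: Int) async -> Int {
    let counter = Counter()
    await withTaskGroup(of: Void.self) { group in
        for _ in 1...n {
            group.addTask { await counter.increment() }
        }
    }
    // The task group has awaited all children, so the count is final here.
    return await counter.count
}
```

Running `fetchDataCounted(n: 100)` yields 100: concurrency changes *when* the work happens, not *how much* work there is.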
Time Complexity: O(n)
The total work (creating, scheduling, and completing each task) grows in direct proportion to the number of tasks. Wall-clock time may be much shorter than n sequential runs because the tasks overlap, but the work the runtime performs still scales linearly.
[X] Wrong: "Because tasks run concurrently, adding more tasks doesn't increase total time."
[OK] Correct: Even if tasks run at the same time, the system still needs to start and manage each task, so total work grows with the number of tasks.
Understanding how concurrency affects time helps you explain how programs scale and manage work efficiently.
What if we changed from launching all tasks concurrently to launching them one after another? How would the time complexity change?
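One way to sketch the sequential variant (a hypothetical rewrite, not from the original; `fetchDataSequentially` is an assumed name, and `process` is repeated so the sketch compiles on its own):

```swift
// Same simulated work as the original example.
func process(_ value: Int) async {
    try? await Task.sleep(nanoseconds: 1_000_000)  // ~1 ms of simulated work
}

// Sequential variant: each await completes before the next iteration begins,
// so no two process calls ever overlap.
func fetchDataSequentially(n: Int) async {
    for i in 1...n {
        await process(i)
    }
}
```

The total work is still O(n), but now the wall-clock time is also the sum of the individual durations, roughly n milliseconds here, because nothing overlaps. The asymptotic work is unchanged; what changes is latency.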