Common concurrency patterns in Go - Time & Space Complexity
When using concurrency in Go, it's important to understand how the program's work grows as more tasks run at the same time. Here we look at how the total running time changes as we add more concurrent parts.
Analyze the time complexity of the following code snippet.
```go
package main

import (
	"fmt"
	"sync"
)

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	// simulate work
}

func main() {
	var wg sync.WaitGroup
	n := 5
	wg.Add(n)
	for i := 0; i < n; i++ {
		go worker(i, &wg)
	}
	wg.Wait()
	fmt.Println("All workers done")
}
```
This code runs several workers at the same time using goroutines and waits for all to finish.
- Primary operation: starting and running `n` goroutines concurrently.
- How many times: the loop runs `n` times to launch workers.
As we increase the number of workers, the program launches more goroutines, but they run at the same time.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 goroutines started, run concurrently |
| 100 | 100 goroutines started, run concurrently |
| 1000 | 1000 goroutines started, run concurrently |
Pattern observation: The number of goroutines grows linearly with input, but total time depends on how they run together.
Time Complexity: O(n)
This means the total work grows linearly with the number of concurrent tasks started.
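A minimal sketch to confirm the linear growth: if each worker records one unit of work, the total count comes out equal to `n`. (The function name `countWork` and the atomic counter are illustrative, not part of the original snippet.)

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countWork launches n goroutines that each record one unit of work,
// so the total number of operations grows linearly with n: O(n).
func countWork(n int) int64 {
	var ops int64
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			atomic.AddInt64(&ops, 1) // one unit of simulated work
		}()
	}
	wg.Wait()
	return ops
}

func main() {
	for _, n := range []int{10, 100, 1000} {
		fmt.Printf("n=%d -> %d operations\n", n, countWork(n))
	}
}
```

Doubling `n` doubles the count, matching the table above.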
[X] Wrong: "Running many goroutines always makes the program faster, so the time stays the same."
[OK] Correct: Even though goroutines run concurrently, starting many of them adds scheduling and synchronization overhead, and the actual speedup is limited by the available CPU cores.
Understanding how concurrency affects time helps you explain how programs handle many tasks and manage resources efficiently.
"What if we added a channel to limit how many goroutines run at once? How would the time complexity change?"
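One possible answer, sketched with a buffered channel used as a semaphore (the names `runLimited` and `sem` are illustrative): the loop still launches `n` goroutines, so the total work stays O(n), but at most `limit` of them do their work at once, so for tasks of equal duration the wall-clock time behaves more like O(n/limit).

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runLimited starts n goroutines but uses a buffered channel as a
// semaphore so that at most limit of them are active at once.
// It returns the highest concurrency level actually observed.
func runLimited(n, limit int) int32 {
	sem := make(chan struct{}, limit) // capacity = max concurrent workers
	var active, peak int32
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			sem <- struct{}{} // acquire a slot (blocks when all are taken)
			cur := atomic.AddInt32(&active, 1)
			// record the peak number of simultaneously active workers
			for {
				p := atomic.LoadInt32(&peak)
				if cur <= p || atomic.CompareAndSwapInt32(&peak, p, cur) {
					break
				}
			}
			atomic.AddInt32(&active, -1)
			<-sem // release the slot
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&peak)
}

func main() {
	fmt.Println("peak concurrency:", runLimited(100, 3))
}
```

The channel's capacity is the concurrency limit: a send blocks once `limit` slots are taken, and a receive frees a slot for the next waiting goroutine.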