Go programming (~15 mins)

Common concurrency patterns in Go - Deep Dive

Overview - Common concurrency patterns
What is it?
Common concurrency patterns are ways to organize multiple tasks that run at the same time in a program. They help manage how these tasks communicate, share data, and avoid problems like data races, deadlocks, or waiting forever. In Go, concurrency means structuring a program as independently executing functions (goroutines) that can make progress at the same time, which makes programs faster and more responsive. These patterns provide tested solutions to common challenges when working with many tasks at once.
Why it matters
Without concurrency patterns, programs that do many things at once can become confusing, buggy, or slow. Imagine trying to cook many dishes at the same time without a plan—things get messy and some dishes might burn or get cold. Concurrency patterns give a clear plan to handle multiple tasks safely and efficiently. This makes programs faster, more reliable, and easier to maintain, which is important for real-world apps like web servers or data processors.
Where it fits
Before learning concurrency patterns, you should understand basic Go syntax, functions, and goroutines (Go's way to run tasks concurrently). After mastering these patterns, you can explore advanced topics like synchronization primitives, context cancellation, and designing scalable distributed systems.
Mental Model
Core Idea
Concurrency patterns are like recipes that organize how multiple tasks run and talk to each other safely and efficiently.
Think of it like...
Think of a busy kitchen where several chefs work together. Each chef has a role, and they pass ingredients or dishes in an organized way to avoid bumping into each other or spoiling the food. Concurrency patterns are the kitchen's workflow plans that keep everything running smoothly.
┌───────────────┐       ┌───────────────┐       ┌───────────────┐
│   Producer    │──────▶│   Channel     │──────▶│   Consumer    │
└───────────────┘       └───────────────┘       └───────────────┘

This shows a simple pipeline pattern where data flows from a producer through a channel to a consumer.
Build-Up - 7 Steps
1
FoundationUnderstanding goroutines basics
Concept: Learn how to start multiple tasks that run at the same time using goroutines.
In Go, you run a function concurrently by placing the keyword 'go' before the call. For example:

    func sayHello() {
        fmt.Println("Hello")
    }

    func main() {
        go sayHello()           // runs concurrently
        time.Sleep(time.Second) // crude wait so the goroutine can finish
    }

This starts sayHello in a new goroutine, which runs alongside the main function.
Result
The program prints 'Hello' while the main function waits briefly to let the goroutine finish.
Understanding goroutines is the foundation of concurrency in Go because they let you run many tasks at once without complex thread management.
2
FoundationUsing channels for communication
Concept: Channels let goroutines send and receive data safely between each other.
Channels are like pipes that connect goroutines. One goroutine can send data into a channel, and another can receive it. Example:

    var ch = make(chan string) // package-level so both functions can reach it

    func sender() {
        ch <- "data"
    }

    func receiver() {
        msg := <-ch
        fmt.Println(msg)
    }

    func main() {
        go sender()
        go receiver()
        time.Sleep(time.Second)
    }

This passes the string "data" from sender to receiver.
Result
The program prints 'data' showing communication between goroutines.
Channels provide a safe way to share data without conflicts, avoiding common bugs in concurrent programs.
3
IntermediatePipeline pattern for task chaining
🤔Before reading on: do you think pipelines can improve performance by breaking tasks into stages? Commit to your answer.
Concept: Pipelines connect multiple stages where each stage processes data and passes it on, improving concurrency and clarity.
A pipeline has several goroutines connected by channels. Each goroutine does part of the work and sends results to the next. Example:

    func gen(nums ...int) <-chan int {
        out := make(chan int)
        go func() {
            for _, n := range nums {
                out <- n
            }
            close(out)
        }()
        return out
    }

    func sq(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            for n := range in {
                out <- n * n
            }
            close(out)
        }()
        return out
    }

    func main() {
        nums := gen(2, 3, 4)
        squares := sq(nums)
        for s := range squares {
            fmt.Println(s)
        }
    }

This pipeline generates numbers and squares them in separate stages.
Result
The program prints 4, 9, 16 each on a new line.
Pipelines help organize complex tasks into simple, concurrent steps that improve performance and code readability.
4
IntermediateWorker pool pattern for load balancing
🤔Before reading on: do you think worker pools limit the number of concurrent tasks? Commit to your answer.
Concept: Worker pools run a fixed number of goroutines to handle many tasks, balancing load and controlling resource use.
Instead of starting a goroutine per task, a pool has a fixed number of workers pulling tasks from a shared channel. Example:

    func worker(id int, jobs <-chan int, results chan<- int) {
        for j := range jobs {
            fmt.Printf("worker %d processing job %d\n", id, j)
            results <- j * 2
        }
    }

    func main() {
        jobs := make(chan int, 5)
        results := make(chan int, 5)
        for w := 1; w <= 3; w++ {
            go worker(w, jobs, results)
        }
        for j := 1; j <= 5; j++ {
            jobs <- j
        }
        close(jobs)
        for a := 1; a <= 5; a++ {
            fmt.Println(<-results)
        }
    }

This runs 3 workers to process 5 jobs concurrently.
Result
The program prints worker messages and results like 2, 4, 6, 8, 10 in any order.
Worker pools prevent resource overload by limiting concurrency and distributing work evenly.
5
IntermediateSelect statement for multiple channel waits
🤔Before reading on: do you think select waits for all channels or just one? Commit to your answer.
Concept: The select statement lets a goroutine wait on multiple channels and react to whichever is ready first.
Select works like a traffic controller for channels. Example:

    func main() {
        ch1 := make(chan string)
        ch2 := make(chan string)

        go func() {
            time.Sleep(500 * time.Millisecond)
            ch1 <- "from ch1"
        }()
        go func() {
            time.Sleep(300 * time.Millisecond)
            ch2 <- "from ch2"
        }()

        select {
        case msg1 := <-ch1:
            fmt.Println(msg1)
        case msg2 := <-ch2:
            fmt.Println(msg2)
        }
    }

This prints the message from whichever channel sends first.
Result
The program prints 'from ch2' because that goroutine sleeps for only 300 ms and therefore sends before the 500 ms sender on ch1.
Select enables responsive programs that handle multiple events without blocking unnecessarily.
6
AdvancedContext pattern for cancellation control
🤔Before reading on: do you think goroutines keep running after you no longer need their results? Commit to your answer.
Concept: Context lets you signal multiple goroutines to stop work early, avoiding wasted effort and leaks.
Context carries cancellation signals and deadlines. Example:

    func worker(ctx context.Context) {
        for {
            select {
            case <-ctx.Done():
                fmt.Println("worker stopped")
                return
            default:
                fmt.Println("working")
                time.Sleep(100 * time.Millisecond)
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go worker(ctx)
        time.Sleep(300 * time.Millisecond)
        cancel() // signal the worker to stop
        time.Sleep(100 * time.Millisecond)
    }

This stops the worker goroutine cleanly.
Result
The program prints 'working' a few times then 'worker stopped'.
Using context prevents goroutines from running forever and leaking resources, which is critical in real applications.
7
ExpertAvoiding race conditions with sync primitives
🤔Before reading on: do you think channels alone prevent all data races? Commit to your answer.
Concept: Sometimes you need locks or atomic operations to protect shared data beyond channels.
Channels help with communication but don't protect shared variables accessed by multiple goroutines. Example of a race:

    var counter int

    func increment(wg *sync.WaitGroup) {
        defer wg.Done()
        counter++ // unsafe: not an atomic operation
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go increment(&wg)
        }
        wg.Wait()
        fmt.Println(counter) // often less than 1000 due to the race
    }

Fix with a mutex:

    var mu sync.Mutex

    func incrementSafe(wg *sync.WaitGroup) {
        defer wg.Done()
        mu.Lock()
        counter++
        mu.Unlock()
    }

This ensures correct counting. Go's race detector (go run -race) will flag the unsafe version.
Result
Without mutex, counter is incorrect; with mutex, counter is 1000.
Knowing when to use locks versus channels is key to writing correct and efficient concurrent programs.
Under the Hood
Go's runtime manages goroutines as lightweight threads multiplexed onto OS threads. Channels use internal queues and synchronization to safely pass data between goroutines without explicit locks. The scheduler switches goroutines efficiently to maximize CPU use. Select statements use runtime mechanisms to wait on multiple channels without busy waiting. Context propagates cancellation signals through a tree of goroutines. Sync primitives like mutexes use atomic CPU instructions to protect shared memory.
Why designed this way?
Go was designed to make concurrency simple and safe by providing goroutines and channels as first-class features. This avoids complex thread management and common bugs in other languages. The design favors communication over shared memory to reduce errors. Context was added to handle cancellation cleanly in networked and server applications. Sync primitives exist for cases where shared memory is unavoidable.
┌───────────────┐       ┌───────────────┐       ┌───────────────┐
│   Goroutine   │──────▶│   Channel     │──────▶│   Goroutine   │
│   Scheduler   │◀──────│  (buffered)   │◀──────│   Scheduler   │
└───────────────┘       └───────────────┘       └───────────────┘

┌───────────────┐
│   Context     │
│ Cancellation  │
└──────┬────────┘
       │
       ▼
┌────────────────┐
│ Goroutine Tree │
└────────────────┘
Myth Busters - 4 Common Misconceptions
Quick: Do channels automatically protect shared variables from race conditions? Commit to yes or no.
Common Belief:Channels always prevent race conditions because they handle communication safely.
Reality:Channels only protect data passed through them, not shared variables accessed outside channels.
Why it matters:Assuming channels protect all shared data leads to subtle bugs and unpredictable program behavior.
Quick: Do goroutines stop automatically when the main function ends? Commit to yes or no.
Common Belief:When main ends, all goroutines stop immediately.
Reality:Goroutines keep running unless the program exits or they receive a stop signal like context cancellation.
Why it matters:Ignoring this causes goroutines to leak and waste resources, leading to crashes or slowdowns.
Quick: Does using many goroutines always make a program faster? Commit to yes or no.
Common Belief:More goroutines always mean better performance.
Reality:Too many goroutines can cause overhead, contention, and slowdowns if not managed properly.
Why it matters:Blindly spawning goroutines wastes CPU and memory, hurting performance instead of helping.
Quick: Does the select statement wait for all channels to be ready before proceeding? Commit to yes or no.
Common Belief:Select waits until all channels have data before choosing one.
Reality:Select proceeds as soon as any one channel is ready, picking randomly if multiple are ready.
Why it matters:Misunderstanding select can cause logic errors and missed events in concurrent programs.
Expert Zone
1
Channels can be buffered or unbuffered, and choosing the right type affects blocking behavior and performance subtly.
2
Context cancellation propagates through goroutine trees, but improper use can cause leaks if not handled carefully.
3
Using sync primitives alongside channels requires careful design to avoid deadlocks and race conditions.
When NOT to use
Avoid concurrency patterns when tasks are simple and sequential, as concurrency adds complexity and overhead. For shared state, consider using atomic operations or specialized concurrent data structures instead of channels. In distributed systems, use message queues or event streams rather than local concurrency patterns.
Production Patterns
In real systems, pipelines are used for data processing stages, worker pools handle HTTP requests or jobs, and context manages request lifecycles and cancellations. Select is used for multiplexing IO and timers. Mutexes protect caches or counters. Combining these patterns carefully leads to scalable, maintainable concurrent applications.
Connections
Event-driven programming
Builds-on
Concurrency patterns in Go often implement event-driven ideas where tasks react to events or messages, improving responsiveness.
Operating system threads
Underlying mechanism
Goroutines are multiplexed onto OS threads, so understanding OS threads helps grasp Go's concurrency efficiency and limits.
Project management workflows
Analogous pattern
Just like concurrency patterns organize tasks and communication in code, project workflows organize people and tasks to avoid conflicts and delays.
Common Pitfalls
#1 Starting goroutines without synchronization causes the main program to exit before they finish.
Wrong approach:

    func main() {
        go func() { fmt.Println("Hello") }()
        // main returns immediately; the goroutine may never run
    }

Correct approach:

    func main() {
        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("Hello")
        }()
        wg.Wait()
    }

Root cause: The main function exits immediately, killing all goroutines; waiting ensures they complete.
#2 Accessing shared variables from multiple goroutines without locks causes race conditions.
Wrong approach:

    var counter int

    func increment() {
        counter++ // unsafe
    }

Correct approach:

    var mu sync.Mutex

    func increment() {
        mu.Lock()
        counter++
        mu.Unlock()
    }

Root cause: Concurrent reads and writes without synchronization lead to unpredictable results.
#3 Closing a channel multiple times causes a panic.
Wrong approach:

    close(ch)
    close(ch) // panic: close of closed channel

Correct approach:

    close(ch) // close exactly once, from the sending side

Root cause: Channels must be closed exactly once to signal no more data; closing again crashes the program.
Key Takeaways
Goroutines are lightweight threads that let you run many tasks concurrently in Go.
Channels provide safe communication between goroutines, avoiding shared memory conflicts.
Common concurrency patterns like pipelines and worker pools organize tasks for better performance and clarity.
Context helps manage cancellation and timeouts, preventing resource leaks in concurrent programs.
Understanding when to use locks versus channels is essential to avoid race conditions and deadlocks.