Worker Pool Pattern in Go: What It Is and How It Works
The worker pool pattern in Go is a way to manage multiple goroutines (workers) that process tasks from a shared job queue concurrently. It controls the number of active workers so the program uses resources efficiently and avoids overload.
How It Works
Imagine you have many tasks to do, like packing boxes in a warehouse. Instead of one person doing all the work, you have a team of workers. Each worker picks a box from a shared pile and packs it. When done, they pick the next box until all are packed.
In Go, the worker pool pattern works the same way. You create a fixed number of goroutines (workers) that wait for tasks on a shared channel (job queue). Each worker takes a task, processes it, and then waits for the next one. This setup balances the workload and prevents creating too many goroutines that could slow down the program.
Example
This example shows a worker pool with 3 workers processing 5 jobs. Each worker prints when it starts and finishes a job.
package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		fmt.Printf("Worker %d started job %d\n", id, j)
		time.Sleep(time.Second) // simulate work
		fmt.Printf("Worker %d finished job %d\n", id, j)
	}
}

func main() {
	jobs := make(chan int, 5)
	var wg sync.WaitGroup

	// Start 3 workers
	for w := 1; w <= 3; w++ {
		wg.Add(1)
		go worker(w, jobs, &wg)
	}

	// Send 5 jobs
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs) // no more jobs

	wg.Wait() // wait for all workers to finish
}
When to Use
Use the worker pool pattern when you have many tasks to process concurrently but want to limit how many run at the same time. This helps avoid using too much memory or CPU.
Common cases include handling web requests, processing files, or running background jobs where tasks can be done independently but need controlled concurrency.
Key Points
- A worker pool limits the number of active goroutines to control resource use.
- Workers receive tasks from a shared channel and process them independently.
- It improves efficiency and prevents overload in concurrent programs.
- Use synchronization such as sync.WaitGroup to wait for all workers to finish.