1) Concurrency vs Parallelism
Concurrency structures a program so multiple tasks are in progress. Parallelism runs tasks at the same time on multiple CPU cores. Go is designed for concurrency and achieves parallelism when GOMAXPROCS > 1.
2) Speedup and Efficiency
Speedup is how much faster a job finishes with N workers: the time one worker takes divided by the time N workers take. Efficiency is speedup divided by N, and shows how well those workers are used.
Simple example
Imagine you're running a small pizza shop. One chef can bake one pizza in 10 minutes. That's your base case - one worker, one result, 10 minutes.
- If you ask 10 chefs to bake the same pizza, it won't be done any faster - they'll just bump into each other. That's zero speedup and terrible efficiency.
- If you give each chef their own pizza, they can all bake at the same time. You'll get 10 pizzas in 10 minutes - that's 10× speedup and 100% efficiency.
The goal of parallel programming is to find work like the second case - tasks that can be done at the same time without waiting or sharing too much.
Why it matters in Go
In Go, goroutines make it easy to run many things in parallel. But just like the chefs, goroutines only help if each one has its own job to do. If they fight over shared data or wait on each other, you get little or no speedup.
Amdahl's Law & Parallel Efficiency
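The classic result here is Amdahl's Law: if a fraction p of a job can run in parallel, then N workers give a speedup of at most 1 / ((1 − p) + p/N). Even with unlimited workers, the serial part (1 − p) caps the speedup at 1 / (1 − p). A small sketch (the function name and example numbers are just for illustration):

package main

import "fmt"

// amdahl returns the theoretical speedup with n workers when a
// fraction p (between 0 and 1) of the work can run in parallel.
func amdahl(p, n float64) float64 {
	return 1 / ((1 - p) + p/n)
}

func main() {
	s := amdahl(0.9, 10) // 90% parallel work, 10 workers
	fmt.Printf("speedup: %.2f, efficiency: %.0f%%\n", s, 100*s/10)
	// speedup: 5.26, efficiency: 53% - the 10% serial part dominates
}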
3) Context Switching and Scheduling
Context switching means pausing one task and resuming another. Cooperative scheduling means tasks politely take turns; preemptive scheduling means the system forces them to take turns. Each switch has a cost, so avoid creating thousands of tiny tasks that do almost nothing.
Taking turns on one CPU
Imagine two friends sharing one laptop. Only one can type at a time.
- In cooperative scheduling, each friend decides when to stop typing and hand the laptop over. This is polite but risky - if one never stops, the other never gets a turn.
- In preemptive scheduling, a timer interrupts them after a few seconds and forces a swap. Everyone gets a fair share, even if someone forgets to yield.
Go originally used mostly cooperative scheduling - a goroutine yielded only at certain points, such as channel operations, blocking calls, or function calls. Since Go 1.14, the runtime can also preempt long-running goroutines asynchronously, so even a tight loop can't starve the others.
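You can see this with a deliberately hostile goroutine. A minimal sketch (GOMAXPROCS is forced to 1 so the goroutines must share a single CPU):

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	runtime.GOMAXPROCS(1) // one CPU: goroutines must take turns
	go func() {
		for {
			// tight loop with no function calls - it never yields voluntarily
		}
	}()
	time.Sleep(100 * time.Millisecond)
	// Before Go 1.14 this line might never run: the loop above would hog
	// the only CPU forever. With asynchronous preemption, main gets its turn.
	fmt.Println("main goroutine got scheduled despite the busy loop")
}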
Why it matters
Each switch between goroutines has a small cost. If your program makes millions of tiny goroutines that constantly switch, it wastes time managing turns instead of doing real work. The goal is to find a balance - enough parallelism to stay busy, but not so much that switching dominates.
4) Goroutines and Threads (M:N Model)
A goroutine is a lightweight, independent function that runs alongside others. Go runs many goroutines on a small number of real OS threads using its M:N scheduler, making massive concurrency possible with little overhead.
What is a goroutine?
A goroutine is Go's version of a tiny, independent worker - like a small task that runs in the background. You start one simply by writing the keyword go before a function call. Every Go program begins with one special worker called the main goroutine, which runs your main() function.
When the main goroutine finishes, the whole program ends - even if other goroutines are still running. That's why goroutines often coordinate using tools like channels or WaitGroups to make sure all work is done before exiting.
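A minimal sketch of that coordination, using a WaitGroup so main waits:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // start a second goroutine with the go keyword
		defer wg.Done()
		fmt.Println("background work finished")
	}()
	wg.Wait() // without this, main could exit before the goroutine even runs
}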
How Go runs so many tasks
Goroutines aren't real OS threads - they're much lighter. The Go runtime maps many goroutines (M) onto a smaller pool of OS threads (N). This is called the M:N model.
Think of it like a kitchen: thousands of orders (goroutines) handled by a few chefs (threads). The Go runtime is the manager that keeps assigning dishes to whichever chef is free. This keeps all CPUs busy without wasting memory.
- Each goroutine starts with a tiny stack (only a few KB) that grows automatically as needed.
- Switching between goroutines is fast because it happens in Go's runtime, not the operating system.
- This makes it possible to run thousands or even millions of concurrent goroutines efficiently.
Why it matters
Go's M:N model makes concurrency cheap and scalable. You can spin up thousands of goroutines without worrying about running out of system threads or memory.
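To make that concrete, here is a sketch that spawns 100,000 goroutines - far more than any machine has OS threads - and waits for them all:

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 100000
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done() // each goroutine costs only a few KB of stack
		}()
	}
	wg.Wait()
	fmt.Println("ran", n, "goroutines on a handful of OS threads")
}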
5) Channels: Synchronous vs Asynchronous
Channels connect goroutines and let them share data safely. Unbuffered channels are synchronous (sender and receiver wait for each other). Buffered channels are asynchronous, up to their capacity.
Unbuffered channel (synchronous)
An unbuffered channel is synchronous: the sender and receiver must both be ready at the same moment. Sending or receiving will block until the other side arrives.
package main
import "fmt"
func main() {
	ch := make(chan string) // unbuffered, synchronous
	// No other goroutine is ready to receive, so this send blocks forever.
	// The runtime detects it: "fatal error: all goroutines are asleep - deadlock!"
	ch <- "hello"
	fmt.Println(<-ch) // never reached
}
Buffered channel (asynchronous)
A buffered channel is asynchronous: the sender can send values even if no receiver is waiting, as long as the buffer is not full.
package main
import "fmt"
func main() {
ch := make(chan string, 1) // buffered, asynchronous
// Send works immediately - buffer has space.
ch <- "hello"
// Receive later - no deadlock.
fmt.Println(<-ch)
}
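The flip side is that a buffered channel's slack is finite. A small sketch showing a send that would block because the buffer is full (the select/default is just a non-blocking way to demonstrate it):

package main

import "fmt"

func main() {
	ch := make(chan int, 2) // room for two values
	ch <- 1
	ch <- 2 // buffer now full
	select {
	case ch <- 3:
		fmt.Println("sent a third value")
	default:
		fmt.Println("buffer full: a plain send here would block") // this branch runs
	}
	fmt.Println(<-ch, <-ch) // drain the buffer: prints 1 2
}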
Summary
- Synchronous (unbuffered) => sender and receiver must meet at the same time.
- Asynchronous (buffered) => sender and receiver work independently, up to buffer size.
- Buffered channels synchronize only when necessary - a sender blocks only if the buffer is full, and a receiver blocks only if the buffer is empty.
- Deadlock happens if a synchronous send or receive waits forever with no partner.
6) Common Concurrency Pitfalls
Concurrency is powerful but tricky. These problems happen when goroutines block or interfere with each other in the wrong ways.
Race Condition
A race condition happens when two or more goroutines access the same data at the same time without proper synchronization. This leads to unpredictable behavior because the outcome depends on the exact timing of the goroutines.
package main
import (
"fmt"
"sync"
)
func main() {
var n int
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
for i := 0; i < 1000000; i++ {
n++ // unsynchronized
}
}()
go func() {
defer wg.Done()
for i := 0; i < 1000000; i++ {
n++ // unsynchronized
}
}()
wg.Wait()
fmt.Println("final count:", n)
}
You might expect 2 million, but you'll often get less: n++ is a read-modify-write, and concurrent increments can overwrite each other. Fixing this requires synchronization using a mutex to make sure only one goroutine modifies n at a time.
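Go's race detector catches this kind of bug automatically: run the program with go run -race (or go test -race) and it reports the conflicting accesses. The mutex version looks like this: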
package main
import (
"fmt"
"sync"
)
func main() {
var n int
var mu sync.Mutex
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
for i := 0; i < 1000000; i++ {
mu.Lock()
n++
mu.Unlock()
}
}()
go func() {
defer wg.Done()
for i := 0; i < 1000000; i++ {
mu.Lock()
n++
mu.Unlock()
}
}()
wg.Wait()
fmt.Println("final count:", n)
}
Deadlock
Deadlock happens when all goroutines are waiting for each other and none can continue. The program freezes because no one can make progress.
package main
func main() {
ch := make(chan int)
ch <- 42 // blocks forever: no one is receiving
}
The program above blocks forever because there's no goroutine to receive the value; the runtime detects this and crashes with "fatal error: all goroutines are asleep - deadlock!". Deadlocks are common with channels or locks when the expected counterpart is missing.
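The lock flavor of the same problem is two goroutines that each hold one lock and wait for the other's. A minimal sketch (the sleeps only make the unlucky interleaving reliable):

package main

import (
	"sync"
	"time"
)

func main() {
	var a, b sync.Mutex
	go func() {
		a.Lock()
		time.Sleep(10 * time.Millisecond)
		b.Lock() // waits for main, which is waiting for a
		b.Unlock()
		a.Unlock()
	}()
	b.Lock()
	time.Sleep(10 * time.Millisecond)
	a.Lock() // waits for the goroutine, which is waiting for b
	a.Unlock()
	b.Unlock()
}

The standard cure is to always acquire locks in the same order. The channel deadlock, meanwhile, is fixed by giving the send a partner: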
package main
import "fmt"
func main() {
ch := make(chan int)
go func() {
ch <- 42 // safe: there's a receiver now
}()
fmt.Println(<-ch)
}
7) Synchronization Tools in Go
Go gives you low-level tools like Mutex, Once, WaitGroup, and Cond to safely manage shared state across goroutines. But idiomatic Go prefers using channels for coordination instead of locking.
Mutex: Exclusive Access
Use a Mutex when you need goroutines to access a shared variable one at a time. This is manual locking, and it is prone to bugs - forgotten unlocks, deadlocks - if not handled carefully.
package main
import (
"fmt"
"sync"
)
func main() {
var mu sync.Mutex
count := 0
var wg sync.WaitGroup
wg.Add(2)
for i := 0; i < 2; i++ {
go func() {
defer wg.Done()
for j := 0; j < 1000; j++ {
mu.Lock()
count++
mu.Unlock()
}
}()
}
wg.Wait()
fmt.Println("Final count:", count)
}
WaitGroup: Wait for Goroutines
A WaitGroup helps wait for multiple goroutines to finish. Use Add, Done, and Wait to synchronize. Often used with worker pools.
package main
import (
"fmt"
"sync"
)
func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
defer wg.Done()
for job := range jobs {
fmt.Printf("worker %d processing job %d\n", id, job)
}
}
func main() {
jobs := make(chan int, 5)
var wg sync.WaitGroup
for w := 1; w <= 2; w++ {
wg.Add(1)
go worker(w, jobs, &wg)
}
for j := 1; j <= 3; j++ {
jobs <- j
}
close(jobs)
wg.Wait()
}
Once: One-Time Initialization
Use sync.Once to guarantee that a function (like setup) runs only once, even across multiple goroutines. Often used for initializing singletons or config.
package main
import (
"fmt"
"sync"
)
var once sync.Once
var config string
func loadConfig() {
config = "loaded"
fmt.Println("config initialized")
}
func main() {
var wg sync.WaitGroup
for i := 0; i < 3; i++ {
wg.Add(1)
go func() {
defer wg.Done()
once.Do(loadConfig)
fmt.Println("config:", config)
}()
}
wg.Wait()
}
Cond: Wait Until Condition
A Cond (condition variable) lets goroutines wait until some shared state changes. Example: a consumer waits for data to appear in a queue.
package main
import (
"fmt"
"sync"
)
func main() {
var mu sync.Mutex
cond := sync.NewCond(&mu)
queue := []int{}
var wg sync.WaitGroup
wg.Add(1)
// consumer
go func() {
defer wg.Done()
mu.Lock()
for len(queue) == 0 {
cond.Wait() // unlocks mu, waits, then reacquires it
}
fmt.Println("Got item:", queue[0])
mu.Unlock()
}()
// producer
mu.Lock()
queue = append(queue, 42)
cond.Signal() // wake one goroutine waiting on cond
mu.Unlock()
wg.Wait()
}
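Signal wakes one waiting goroutine; cond.Broadcast wakes them all, which is the right call when many consumers wait on the same condition. Note the for loop around cond.Wait(): a woken goroutine must re-check the condition, because another goroutine may have consumed the item first.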
Channels vs Locks
Go's philosophy: "Don’t communicate by sharing memory. Share memory by communicating." That means prefer channels to coordinate goroutines instead of using mutexes when possible.
package main
import (
"fmt"
)
func main() {
ch := make(chan int)
go func() {
ch <- 42 // send data
}()
fmt.Println(<-ch) // receive data
}
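As a sketch of that philosophy applied to the racy counter from section 6: instead of locking n, let one goroutine own the count while the rest communicate increments over a channel (the channel names here are illustrative):

package main

import "fmt"

func main() {
	inc := make(chan struct{})
	done := make(chan int)
	go func() { // this goroutine is the only one that ever touches n
		n := 0
		for range inc {
			n++
		}
		done <- n
	}()
	for i := 0; i < 1000; i++ {
		inc <- struct{}{} // main communicates; it never shares n
	}
	close(inc)
	fmt.Println("final count:", <-done)
}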