Go: Communication and Signaling Between Goroutines

The main way goroutines communicate in Go is through channels. But channels aren't the only way for goroutines to signal each other. Let's try a different approach!
Signaling
Let's say we have a goroutine that generates a random number between 1 and 100:
num := 0
go func() {
num = 1 + rand.IntN(100)
}()
And the second one checks if the number is lucky or not:
go func() {
if num%7 == 0 {
fmt.Printf("Lucky number %d!\n", num)
} else {
fmt.Printf("Unlucky number %d...\n", num)
}
}()
The second goroutine will only work correctly if the first one has already set the number. So, we need to find a way to synchronize them. For example, we can make num a channel:
var wg sync.WaitGroup
num := make(chan int, 1)
// Generates a random number from 1 to 100.
wg.Go(func() {
num <- 1 + rand.IntN(100)
})
// Checks if the number is lucky.
wg.Go(func() {
n := <-num
if n%7 == 0 {
fmt.Printf("Lucky number %d!\n", n)
} else {
fmt.Printf("Unlucky number %d...\n", n)
}
})
wg.Wait()
Unlucky number 37...
But what if we want num to be a regular number, and channels are not an option?
We can make the generator goroutine signal when a number is ready, and have the checker goroutine wait for that signal. In Go, we can do this using a condition variable, which is implemented with the sync.Cond type.
A Cond has a mutex inside it:
cond := sync.NewCond(&sync.Mutex{})
// The mutex is available through the cond.L field.
fmt.Printf("%#v\n", cond.L)
&sync.Mutex{state:0, sema:0x0}
A Cond has two methods — Wait and Signal:
- Wait unlocks the mutex and suspends the goroutine until it receives a signal.
- Signal wakes the goroutine that is waiting on Wait.
- When Wait wakes up, it locks the mutex again.
If there are multiple waiting goroutines when Signal is called, only one of them will be resumed. If there are no waiting goroutines, Signal does nothing.
To see why Cond needs to go through all this mutex trouble, check out this example:
cond := sync.NewCond(&sync.Mutex{})
num := 0
// Generates a random number from 1 to 100.
go func() {
time.Sleep(10 * time.Millisecond)
cond.L.Lock() // (1)
num = 1 + rand.IntN(100)
cond.Signal() // (2)
cond.L.Unlock()
}()
// Checks if the number is lucky.
go func() {
cond.L.Lock() // (3)
if num == 0 {
cond.Wait() // (4)
}
if num%7 == 0 {
fmt.Printf("Lucky number %d!\n", num)
} else {
fmt.Printf("Unlucky number %d...\n", num)
}
cond.L.Unlock()
}()
Both goroutines use the shared num variable, so we need to protect it with a mutex.
The checker goroutine starts by locking the cond.L mutex ➌. If the generator hasn't run yet (meaning num is 0), the goroutine calls cond.Wait() ➍ and blocks. If Wait only blocked the goroutine, the mutex would stay locked, and the generator couldn't change num ➊. That's why Wait unlocks the mutex before blocking.
The generator goroutine also starts by locking the mutex ➊. After setting the num value, the generator calls cond.Signal() ➋ to let the checker know it's ready, and then unlocks the mutex. Now, if resumed Wait ➍ did nothing, the checker goroutine would continue running. But the mutex would stay unlocked, so working with num wouldn't be safe. That's why Wait locks the mutex again after receiving the signal.
In theory, everything should work. Here's the output:
Lucky number 77!
Everything seems fine, but there's a subtle bug. When the checker goroutine wakes up after receiving a signal, the mutex is unlocked for a brief moment before Wait locks it again. Theoretically, in that short time, another goroutine could sneak in and set num to 0. The checker goroutine wouldn't notice this and would keep running, even though it's supposed to wait if num is zero.
That's why, in practice, Wait is always called inside a for loop, not inside an if statement.
Not like this:
if num == 0 {
cond.Wait()
}
But like this (note that the condition is the same as in the if statement):
for num == 0 {
cond.Wait()
}
In most cases, this for loop will work just like an if statement:
- The goroutine receives a signal and wakes up ➊.
- It locks the mutex ➋.
- On the next loop iteration, it checks the num value ➌.
- Since num is not 0, it exits the loop and continues.
But if another goroutine intervenes between ➊ and ➋ and sets num to zero, the goroutine will notice this at ➌ and go back to waiting. This way, it will never keep running when num is zero — which is exactly what we want.
Here's the complete example:
var wg sync.WaitGroup
cond := sync.NewCond(&sync.Mutex{})
num := 0
// Generates a random number from 1 to 100.
wg.Go(func() {
time.Sleep(10 * time.Millisecond)
cond.L.Lock()
num = 1 + rand.IntN(100)
cond.Signal()
cond.L.Unlock()
})
// Checks if the number is lucky.
wg.Go(func() {
cond.L.Lock()
for num == 0 {
cond.Wait()
}
if num%7 == 0 {
fmt.Printf("Lucky number %d!\n", num)
} else {
fmt.Printf("Unlucky number %d...\n", num)
}
cond.L.Unlock()
})
wg.Wait()
Lucky number 35!
Like other synchronization primitives, a condition variable has its own internal state. So, you should only pass it as a pointer, not by value. Even better, don't pass it at all — wrap it inside a type instead. We'll do this in the next step.
One-Time Subscription
Let's go back to the lucky numbers example:
// Generates a random number from 1 to 100.
go func() {
// ...
}()
// Checks if the number is lucky.
go func() {
// ...
}()
Let's refactor the code and create a Lucky type with Guess and Wait methods:
// Guess generates a random number and notifies
// the subscriber who's waiting with Wait.
Guess()
// Wait waits for a notification about a new number,
// then calls the subscriber's callback function.
Wait(callback func(int))
Here's the implementation:
// Lucky generates a random number.
type Lucky struct {
cond *sync.Cond
num int
}
// NewLucky creates a new Lucky.
func NewLucky() *Lucky {
l := &Lucky{}
l.cond = sync.NewCond(&sync.Mutex{})
return l
}
// Guess generates a random number and notifies
// the subscriber who's waiting with Wait.
func (l *Lucky) Guess() {
l.cond.L.Lock()
defer l.cond.L.Unlock()
l.num = 1 + rand.IntN(100)
l.cond.Signal()
}
// Wait waits for a notification about a new number,
// then calls the subscriber's callback function.
func (l *Lucky) Wait(callback func(int)) {
l.cond.L.Lock()
// Wait for a signal about a new number.
for l.num == 0 {
l.cond.Wait()
}
n := l.num
l.num = 0 // Reset for the next guess.
l.cond.L.Unlock()
callback(n)
}
Now the client code becomes simpler:
lucky := NewLucky()
var wg sync.WaitGroup
wg.Go(func() {
time.Sleep(10 * time.Millisecond)
lucky.Guess()
})
wg.Go(func() {
lucky.Wait(func(n int) {
if n%7 == 0 {
fmt.Printf("Lucky number %d!\n", n)
} else {
fmt.Printf("Unlucky number %d...\n", n)
}
})
})
wg.Wait()
Broadcasting
What if we want multiple goroutines to wait for the same signal? We can use Broadcast instead of Signal. Broadcast wakes up all waiting goroutines, not just one.
Let's modify the Lucky type to support multiple subscribers:
// Lucky generates a random number.
type Lucky struct {
cond *sync.Cond
num int
}
// NewLucky creates a new Lucky.
func NewLucky() *Lucky {
l := &Lucky{}
l.cond = sync.NewCond(&sync.Mutex{})
return l
}
// Guess generates a random number and notifies
// all subscribers who are waiting with Wait.
func (l *Lucky) Guess() {
l.cond.L.Lock()
defer l.cond.L.Unlock()
l.num = 1 + rand.IntN(100)
l.cond.Broadcast() // Wake up all waiters
}
// Wait waits for a notification about a new number,
// then calls the subscriber's callback function.
func (l *Lucky) Wait(callback func(int)) {
l.cond.L.Lock()
for l.num == 0 {
l.cond.Wait()
}
n := l.num
l.cond.L.Unlock()
callback(n)
}
Now multiple goroutines can wait for the same number:
lucky := NewLucky()
var wg sync.WaitGroup
wg.Go(func() {
time.Sleep(10 * time.Millisecond)
lucky.Guess()
})
// Multiple subscribers
for i := 0; i < 3; i++ {
wg.Go(func() {
lucky.Wait(func(n int) {
fmt.Printf("Subscriber: got number %d\n", n)
})
})
}
wg.Wait()
Broadcasting with Channels
You can also implement broadcasting using channels. Here's one approach:
type Broadcaster struct {
listeners []chan int
mu sync.Mutex
}
func NewBroadcaster() *Broadcaster {
return &Broadcaster{}
}
func (b *Broadcaster) Subscribe() <-chan int {
b.mu.Lock()
defer b.mu.Unlock()
ch := make(chan int, 1)
b.listeners = append(b.listeners, ch)
return ch
}
func (b *Broadcaster) Broadcast(value int) {
b.mu.Lock()
defer b.mu.Unlock()
for _, ch := range b.listeners {
select {
case ch <- value:
default:
}
}
}
This approach uses channels but requires managing a list of subscribers. Condition variables are often simpler for this use case.
Publish/Subscribe
The publish/subscribe pattern allows multiple subscribers to receive notifications about events. We can extend our Lucky type to support this:
type Lucky struct {
cond *sync.Cond
num int
subscribers []func(int)
}
func (l *Lucky) Subscribe(callback func(int)) {
l.cond.L.Lock()
defer l.cond.L.Unlock()
l.subscribers = append(l.subscribers, callback)
}
func (l *Lucky) Guess() {
l.cond.L.Lock()
defer l.cond.L.Unlock()
l.num = 1 + rand.IntN(100)
for _, callback := range l.subscribers {
callback(l.num)
}
l.cond.Broadcast()
}
Run Once
Sometimes you need to ensure that a piece of code runs exactly once, even if multiple goroutines try to execute it. Go provides sync.Once for this purpose.
Here's an example where we need to initialize exchange rates only once:
var rates map[string]float64
var once sync.Once
func Convert(amount float64, from, to string) float64 {
once.Do(func() {
rates = map[string]float64{
"USD": 1.0,
"EUR": 0.85,
"GBP": 0.73,
}
})
return amount * rates[from] / rates[to]
}
Even if multiple goroutines call Convert at the same time, only one will run the function, while the others will wait until it returns. This way, all calls to Convert are guaranteed to proceed only after the rates map has been filled.
sync.Once is perfect for one-time initialization or cleanup in a concurrent environment. No need to worry about data races!
Once-Functions
Besides the Once type, the sync package also includes three convenience once-functions that you might find useful.
Let's say we have the randomN function that returns a random number:
// randomN returns a random number from 1 to 10.
func randomN() int {
return 1 + rand.IntN(10)
}
And the initN function sets the n variable to a random number:
n := 0
initN := func() {
if n != 0 {
panic("n is already initialized")
}
n = randomN()
}
It's clear that calling initN more than once will cause a panic (I'm keeping it simple and not using goroutines here):
for range 10 {
initN()
}
fmt.Println(n)
panic: n is already initialized
We can fix this by wrapping initN in sync.OnceFunc. It returns a function that makes sure the code runs only once:
initOnce := sync.OnceFunc(initN)
for range 10 {
initOnce()
}
fmt.Println(n)
5
sync.OnceValue wraps a function that returns a single value (like our randomN). The first time you call the function, it runs and calculates a value. After that, every time you call it, it just returns the same value from the first call:
initN := sync.OnceValue(randomN)
for range 4 {
fmt.Print(initN(), " ")
}
fmt.Println()
7 7 7 7
sync.OnceValues does the same thing for a function that returns two values:
initNM := sync.OnceValues(func() (int, int) {
return randomN(), randomN()
})
for range 4 {
n, m := initNM()
fmt.Printf("(%d,%d) ", n, m)
}
fmt.Println()
(4,2) (4,2) (4,2) (4,2)
Here are the signatures of all the once-functions side by side for clarity:
// Calls f only once.
func (o *Once) Do(f func())
// Returns a function that calls f only once.
func OnceFunc(f func()) func()
// Returns a function that calls f only once
// and returns the value from that first call.
func OnceValue[T any](f func() T) func() T
// Returns a function that calls f only once
// and returns the pair of values from that first call.
func OnceValues[T1, T2 any](f func() (T1, T2)) func() (T1, T2)
The functions OnceFunc, OnceValue, and OnceValues are shortcuts for common ways to use the Once type. You can use them if they fit your situation, or use Once directly if they don't.
Object Pool
The last tool we'll cover is sync.Pool. It helps reuse memory instead of allocating it every time, which reduces the load on the garbage collector.
Let's say we have a program that:
- Allocates 1024 bytes.
- Does something with that memory.
- Goes back to step 1 and repeats this process many times.
It looks something like this:
func runAlloc() {
// 4 goroutines, each allocating
// and freeing 1000 buffers.
var wg sync.WaitGroup
for range 4 {
wg.Go(func() {
for range 1000 {
buf := make([]byte, 1024)
rand.Read(buf)
}
})
}
wg.Wait()
}
If we run the benchmark:
func BenchmarkAlloc(b *testing.B) {
for b.Loop() {
runAlloc()
}
}
Here's what we'll see:
BenchmarkAlloc-8 219 5392291 ns/op 4096215 B/op 4005 allocs/op
Since we're allocating a new buffer on each loop iteration, we end up with 4000 memory allocations, using a total of 4 MB of memory. Even though the garbage collector eventually frees all this memory, it's quite inefficient. Ideally, we should only need 4 buffers instead of 4000 — one for each goroutine.
That's where sync.Pool comes in handy:
func runPool() {
// Pool with 1 KB buffers.
pool := sync.Pool{
New: func() any { // (1)
// Allocate a 1 KB buffer.
buf := make([]byte, 1024)
return &buf
},
}
// 4 goroutines, each allocating
// and freeing 1000 buffers.
var wg sync.WaitGroup
for range 4 {
wg.Go(func() {
for range 1000 {
buf := pool.Get().(*[]byte) // (2)
rand.Read(*buf)
pool.Put(buf) // (3)
}
})
}
wg.Wait()
}
pool.Get ➋ takes an item from the pool. If there are no available items, it creates a new one using pool.New ➊ (which we have to define ourselves, since the pool doesn't know anything about the items it creates). pool.Put ➌ returns an item back to the pool.
When the first goroutine calls Get during the first iteration, the pool is empty, so it creates a new buffer using New. In the same way, the other three goroutines create three more buffers. These four buffers are enough for the whole program.
Let's benchmark:
func BenchmarkPool(b *testing.B) {
for b.Loop() {
runPool()
}
}
BenchmarkPool-8 206 5266199 ns/op 5770 B/op 15 allocs/op
BenchmarkAlloc-8 219 5392291 ns/op 4096215 B/op 4005 allocs/op
The difference in memory usage is clear. Thanks to the pool, the number of allocations has dropped by two orders of magnitude. As a result, the program uses less memory and puts minimal pressure on the garbage collector.
Things to keep in mind:
- New should return a pointer, not a value, to reduce memory copying and avoid extra allocations.
- The pool has no size limit. If you start 1000 more goroutines that all call Get at the same time, 1000 more buffers will be allocated.
- After an item is returned to the pool with Put, you shouldn't use it anymore (since another goroutine might already have taken and started using it).
sync.Pool is a pretty niche tool that isn't used very often. However, if your program works with temporary objects that can be reused (like in our example), it might come in handy.
Summary
We've covered some of the lesser-known tools in the sync package — condition variables (sync.Cond), one-time execution (sync.Once), and pools (sync.Pool):
- A condition variable notifies one or more waiting goroutines about an event. You can often use a channel instead, and that's usually the better choice.
- A one-time execution guarantees that a function runs exactly once, no matter how many goroutines call it at the same time.
- A pool lets you reuse temporary objects so you don't have to allocate memory every time.
Don't use these tools just because you know they exist. Rely on common sense.
Key points to remember:
- Condition variables require careful use with mutexes; when waiting, always use a for loop, not an if statement.
- Once and once-functions provide thread-safe one-time initialization.
- Pools help reduce memory allocations and GC pressure for reusable temporary objects.
- Channels are often a simpler alternative to condition variables for signaling.
Choose the right tool for your specific use case, and always consider whether channels might be a simpler solution.
