Channel Capacity Limits in Go
Channels in Go let goroutines pass data back and forth without explicit locks in user code. They form a foundation for concurrency because they give a structured way for values to move between goroutines while the language handles the synchronization. Capacity has a bigger effect than many realize. It changes how the runtime schedules work, when blocking is applied, and how memory buffers are managed under load.
Mechanics of Channel Capacity
Channel capacity is a fixed property that controls how many values can sit in a queue between a sender and a receiver. The number is decided at the moment the channel is created, and from that point forward the Go runtime treats it as a hard boundary. No resizing happens in the background, and no adjustment occurs later on. Because of this, channel behavior is highly predictable, and the effects of choosing a certain capacity are felt immediately in how goroutines interact with one another.
Capacity also affects how memory is allocated. When a buffered channel is created, a block of memory is reserved upfront to hold the exact number of slots requested. Each slot is set aside for a single value of the channel’s element type. Those slots are then managed in a way that feels invisible to the developer but is carefully tracked by the runtime.
Unbuffered Behavior
An unbuffered channel has a capacity of zero, which means it can’t hold values. Every send operation waits for a receiver, and every receive waits for a sender. Both sides must arrive at the channel around the same time to complete the transfer. This forces direct synchronization between goroutines, making unbuffered channels useful when you want to guarantee that a value is acted on immediately.
The sender in this example halts at ch <- "ready" until the receiver pulls the value. Nothing moves until both are present. This kind of tight coordination makes sure that no values are ever queued, but it also means a slow receiver will stall the sender entirely.
Unbuffered channels are often used when ordering is just as important as delivery, since they guarantee that both goroutines are synchronized at each transfer point.
Buffered Behavior
Buffered channels change the picture. When capacity is greater than zero, the runtime creates a ring buffer in memory to hold values until they’re received. Senders can place values into the buffer until it fills, at which point further sends wait until space is freed. Receivers can read values at their own pace, as long as the buffer contains data.
This channel has room for three integers. All three sends complete without blocking, because the buffer has space for them. When the buffer is full, any additional send would wait until a receive call made room.
To see blocking in action, the code can be adjusted so the receiver runs more slowly than the sender:
Now the sender quickly fills the buffer, but then pauses until the receiver consumes a value. This rhythm between fast producers and slower consumers is a direct effect of buffer size, and it shows how capacity helps regulate flow.
Internal Data Structures
The Go runtime represents each channel with a structure called hchan, which lives in the runtime package. It contains bookkeeping fields such as the buffer pointer, the size of each element, the capacity, the current count of elements, and positions for the head and tail of the queue. It also holds lists of goroutines that are waiting to send or receive.
Although developers don’t interact with hchan directly, understanding that it exists helps explain the mechanics. Every send or receive operation updates those internal fields under locks to prevent data races. When a sender pushes a value into a buffered channel, the runtime writes it to the memory location at the tail index, increments the count, and then moves the tail forward. Receivers do the opposite, pulling from the head index and moving it forward. Both head and tail wrap around to form a circular buffer once they reach the end.
This isn’t Go’s actual implementation, but it captures the idea of how values are tracked with head and tail pointers. The real runtime logic is more complex, since it has to deal with goroutine scheduling and memory safety, but the circular movement of head and tail makes the constant capacity possible without shifting data around in memory. I think it’s important to see this visually, even if it’s only a conceptual model, because the picture of a ring that fills up and wraps around helps make sense of why channels have a fixed size and how data flows through them.
Queues of waiting senders and receivers are also linked to the channel structure. Each goroutine that blocks on a send or receive is placed in one of these queues. When the channel state changes, the runtime checks those queues to wake the appropriate goroutines. This is how channels coordinate synchronization beyond just the ring buffer.
What Happens When Buffers Fill
A buffered channel feels smooth while values fit inside its capacity, but once the buffer reaches its maximum size the behavior changes. The Go runtime applies natural backpressure, which stops senders from racing ahead and forces receivers to catch up. This change is not hidden or delayed. It happens the moment the last slot is filled, and it directly affects which goroutines are allowed to run.
Send Blocking
When a sender encounters a full buffer, it doesn’t push values any further. The goroutine stops and is placed into a waiting list linked to that channel. Parking the goroutine means it is suspended by the scheduler and won’t run again until space in the channel opens. This gives Go a way to coordinate throughput without needing manual locks in user code.
The goroutine sending values here will stop as soon as two numbers are in the channel. It won’t resume until the main function begins draining the buffer. That pause is the runtime’s enforcement of capacity.
A second example shows multiple senders lined up on the same channel.
Only one value fits in the buffer, so all but the first sender will block until the receiver takes the slot. This queuing effect is a direct outcome of fixed capacity.
Receive Blocking
The opposite happens when a receiver tries to pull from an empty channel. If no sender is ready, the receiver is suspended and waits for data. This behavior makes channel reads predictable, because a receive either returns a value that was already buffered or halts until a sender provides one.
The message prints right away, but the line pulling from the channel won’t complete until the goroutine sends the number two seconds later. Receivers don’t burn CPU while waiting; they simply yield until a sender wakes them up.
This waiting mechanism is what allows large systems with many idle receivers to scale without wasting resources.
Fairness in Scheduling
Go tries to avoid favoring one side of a channel over the other. If a value is sent while a receiver is already waiting, the runtime skips the buffer and hands the value directly to the waiting goroutine. This avoids extra memory operations and makes the transfer immediate. The same shortcut works in reverse when senders are blocked and a receiver arrives.
This code doesn’t rely on buffering at all, but it shows the direct handoff in action. Both goroutines sync up on the channel, and the runtime connects them without touching any queue.
In more complex workloads where many senders and receivers are waiting, the runtime walks through its lists to give waiting goroutines their turn. The goal is to keep transfers balanced without letting one side starve the other.
Closing Channels
Closing a channel changes how both senders and receivers behave. After a channel is marked closed, no new sends are accepted. Any goroutine trying to send into a closed channel will panic immediately. Receivers, however, can continue draining any values still in the buffer. After the buffer is empty, a receive call returns the zero value of the element type along with a flag showing the channel is closed.
The loop here drains both values that were in the buffer before the channel was closed. After that, the ok flag signals that no further values will arrive.
This mix of draining existing data and then returning zero values is handled entirely by the runtime’s channel bookkeeping. It prevents goroutines from hanging forever on a closed channel while still allowing buffered values to be processed.
Conclusion
Channel capacity in Go defines how the runtime manages storage, blocking, and scheduling around message passing. Every send and receive is tied to the fixed size set at creation, which drives how goroutines line up, pause, and resume. From unbuffered channels that enforce direct synchronization to buffered ones that hold values in a ring, the mechanics are predictable and consistent.