Problems with Go channels (2016)
191 comments
April 13, 2025 · anacrolix
hajile
This is almost completely down to Go's terrible type system and is more proof that Google should have improved SML/CML (StandardML/ConcurrentML) implementations/libraries rather than create a new language. They'd have a simpler and more powerful language without all the weirdness that's been added on (e.g., generics would be elegant and simple rather than the tacked-on abomination of syntax that Go has).
hesdeadjim
Go user for ten years and I don’t know what happened, but this year I hit some internal threshold with the garbage type system, tedious data structures, and incessant error checking being 38% of the LoC. I’m hesitant to even admit what language I’m considering a full pivot to.
likeabbas
Java 21 is pretty damn nice, 25 will be even nicer.
For your own application code, you don't have to use exceptions: you can write custom Result objects and force callers to pattern match on the types (and you can always wrap library/std exceptions in that Result type).
Structured Concurrency looks like a banger of a feature - it's what CompletableFuture should've been.
VirtualThreads still needs a few more years for most production cases imo, but once it's there, I truly don't see a point to choose Go over Java for backend web services.
j-krieger
I like the idea behind Go, but I feel physical pain every time I run into some sort of `go mod` behaviour that is not immediately obvious. Import/export is so easy, I still don't get how you can fuck it up.
pdimitar
I found Golang to be a gateway drug to Rust for me.
If you want strong control and a very unforgiving type system, with even more unforgiving memory lifetime management, so you know your program can get even faster than corresponding C/C++ programs, then Rust is a no-brainer.
But I did not pick it for the speed, though that's a very welcome bonus. I picked it for the strong static typing system mostly. And I like having the choice to super-optimize my program in terms of memory and speed when I need to.
Modelling your data transformations with enums (sum types) and Result/Option was eye-opening and improved my programming skills in all other languages I am using.
myaccountonhn
OCaml is underrated IMO. It's a systems language like Go, with a simple runtime, but functional, with a great type system and probably the best error handling of any language I've used (polymorphic variants).
chuckadams
Raku? Unison? Qi? Don't tell me it's something boring like C# ;)
thesz
As you mentioned "improvement of existing language," I'd like to mention that Haskell has green threads that most probably are lighter (stack size 1K) than goroutines (minimum stack size 2K).
Haskell also has software transactional memory where one can implement one's own channels (they are implemented [1]) and atomically synchronize between arbitrarily complex reading/sending patterns.
[1] https://hackage.haskell.org/package/stm-2.5.3.1/docs/Control...
In my not-so-humble opinion, Go could have been just a library in Haskell from the very beginning.
jessekv
How about channel channels?
https://github.com/twpayne/go-pubsub/blob/master/pubsub.go#L...
__s
I'm guilty of this too https://github.com/PeerDB-io/peerdb/blob/d36da8bb2f4f6c1c821...
The inner channel is a poor man's future. I came up with this to let Lua runtimes process in parallel while maintaining ordering (A B C in, results of A B C out).
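Roughly this shape, as an untested sketch (not the linked code, names made up):

    package main

    import "fmt"

    func main() {
        futures := make(chan chan int, 8) // ordered stream of pending results

        go func() { // submitter: A, B, C in
            for i := 0; i < 3; i++ {
                result := make(chan int, 1) // the inner channel is the "future"
                futures <- result
                go func(n int) { result <- n * n }(i) // work runs in parallel
            }
            close(futures)
        }()

        for f := range futures { // results of A, B, C out, in submission order
            fmt.Println(<-f)
        }
    }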
lanstin
I have a channel for my gRPC calls to send work to the static and lock-free workers; I have a channel of channels to reuse the same channels, as allocating 40k channels per second was costing a bit of CPU. Some days I am very pleased with this fix and some days I am ashamed of it.
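The reuse trick is basically a free list of channels, something like this sketch (not my production code):

    package main

    import "fmt"

    // pool holds idle reply channels so hot paths don't allocate a new one per request.
    var pool = make(chan chan int, 1024)

    func getReplyChan() chan int {
        select {
        case ch := <-pool:
            return ch // reuse an idle channel
        default:
            return make(chan int, 1) // pool empty: allocate a fresh one
        }
    }

    func putReplyChan(ch chan int) {
        select {
        case pool <- ch: // hand it back for reuse
        default: // pool full: let the GC collect it
        }
    }

    func main() {
        reply := getReplyChan()
        go func() { reply <- 42 }()
        fmt.Println(<-reply)
        putReplyChan(reply)
    }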
sapiogram
You joke, but this is not uncommon at all among channel purists, and is the inevitable result when they try to create any kind of concurrency abstraction using channels.
Ugh... I hope I never have to work with channels again.
eikenberry
I've always thought a lot of it was due to how channels + goroutines were designed with CSP in mind, but how often do you see CSP used "in the wild"? Go channels are good for implementing CSP and can be good at similar patterns. Not that this is a big secret; if you watch all the concurrency pattern videos they made in Go's early days you get a good feeling for what they are good at. But I can only think of a handful of times I've seen those patterns in use. Though much of this is likely due to having so much of our code designed by mid-level developers, because we don't value experience in this field.
politician
One nit: reflect.Select supports a dynamic set of channels. Very few programs need it though, so a rough API isn’t a bad trade-off. In my entire experience with Go, I’ve needed it once, and it worked perfectly.
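For anyone who hasn't seen it, it looks roughly like this (toy example):

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        chans := []chan int{make(chan int, 1), make(chan int, 1), make(chan int, 1)}
        chans[1] <- 7

        // Build one SelectCase per channel; the set can be computed at runtime.
        cases := make([]reflect.SelectCase, len(chans))
        for i, ch := range chans {
            cases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}
        }

        chosen, value, ok := reflect.Select(cases) // blocks until some case is ready
        fmt.Println(chosen, value.Int(), ok)       // here: 1 7 true
    }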
lanstin
I almost always only use channels as the data path between fixed-size pools of workers. At each point I can control whether to block or not, and my code uses all the (allocated) CPUs pretty evenly. Channels are excellent for this data-flow design use case.
I have a little pain when I do a CLI, as the work appears during the run and it's tricky to guarantee you exit when all the work is done and not before. Usually I have a one-second sleep, a wait on a wait group, and one more one-second sleep at the end of the CLI main. If my work doesn't take minutes or hours to run, I generally don't use Go.
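One way to avoid the sleep/wait/sleep dance is to count in-flight work with the wait group and Add before enqueuing any follow-up work, so Wait can't return while anything is still queued. A sketch (hypothetical, not my actual CLI):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        work := make(chan int, 128)

        worker := func() {
            for item := range work {
                if item > 0 { // this item produces a follow-up item
                    wg.Add(1) // register it before it is enqueued
                    work <- item - 1
                }
                fmt.Println("done:", item)
                wg.Done()
            }
        }
        for i := 0; i < 4; i++ {
            go worker()
        }

        wg.Add(1)
        work <- 3 // seed the initial work

        wg.Wait()   // returns only once every item, including spawned ones, is done
        close(work) // safe now: nothing can enqueue anymore
    }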
t8sr
When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures. It turns out that using CSP in any large, complex codebase is asking for trouble, and that this is true even about projects where members of the core Go team did the CSP.
If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind. If you're really determined, you can build anything out of anything. That doesn't mean it's always a good idea.
Looking back, I'd say channels are far superior to condition variables as a synchronized cross-thread communication mechanism - when I use them these days, it's mostly for that. Locks (mutexes) are really performant and easy to understand and generally better for mutual exclusion. (It's in the name!)
catern
>If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind.
You're going to be surprised when you learn that futexes are an abstraction too, ultimately relying on this thing called "cache coherence".
And you'll be really surprised when you learn how cache coherence is implemented.
i_don_t_know
> When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures.
That sounds reasonable. From what little Erlang/Elixir code I’ve seen, the sending and receiving of messages is also hidden as an implementation detail in modules. The public interface did not expose concurrency or synchronization to callers. You might use them under the hood to implement your functionality, but it’s of no concern to callers, and you’re free to change the implementation without impacting callers.
throwawaymaths
AND because they're usually hidden as implementation detail, a consumer of your module can create simple mocks of your module (or you can provide one)
dfawcus
How large do you deem to be large in this context?
I had success in using a CSP style, with channels in many function signatures in a ~25k line codebase.
It had ~15 major types of process, probably about 30 fixed instances overall in a fixed graph, plus a dynamic sub-graph of around 5 processes per 'requested action'. So those sub-graph elements were the only parts which had to deal with tear-down, and clean up.
There were then additionally some minor types of 'process' (i.e. goroutines) within many of those major types, but they were easier to reason about as they only communicated with that major element.
Multiple requested actions could be present, so there could be multiple sets of those 5 process groups connected, but they had a maximum lifetime of a few minutes.
I only ended up using explicit mutexes in two of the major types of process. Where they happened to make most sense, and hence reduced system complexity. There were about 45 instances of the 'go' keyword.
(Updated numbers, as I'd initially misremembered/miscounted the number of major processes)
hedora
How many developers did that scale to? Code bases that I’ve seen that are written in that style are completely illegible. Once the structure of the 30 node graph falls out of the last developer’s head, it’s basically game over.
To debug stuff by reading the code, each message ends up having 30 potential destinations.
If a request involves N sequential calls, the control flow can be as bad as 30^N paths. Reading the bodies of the methods that are invoked generally doesn’t tell you which of those paths are wired up.
In some real world code I have seen, a complicated thing wires up the control flow, so recovering the graph from the source code is equivalent to the halting problem.
None of these problems apply to async/await because the compiler can statically figure out what’s being invoked, and IDE’s are generally as good at figuring that out as the compiler.
dfawcus
That was two main developers, one doing most of the code and design, the other a largely closed subset of 3 or 4 nodes. Plus three other developers co-opted for implementing some of the nodes. [1]
The problem space itself could have probably grown to twice the number of lines of code, but there wouldn't have needed to be any more developers. Possibly only the original two. The others were only added for meeting deadlines.
As to the graph, it was fixed, but not a full mesh. A set of pipelines, with no power of N issue, as the collection of places things could talk to was fixed.
A simple diagram represented the major message flow between those 30 nodes.
Testing of each node was able to be performed in isolation, so UT of each node covered most of the behaviour. The bugs were three deadlocks, one between two major nodes, one with one major node.
The logging around the trigger for the deadlock allowed the cause to be determined and fixed. The bugs arose due to time constraints having prevented an analysis of the message flows to detect the loops/locks.
So for most messages, there were a limited number of destinations, mostly two, for some 5.
For a given "request", the flow of messages to the end of the fixed graph would be passing through 3 major nodes. That then spawned the creation of the dynamic graph, with it having two major flows. One a control flow through another 3, the other a data flow through a different 3.
Within that dynamic graph there was a richer flow of messages, but the external flow from it simply had the two major paths.
Yes, reading the bodies of the methods does not inform as to the flows. One either had to read the "main" routine which built the graph, or better refer to the graph diagram and message flows in the design document.
Essentially a similar problem to dealing with "microservices", or pluggable callbacks, where the structure cannot easily be determined from the code alone. This is where design documentation is necessary.
However, I found it easier to comprehend and work with / debug, due to each node being a probeable "black box", plus having the graph of connections and message flows.
[1] Of those, only the first had any experience with CSP or Go. The CSP experience was with a library for C, the Go experience some minimal use a year earlier. The other developers were all new to CSP and Go. The first two developers were "senior" / "experienced".
ChrisSD
I think the two basic synchronisation primitives are atomics and thread parking. Atomics allow you to share data between two or more concurrently running threads whereas parking allows you to control which threads are running concurrently. Whatever low-level primitives the OS provides (such as futexes) is more an implementation detail.
I would tentatively make the claim that channels (in the abstract) are at heart an interface rather than a type of synchronisation per se. They can be implemented using Mutexes, pure atomics (if each message is a single integer) or any number of different ways.
Of course, any specific implementation of a channel will have trade-offs. Some more so than others.
im3w1l
To me message passing is like it's own thing. It's the most natural way of thinking about information flow in a system consisting of physically separated parts.
throwaway150
What is "20% on Go"? What is it 20% of?
darkr
At least historically, google engineers had 20% of their time to spend on projects not related to their core role
kyrra
This still exists today. For example, I am on the payments team but I have a 20% project working on protobuf. I had to get formal approval from my management chain and someone on the protobuf team. And it is tracked as part of my performance reviews. They just want to make sure I'm not building something useless that nobody wants and that I'm not just wasting the company's time.
NiloCK
Google historically allowed employees to self-direct 20% of their working time (onto any google project I think).
ramon156
I assume this means "20% of my work on go" aka 1 out of 5 work days working on golang
thomashabets2
Unlike the author, I would actually say that Go is bad. This article illustrates my frustration with Go very well, on a meta level.
Go's design consistently at every turn chose the simplest (one might say "dumbest", but I don't mean it entirely derogatory) way to do something. It was the simplest most obvious choice made by a very competent engineer. But it was entirely made in isolation, not by a language design expert.
Go's designers did not actually go out and research language design. They just went with their gut feel.
But that's just it, those rules are there for a reason. It's like the rules of airplane design: Every single rule was written in blood. You toss those rules out (or don't even research them) at your own, and your user's, peril.
Go's design reminds me of Brexit, and the famous "The people of this country have had enough of experts". And like with Brexit, it's easy to give a lame catch phrase, which seems convincing and makes people go "well what's the problem with that, keeping it simple?".
The article illustrates just what the problem is with this "design by catchphrase". It needs ~100 paragraphs (a quick, error-prone scan counted 86, plus sample code) to explain just why these choices lead to a darkened room with rakes sprinkled all over it.
And this article is just about Go channels!
Go could get a 100 articles like this written about it, covering various aspects of its design. They all have the same root cause: Go's designers had enough of experts, and it takes longer to explain why something leads to bad outcomes, than to just show the catchphrase level "look at the happy path. Look at it!".
I dislike Java more than I dislike Go. But at least Java was designed, and doesn't have this particular meta-problem. When Go was made we knew better than to design languages this way.
kbolino
Go's designers were experts. They had extensive experience building programming languages and operating systems.
But they were working in a bit of a vacuum. Not only were they mostly addressing the internal needs of Google, which is a write-only shop as far as the rest of the software industry is concerned, they also didn't have broad experience across many languages, and instead had deep experience with a few languages.
emtel
Rob Pike was definitely not a PL expert and I don’t think he would claim to be. You can read his often-posted critique of C++ here: https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
In it, he seems to believe that the primary use of types in programming languages is to build hierarchies. He seems totally unfamiliar with the ideas behind ML or Haskell.
kbolino
Rob Pike is not a PL theoretician, but that doesn't make him not an expert in creating programming languages.
Go was the third language he played a major part in creating (predecessors are Newsqueak and Limbo), and his pedigree before Google includes extensive experience on Unix at Bell Labs. He didn't create C but he worked directly with the people who did and he likely knows it in and out. So I stand by my "deep, not broad" observation.
Ken Thompson requires no introduction, though I don't think he was involved much beyond Go's internal development. Robert Griesemer is a little more obscure, but Go wasn't his first language either.
elzbardico
My point of view is that Rob Pike is a brilliant engineer, but a little too opinionated for my tastes.
thomashabets2
I guess we're going into the definition of the word "expert".
I don't think the word encompasses someone who has done it several times before but has not actually even looked at the state of the art.
If you're a good enough engineer, you can build anything you want. That doesn't make you an expert.
I have built many websites. I'm not a web site building expert. Not even remotely.
kbolino
I think both C and Go (the former is relevant due to Thompson's involvement in both and the massive influence it had on Go) are very "practical" languages, with strict goals in mind, and which delivered on those goals very well. They also couldn't have existed without battle-tested prior experience, including B for C and Limbo for Go.
I also think it's only from the perspective of a select few, plus some purists, that the authors of Go can be considered anything other than experts. That they made some mistakes, including some borne of hubris, doesn't really diminish their expertise to me.
0x696C6961
The Brexit comparison doesn't hold water — Brexit is widely viewed as a failure, yet Go continues to gain popularity year after year. If Go were truly as bad as described, developers wouldn't consistently return to it for new projects, but clearly, they do. Its simplicity isn't a rejection of expertise; it's a practical choice that's proven itself effective in real-world scenarios.
tl
This is optics versus reality. Its goal was to address shortcomings in C++ and Java. It has replaced neither at Google, and its own creators were surprised it competed with Python, mostly on the value of having an easier build and deploy process.
lelanthran
If we're using "did not meet the stated goal" as a bar for success, then Java also "failed", because it was developed as an embedded systems language and only pivoted to enterprise applications after being a dismal and abject failure at the stated goal.
If Java is not a failure then neither is Go.
If Go is a failure then so is Java.
Personally I think it is inaccurate to judge a mainstream, popular and widely adopted language as a failure just because it did not meet the goal set at the initiation of the project, prior to even the first line of code getting written.
0x696C6961
Go has replaced Java and C++ in numerous other environments.
dilyevsky
> It has replaced neither
Except that it did. Just because people aren't rewriting Borg and Spanner in Go doesn't mean it isn't the default choice for many infra projects. And Python got completely superseded by Go even during my tenure.
thomashabets2
I would say this is another thing that would take quite a while to flesh out. Not only is it hard to have this conversation text-only on Hacker News, but HN will also rate limit replies, so a conversation once started cannot continue here long enough to actually allow the discussion participants to come to an understanding of what they all mean. Discussion will just stop once HN tells a poster "you're posting too often".
Hopefully saving this comment will work.
Go, unlike Brexit, has pivoted to become the solution to something other than its stated target. So sure, Go is not a failure. It was intended to be a systems language to replace C++, but has instead pivoted to be a "cloud language", or a replacement for Python. I would say that it's been a failure as a systems language. Especially if one tries to create something portable.
I do think that its simplicity is the rejection of the idea that there are experts out there, and/or of their relevance. The decisions were not based on knowledge and deliberate rejection, but on ignorance and "scoping out" of hard problems.
Another long article could be written about the clearly not thought through use of nil pointers, especially typed vs untyped nil pointers (if that's even the term) once nil pointers (poorly) interact with interfaces.
But no, I'm not comparing the outcome of Go with Brexit. Go pivoting away from its stated goals are not the same thing as Brexiteers claiming a win from being treated better than the EU in the recent tariffs. But I do stand by my point that the decision process seems similarly expert hostile.
Go is clearly a success. It's just such a depressingly sad lost opportunity, too.
9rx
> It was intended to be a systems language to replace C++
More specifically, it was intended to replace the systems that Google wrote in C++ (read: servers). Early on, the Go team expressed happy surprise that people found utility in the language outside of that niche.
> but has instead pivoted to be a "cloud language"
I'm not sure that is really a pivot. At the heart of all the "cloud" tools it is known for is an HTTP server which serves as the basis of the control protocol, among other things. Presumably Go was chosen exactly because it was designed for building servers. Maybe someone thought there would be more CRUD servers written in it too, but these "cloud" tools are ultimately in the same vein, not an entirely different direction.
> or a replacement for Python
I don't think you'd normally choose Go to train your ML/AI model. It has really only gunned for Python in the server realm; the very thing it was intended to be for. What was surprising to those living in an insular bubble at Google was that the rest of the world wrote their servers in Python and Ruby rather than C++ like Google – so it being picked up by the Python and Ruby crowd was unexpected to them – but not to anyone else.
0x696C6961
> I do think that its simplicity is the rejection of the idea that there are experts out there, and/or their relevance. It's not decisions based on knowledge and rejection, but of ignorance and "scoping out" of hard problems.
Ok, I'll ask the obvious question. Who are these experts and what languages have they designed?
> Another long article could be written about the clearly not thought through use of nil pointers, especially typed vs untyped nil pointers (if that's even the term) once nil pointers (poorly) interact with interfaces.
You're getting worked up about something that's hardly ever an issue in practice. I suspect that most of your criticisms are similar.
nvarsj
The creators thought that having 50% of your codebase be `if err != nil { ... }` was a good idea. And that channels somehow make sense in a world without pattern matching or generics. So yeah, it's a bizarrely idiosyncratic language - albeit with moments of brilliance (like structural typing).
I actually think Java is the better PL, but the worse runtime (in what world are 10s GC pauses ever acceptable). Java has an amazing standard library as well - Golang doesn't even have many basic data structures implemented. And the ones it does, like heap, are absolutely awful to use.
I really just view Golang nowadays as a nicer C with garbage collection, useful for building self contained portable binaries.
thomashabets2
I think Java made many decisions that turned out to be bad only in retrospect. In Go we knew (well, experts knew) already that the choices were bad.
Java is a child of the 90s. My full rant at https://blog.habets.se/2022/08/Java-a-fractal-of-bad-experim... :-)
cempaka
> I actually think Java is the better PL, but the worse runtime (in what world are 10s GC pauses ever acceptable).
This seems like a very odd/outdated criticism. 10s CMS full STW GCs are a thing of the past. There are low-latency GCs available for free in OpenJDK now with sub-millisecond pause times. Where I've seen the two runtimes compared (e.g. Coinbase used both langs in latency-sensitive exchange components) Java's has generally come out ahead.
Mawr
Your post is pure hot air. It would be helpful if you could provide concrete examples of aspects of Go that you consider badly designed and why.
int_19h
The intersection of nil and interfaces is basically one giant counter-intuitive footgun.
Or how append() sometimes returns a new slice and sometimes it doesn't (so if you forget to assign the result, sometimes it works and sometimes it doesn't). Which is understandable if you think about it in terms of low-level primitives, but in Go this somehow became the standard way of managing a high-level list of items.
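A minimal example of the aliasing surprise (made-up values):

    package main

    import "fmt"

    func main() {
        a := make([]int, 3, 4)  // len 3, cap 4
        b := append(a, 4)       // fits in a's spare capacity: b shares a's backing array
        c := append(a, 5)       // also fits: silently overwrites the element b just appended
        fmt.Println(b[3], c[3]) // 5 5

        d := append(c, 6) // exceeds capacity: d gets a fresh backing array
        d[0] = 99
        fmt.Println(c[0], d[0]) // 0 99, they no longer alias
    }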
Or that whole iota thing.
9rx
> Or that whole iota thing.
What is the whole iota thing?
For what it is, the iota design is really good. Languages like C and TypeScript, which have the exact same feature hidden in some weird special syntax, look silly in comparison, not to mention that the weird syntax obscures what is happening. What Go has is much clearer to read and understand (which is why it gets so much grief where other languages with the same feature don't: there are no misunderstandings about what it is).
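For reference, the feature in question (made-up names):

    package main

    import "fmt"

    type LogLevel int

    const (
        Debug LogLevel = iota // 0; iota counts up from zero within the const block
        Info                  // 1
        Warn                  // 2
        Error                 // 3
    )

    const (
        _  = iota             // skip 0
        KB = 1 << (10 * iota) // 1 << 10
        MB                    // 1 << 20
        GB                    // 1 << 30
    )

    func main() {
        fmt.Println(Info, MB) // 1 1048576
    }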
But maybe you are implying that no language should present raw enums, rather they should be hidden behind sum types? That is not an unreasonable take, but that is not a design flaw. That is a different direction. If this is what you are thinking, it doesn't fit alongside the other two which could be more intuitive without completely changing what they are.
chabska
> Go could get a 100 articles like this written about it, covering various aspects of its design
Actually... https://100go.co/
noor_z
It's very possible I'm just bad at Go but it seems to me that the result of trying to adhere to CSP in my own Go projects is the increasing use of dedicated lifecycle management channels like `shutdownChan`. Time will tell how burdensome this pattern proves to be but it's definitely not trivial to maintain now.
sapiogram
You're not bad at Go, literally everyone I know who has tried to do this has concluded it's a bad idea. Just stop using channels, there's a nice language hidden underneath the CSP cruft.
vrosas
I've found the smartest go engineer in the room is usually the one NOT using channels.
fireflash38
Is using a server context a bad idea? Though tbh using it for the cancelation is a shutdown channel in disguise hah.
anarki8
I find myself using channels in async Rust more than any other sync primitives. No more deadlock headaches. Easy to combine multiple channels in one state-keeping loop using combinators. And the dead goroutines problem described in the article doesn't exist in Rust.
tuetuopay
This article has an eerie feeling now that async Rust is production grade and widely used. I use the basic pattern of `loop { select! { ... } }` that manages its own state a lot.
And compared to the article, there's no dead coroutine, and no shared state managed by the coroutine: seeing the `NewGame` function return a `*Game` to the managed struct, this is an invitation for dumb bugs. This would be downright impossible in Rust, which coerces you into an actual CSP pattern where the interaction with the shared state is only through channels. Add a channel for exit, another for bookkeeping, and you're golden.
I often have a feeling that a lot of the complaints are self-inflicted Go problems. The author briefly touches on them with the special snowflakes that are the stdlib's types. Yes, genericity is one point where channels are different, but the syntax is another one. Why on earth is a `chan <- elem` syntax necessary over `chan.Send(elem)`? This would make non-blocking versions trivial to expose and discover for users (hello Rust's `.try_send()` methods).
Oh, and related to the first example of "exiting when all players left", we also see the lack of a proper API for Go channels: you can't query whether there are still producers for the channel, because of GC and pointers and the shared channel object itself and yadda yadda. Meanwhile in Rust, producers are reference-counted and the channel is automatically closed when there are no more producers. The native Go channels can't do that (granted, they could, with a wrapper and dedicated sender and receiver types).
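To be clear about what I mean by a wrapper, something like this sketch would give you method syntax and a discoverable non-blocking send (none of this is stdlib):

    package main

    import "fmt"

    type Sender[T any] struct{ ch chan T }

    func NewSender[T any](buf int) (Sender[T], <-chan T) {
        ch := make(chan T, buf)
        return Sender[T]{ch: ch}, ch
    }

    func (s Sender[T]) Send(v T) { s.ch <- v } // blocking send

    func (s Sender[T]) TrySend(v T) bool { // non-blocking, the try_send analogue
        select {
        case s.ch <- v:
            return true
        default:
            return false
        }
    }

    func main() {
        tx, rx := NewSender[int](1)
        fmt.Println(tx.TrySend(1)) // true: buffer has room
        fmt.Println(tx.TrySend(2)) // false: buffer full, nothing blocks
        fmt.Println(<-rx)          // 1
    }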
j-krieger
> I do use a lot the basic pattern of `loop { select! { ... } }` that manages its own state.
Care to show any example? I'm interested!
ninkendo
Same. It’s a pattern I’m reaching for a lot, whenever I have multiple logical things that need to run concurrently. Generally:
- A struct that represents the mutable state I’m wrapping
- A start(self) method which moves self to a tokio task running a loop reading from an mpsc::Receiver<Command> channel, and returns a Handle object which is cloneable and contains the mpsc::Sender end
- The handle can be used to send commands/requests (including one shot channels for replies)
- When the last handle is dropped, the mpsc channel is dropped and the loop ends
It basically lets me think of each logical concurrent service as being like a tcp server that accepts requests. They can call each other by holding instances of the Handle type and awaiting calls (this can still deadlock if there’s a call cycle and the handling code isn’t put on a background task… in practice I’ve never made this mistake though)
Some day I’ll maybe start using an actor framework (like Axum/etc) which formalizes this a bit more, but for now just making these types manually is simple enough.
surajrmal
The fact all goroutines are detached is the real problem imo. I find you can encounter many of the same problems in rust with overuse of detached tasks.
pornel
Channels are only problematic if they're the only tool you have in your toolbox, and you end up using them where they don't belong.
BTW, you can create a deadlock equivalent with channels if you write "wait for A, reply with B" and "wait for B, send A" logic somewhere. It's the same problem as ordering of nested locks.
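A minimal version of that deadlock; running it trips the runtime's "all goroutines are asleep" detector:

    package main

    func main() {
        a := make(chan struct{}) // unbuffered
        b := make(chan struct{})

        go func() {
            <-a             // wait for A...
            b <- struct{}{} // ...then reply with B
        }()

        <-b             // wait for B...
        a <- struct{}{} // ...then send A (never reached: same problem as nested lock ordering)
    }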
j-krieger
I haven't yet used channels anywhere in Rust, but my frustration with async mutexes is growing stronger. Do you care to show any examples?
tcfhgj
async mutexes?
> Contrary to popular belief, it is ok and often preferred to use the ordinary Mutex from the standard library in asynchronous code.
> The feature that the async mutex offers over the blocking mutex is the ability to keep it locked across an .await point.
ricardobeat
Strange to go all this length without mentioning the approaches that solve the problem in that first example:
1. send a close message on the channel that stops the goroutine
2. use a Context instance - `ctx.Done()` returns a channel you can select on
Both are quite easy to grasp and implement.
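Sketch of option 2, with illustrative names:

    package main

    import "context"

    func worker(ctx context.Context, scores <-chan int) {
        for {
            select {
            case <-ctx.Done():
                return // cancelled by the caller
            case s, ok := <-scores:
                if !ok {
                    return // producer closed the channel
                }
                _ = s // handle the score
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        scores := make(chan int)
        go worker(ctx, scores)
        scores <- 10 // handled by the worker
        cancel()     // worker observes ctx.Done() and returns
    }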
sapiogram
You've misunderstood the example. The `scores` channel aggregates scores from all players, you can't close it just because one player leaves.
I'd really, really recommend that you try writing the code, like the post encourages. It's so much harder than it looks, which neatly sums up my overall experience with Go channels.
ricardobeat
In both examples, the HandlePlayer for loop only exits if .NextScore returns an error.
In both cases, you’d need to keep track of connected players to stop the game loop and teardown the Game instance. Closing the channel during that teardown is not a hurdle.
What am I missing?
guilhas
I think nothing
I was thinking the same, the only problem is the author not keeping track of players
When HandlePlayer returns an error you would decrement a g.players counter, or something, and in Game.run just do `if !g.hasPlayers() { break }` and then `close(g.scores)`.
The solution requires nothing special, just basic logic that should probably be there anyway
If anything this post shows that mutexes are worse, by making bad code work
politician
It’s not entirely clear whether the author is describing a single or multiplayer game.
Among the errors in the multiplayer case is the lack of score attribution which isn’t a bug with channels as much as it’s using an int channel when you needed a struct channel.
jeremyjh
The whole point is that it is multiplayer.
jtolds
Hi! No, I think you've misunderstood the assignment. The example posits that you have a "game" running, which should end when the last player leaves. While only using channels as a synchronization primitive (a la CSP), at what point do you decide the last player has left, and where and when do you call close on the channel?
taberiand
I don't think there's much trouble at all fixing the toy example by extending the message type to allow communication of the additional conditions, and I think my changes are better than the alternative of using a mutex. Have I overlooked something?
Assuming the number of players is set up front, and players can only play or leave, not join. If the expectation is that players can come and go freely and the game ends some time after all players have left, I believe this pattern can still be used with minor adjustment.
(please overlook the pseudo code adjustments, I'm writing on my phone - I believe this translates reasonably into compilable Go code):
    type Message struct {
        exit  bool
        score int
        reply chan bool
    }

    type Game struct {
        bestScore int
        players   int // > 0
        messages  chan Message
    }

    func (g *Game) run() {
        for message := range g.messages {
            if message.exit {
                g.players--
                if g.players == 0 {
                    return
                }
                continue
            }
            if g.bestScore < 100 && g.bestScore < message.score {
                g.bestScore = message.score
            }
            acceptingScores := g.bestScore < 100
            message.reply <- acceptingScores
        }
    }

    func (g *Game) HandlePlayer(p Player) error {
        reply := make(chan bool, 1) // per-player reply channel
        for {
            score, err := p.NextScore()
            if err != nil {
                g.messages <- Message{exit: true}
                return err
            }
            g.messages <- Message{score: score, reply: reply}
            if !<-reply {
                g.messages <- Message{exit: true}
                return nil
            }
        }
    }
blablabla123
I don't think channels should be used for everything. In some cases I think it's possible to end up with very lean code. But yes, if you have a stop channel for the other stop channel it probably means you should build your code around other mechanisms.
Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception I never heard of anyone writing programs based on CSP (calculations?)
franticgecko3
> Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception I never heard of anyone writing programs based on CSP (calculations?)
CSP is really in the realm of formal methods. No, you wouldn't formulate your server program as CSP, but if you were writing software for a medical device, perhaps.
FDR4 is a model checker for CSP; its input is a functional programming language that implements CSP semantics, and it may be used to assert (by exhaustion, IIRC) the correctness of your CSP model.
I believe I'm in the minority of Go developers that have studied CSP, I fell into Go by accident and only took a CSP course at university because it was interesting, however I do give credit to studying CSP for my successes with Go.
aflag
Naive question, can't you just have a player count alongside the best score and leave when that reaches 0?
jtolds
Adding an atomic counter is absolutely a great solution in the real world, and compare-and-swap or a mutex or similar is totally what you want to do. In fact, that's my point in that part of the post: you want an atomic variable or a mutex or something there. Other synchronization primitives are more useful than sticking with the CSP idea of only using channels for synchronization.
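Something along these lines (a sketch, not the post's code; it assumes no player can join after the count reaches zero):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    type Game struct {
        players atomic.Int64
        scores  chan int
    }

    func (g *Game) PlayerJoined() { g.players.Add(1) }

    func (g *Game) PlayerLeft() {
        if g.players.Add(-1) == 0 {
            close(g.scores) // the last player out ends the game
        }
    }

    func main() {
        g := &Game{scores: make(chan int, 16)}
        g.PlayerJoined()
        g.scores <- 42
        g.PlayerLeft() // count hits zero: scores is closed

        for s := range g.scores {
            fmt.Println(s)
        }
    }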
angra_mainyu
Haven't read the article but it sounds like a waitgroup would suffice.
nasretdinov
I think that since the concept of channels was something new and exciting back when Go was introduced, people (including myself) tried using them everywhere they could. Over time, as you collect experience with the tool you get better at it, and certainly for shared state management channels are rarely the best option. However, there are still quite a few places where you can't easily do something equivalent to what channels provide, which is to block until you've received new data. It just so happens that those situations are quite rare in Go.
regularfry
This was 2016. Is it all still true? I know things will be backwards compatible, but I haven't kept track of what else has made it into the toolbox since then.
sapiogram
Absolutely nothing has changed at the language level, and for using channels and the `go` keyword directly, there isn't really tooling to help either.
Most experienced Golang practitioners have reached the same conclusions as this blog post: Just don't use channels, even for problems that look simple. I used Go professionally for two years, and it's by far the worst thing about the language. The number of footguns is astounding.
fpoling
The only thing that changed was Context and its support in networking and other libraries to do asynchronous cancellation. It made managing network connections with channels somewhat easier.
But in general the conclusion still stands. Channels brings unnecessarily complexity. In practice message passing with one queue per goroutine and support for priority message delivery (which one cannot implement with channels) gives better designs with less issues.
NBJack
My hot take on context is that it's secretly an anti-pattern used only because of resistance to thread locals. While I understand the desire to avoid spooky action at a distance, the fact that I have to include it in every function signature I could possibly use it in is just a bit exhausting. The fact that I could inadvertently spin up a new one at will also makes me a bit uneasy.
fpoling
One of the often mentioned advantages of Go's thread model is that it does not color functions, allowing any code to start a goroutine. But with a Context needed for any code that can block, that advantage is lost, with the ctx argument being the color.
athoscouto
Yes. See update 2 FTA for a 2019 study on Go concurrency bugs. Most Go devs that I know consider using higher-level synchronization mechanisms the right way to go (pun intended). sync.WaitGroup and errgroup are two commonly used options.
mort96
Channels haven't really changed since then, unless there was some significant evolution between 2016 and ~2018 that I don't know about. 2025 Go code that uses channels looks very similar to 2018 Go code that uses channels.
regularfry
I'm also wondering about the internals though. There are a couple of places that GC and the hypothetical sufficiently-smart-compiler are called out in the article where you could think there might be improvements possible without breaking existing code.
codr7
Agreed, channels are overrated and overused in Go.
Like closures, channels are very flexible and can be used to implement just about anything; that doesn't mean doing so is a good idea.
I would likely reach for atomics before mutexes in the game example.
franticgecko3
I'd like to refute the 'channels are slow' part of this article.
If you run a microbenchmark which seems like what has been done, then channels look slow.
If you try the contention with thousands of goroutines on a high core count machine, there is a significant inflection point where channels start outperforming sync.Mutex
The reason is that sync.Mutex, if left to wait long enough will enter a slow code path and if memory serves, will call out to a kernel futex. The channel will not do this because the mutex that a channel is built with is exists in the go runtime - that's the special sauce the author is complaining doesn't exist but didn't try hard enough to seek it out.
Anecdotally, we have ~2m lines of Go and use channels extensively in a message passing style. We do not use channels to increment a shared number, because that's ridiculous and the author is disingenuous in their contrived example. No serious Go shop is using a channel for that.
n_u
Do you have any benchmarks for the pattern you described where channels are more efficient?
> sync.Mutex, if left to wait long enough will enter a slow code path and if memory serves, will call out to a kernel futex. The channel will not do this because the mutex that a channel is built with is exists in the go runtime
Do you have any more details about this? Why isn’t sync.Mutex implemented with that same mutex channels use?
> [we] use channels extensively in a message passing style. We do not use channels to increment a shared number
What is the rule of thumb your Go shop uses for when to use channels vs mutexes?
franticgecko3
> Do you have any benchmarks for the pattern you described where channels are more efficient?
https://go.dev/play/p/qXwMJoKxylT
go test -bench=.* -run=^$ -benchtime=1x
Since my critique of the OP is that it's a contrived example, I should mention so is this: the mutex version should be a sync.Atomic and the channel version should have one channel per goroutine if you were attempting to write a performant concurrent counter, both of those alternatives would have low or zero lock contention. In production code, I would be using sync.Atomic, of course.
On my 8c16t machine, the inflection point is around 2^14 goroutines - after which the mutex version becomes drastically slower; this is where I believe it starts frequently entering `lockSlow`. I encourage you to run this for yourself.
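For the impatient, the benchmark is roughly this shape (a reconstruction, not the exact playground code); save it as counter_test.go and run the command above:

    package main

    import (
        "sync"
        "testing"
    )

    const goroutines = 1 << 14 // around the inflection region on an 8c16t machine

    func BenchmarkMutexCounter(b *testing.B) {
        var mu sync.Mutex
        var n int
        for i := 0; i < b.N; i++ {
            var wg sync.WaitGroup
            for g := 0; g < goroutines; g++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    mu.Lock()
                    n++
                    mu.Unlock()
                }()
            }
            wg.Wait()
        }
        _ = n
    }

    func BenchmarkChannelCounter(b *testing.B) {
        for i := 0; i < b.N; i++ {
            ch := make(chan int, 1024)
            done := make(chan int)
            go func() { // a single goroutine owns the counter
                n := 0
                for v := range ch {
                    n += v
                }
                done <- n
            }()
            var wg sync.WaitGroup
            for g := 0; g < goroutines; g++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    ch <- 1
                }()
            }
            wg.Wait()
            close(ch)
            <-done
        }
    }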
> Do you have any more details about this? Why isn’t sync.Mutex implemented with that same mutex channels use?
Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment.
The channel mutex: https://go.dev/src/runtime/chan.go
Is not the same mutex as a sync.Mutex: https://go.dev/src/internal/sync/mutex.go
If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.
> What is the rule of thumb your Go shop uses for when to use channels vs mutexes?
Rule of thumb: if it feels like a Kafka use case but within the bounds of the local program, it's probably a good bet.
If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.
If you are using channels to protect shared memory, and you can squint and see a badly implemented Mutex or WaitGroup or Atomic; then you shouldn't be using channels.
Channels shine where goroutines are just pulling new work from a stream of work items. At least in my line of work, that is about 80% of the cases where a synchronization primitive is used.
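That 80% case, in a sketch: a fixed pool of workers pulling from a stream, with a strictly acyclic flow (producer -> workers -> consumer):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        work := make(chan int)
        results := make(chan int)

        var wg sync.WaitGroup
        for w := 0; w < 4; w++ { // fixed-size worker pool
            wg.Add(1)
            go func() {
                defer wg.Done()
                for item := range work { // pull until the producer closes the stream
                    results <- item * item
                }
            }()
        }

        go func() { // producer
            for i := 0; i < 10; i++ {
                work <- i
            }
            close(work)
        }()

        go func() { // close results once every worker has drained out
            wg.Wait()
            close(results)
        }()

        for r := range results {
            fmt.Println(r)
        }
    }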
n_u
Thanks for the example! I'll play around with it.
> On my 8c16t machine, the inflection point is around 2^14 goroutines - after which the mutex version becomes drastically slower;
How often are you reaching 2^14 (~16k) goroutines accessing a shared resource on a single process in production? We mostly use short-lived small AWS spot instances so I never see anything like that.
> Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment. > If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.
Haha fair enough, I also know little about mutex implementation details. Optimized specialized tool vs generic tool feels like a reasonable first guess.
Though I wonder: if you use channels for more generic mutex purposes, are they less efficient in those cases? I guess I'll have to do some benchmarking myself.
> If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.
I agree with your rules, I used to always use channels for single processt thread-safe queues (similar to your Kafka rule) but recently I ran into a cyclic communication pattern with a queue and eventually relented to using a Mutex. I wonder if there are other painful channel concurrency patterns lurking for me to waste time on.
chuckadams
> We do not use channels to increment a shared number, because that's ridiculous and the author is disingenuous in their contrived example. No serious Go shop is using a channel for that.
Talk about knocking down strawmen: it's a stand-in for shared state, and understanding that should be a minimum bar for serious discussion.
franticgecko3
And implying I don't understand toy examples and responding with this is apparently above the bar for serious discussion.
mrkeen
According to the article, channels are slow because they use mutexes under the hood. So it doesn't follow that channels are better than mutexes for large N. Or is the article wrong? Or my reasoning?
franticgecko3
I have replied to another comment with more details: the channel mutex is not the same one that sync.Mutex is using.
The article that the OP article references does not show the code for their benchmark, but I must assume it's not using a large number of goroutines.
om8
Channels are useful when they are really (rarely) needed. IMO the channel API should've been as ugly as the reflect API, so that it would only be considered in exceptional cases.
liendolucas
Putting aside this particular topic, I'm seeing posts talking negatively about the language. I got my feet wet with Go many many years ago and for unknown reasons I never kept digging on it, so...
Is it worth learning it? What problems are best solved with it?
jtolds
Author of the post here, I really like Go! It's my favorite language! It has absolutely nailed high concurrency programming in a way that other languages' solutions make me cringe to think through (await/async are so gross and unnecessary!)
If you are intending to do something that has multiple concurrent tasks ongoing at the same time, I would definitely reach for Go (and maybe be very careful or skip entirely using channels). I also would reach for Go if you intend to work with a large group of other software engineers. Go is rigid; when I first started programming I thought I wanted maximum flexibility, but Go brings uniformity to a group of engineers' output in a way that makes the overall team much more productive IMO.
Basically, I think Go is the best choice for server-side or backend programming, with an even stronger case when you're working with a team.
liendolucas
Thanks for the tip! Will definitely take into account your insights on channels if I decide to dive into it.
jtolds
I have written channel code in the last week. It's part of the deal (especially with the context package). I'm just happy to see them restrained.
I've been using Go since 2011. One year less than the author. Channels are bad. No prioritization. No combining with other synchronisation primitives without extra goroutines. In Go, there is no way to select on a variable number of channels (without more goroutines). The poor type system doesn't let you improve abstractions. Basically anywhere I see a channel in most people's code, particularly in the public interface, I know it's going to be buggy. And I've seen so many bugs. Lots of projects end up abandoned because they started with channels and never dug themselves out.
The lure to use channels is too strong for new users.
The nil and various strange shapes of channel methods aren't really a problem; they're just hard for newbs.
Channels in Go should really only be used for signalling, and only if you intend to use a select. They can also act as reducers or fan-out in certain cases. Very often in those cases you have a very specific buffer size, and you're still only using them to avoid adding extra goroutines and reverting to pure signalling.