How far neuroscience is from understanding brains (2023)
67 comments
March 12, 2025 · intrasight
minihat
This is exactly right.
I started a PhD in 2017 studying neuroscience + ML. I thought studying the brain would help me understand ANNs better. I was wrong. Ended up applying ML to analyzing EEG, MRI and similar.
bumby
Is this because we're misapplying the analogy to ML? I.e., in an effort to communicate and understand ANNs, we "pretend" it's like a brain. Just like before, when we used "file retrieval systems" to understand the brain, or said electricity is like "water in a pipe", both of which are also wrong. Analogies often only go so far, beyond which they do more harm than good.
hnuser123456
Turns out "water in a pipe" is actually surprisingly accurate, just that the "waves" are sloshing around at half the speed of light.
beezlebroxxxxxx
What you're describing is endemic across HN (and tech, tbh). Lots of people on here "know" computers/programming/CS very well. They, naturally, tend to use analogies to computers/programming/CS when trying to explain or "think out loud" in their comments. That's fine. It's what they know. The common problem arises when people forget they're analogizing and begin to see their analogy as ontologically and conceptually identical to the thing they were making an analogy for. This requires a certain amount of ego, echo chambers, and self-valorization, so that they never have to face the actual issues with these analogies.
But as many comments here have pointed out, studying neuroscience, for example, usually makes those analogies seem painfully inadequate. The same is true in philosophy of mind, for example.
erikerikson
I'm sure that there exist people who get lost in the analogies. Practitioners are generally not confused that ANNs are simplifications of the brain. The questions are which simplifications are most relevant and whether complexities can be added that yield better results. My own research was about reintroducing absolute location. In standard ANNs, location is relative within a graph model of the network. In the real brain, blood vessels and other macrostructures deliver materials used to grow and modify the neurons, and these affect the network based on physical location. In fact, by adding these back in we bypassed the XOR limitation (i.e. Minsky's result leading to backpropagation). Concretely, we observed learning of XOR over the inputs within a Hopfield network using Hebbian learning modulated by a spatially modulated trophic factor.
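To make the mechanism concrete, here is a toy sketch of spatially modulated Hebbian updates (in Python; the neuron positions, decay length, and learning rate are invented for illustration, and this does not reproduce the XOR result above, only the idea of distance-dependent weight changes):

    import numpy as np

    # Hypothetical sketch: Hebbian updates in a small Hopfield-style network,
    # with each weight change scaled by a "trophic" factor that decays with
    # the physical distance between the two neurons. All numbers are made up.
    rng = np.random.default_rng(0)
    n = 4
    positions = rng.uniform(0.0, 1.0, size=(n, 2))  # assumed 2-D locations
    W = np.zeros((n, n))
    eta = 0.1

    def trophic_factor(i, j, length_scale=0.5):
        # Modulation falls off with distance between neurons i and j.
        return np.exp(-np.linalg.norm(positions[i] - positions[j]) / length_scale)

    def hebbian_step(state):
        # One spatially modulated Hebbian update for a +/-1 state vector.
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i, j] += eta * trophic_factor(i, j) * state[i] * state[j]

    for pattern in ([1, -1, 1, -1], [1, 1, -1, -1]):
        hebbian_step(np.array(pattern))
    print(W)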
desdenova
Good thing there's not a lot of ML being done in Haskell... imagine a Brain Monad tutorial.
SkyBelow
Have we hit the limit of the analogy, or have we hit a limit in our understanding? Both neural networks and actual brains have behaviors that emerge from the interactions of smaller components. Neural networks have trivial connections compared to brains, but our understanding of the emergent behaviors seems very limited. To me, this is a sign not that the analogy has reached a breaking point, but that our tools aren't sufficient to work on even the trivial connections. I do expect the analogy will break at some point, but I'm not sure we have reached that point yet.
morkalork
Plus the overall network architecture was evolved over millions of years, is compressed in DNA and grows from cells multiplying and self-organizing.
leptoniscool
it looks like we can now actually use brains as computers: https://corticallabs.com/
dvh
"Human brain is too big to be understood with current technology, that's why scientists use simpler organisms to study brain. Flatworm's brain has 52 neurons. We have no idea how it works."
I read this a few years ago; how far have we come since?
xkcd-sucks
Just for cultural reasons, a roundworm's brain with 302 neurons (each one always in the same place spatially and topologically, I might add) is likely better understood. And we don't have a great idea of how it works. But the OpenWorm simulation project seems to be under active development: https://github.com/orgs/openworm/repositories
damnitbuilds
Think this is bad?
Psychology is worse, much worse, and yet is used around the world to make many, many life-changing or life-ending decisions.
mvieira38
Maybe off topic: I can't for the life of me figure out how therapy isn't a scam. CBT as proposed and studied in lab settings can't really be applied in the wild like most evidence based therapists try to, and the results of therapy I come across always seem extremely underwhelming. Maybe for some cases like severe phobias there is something to be solved, but do we really need to be paying thousands of dollars a year for a therapist to say "hey, maybe if you'd invest more in hobbies, social relationships and a healthy lifestyle you would feel better"?
pocket_cheese
I think therapy can be used very strategically. Maybe you want to end a friendship and don't know how to break it to them. Or maybe you don't like the way you are being treated by someone and you want to know effective ways of setting boundaries. Therapy can be super useful in giving you the tools to confront difficult emotional/psychological situations.
mtlmtlmtlmtl
>"hey, maybe if you'd invest more in hobbies, social relationships and a healthy lifestyle you would feel better"
"Just saying" the above is not what CBT is, though. I've had a lot of CBT and other kinds of therapy, and I've never had a therapist just tell me "so don't do that", or "do more x".
It's about analysing your own thinking (cognitive) and actions/habits (behavioral) in cases where those are keeping you stuck in a miserable situation, and identifying what you can do to change them.
Sure, if you put the most inanely over-abstracted construction on it like "just live healthier and you'll feel better", it sounds meaningless, but the implicit assumption there is that's something you can just decide to do, and succeed in. That's not the case for many people, and those people can benefit from CBT.
moi2388
It generally is. People like being heard. But usually they sort themselves out during the first course of therapy. If that fails, more therapy usually doesn't solve anything either.
As somebody who studies psychology: if a psychological study says A, it's more likely to actually be B. The quality of the research is much, much worse than people think.
bildung
> CBT as proposed and studied in lab settings can't really be applied in the wild like most evidence based therapists try to,
That part is wrong. Apart from the underlying inner workings being dissected in labs (the cognitive science part), the therapy part is well studied in the wild, and it works.
> and the results of therapy I come across always seem extremely underwhelming.
So you mean anecdotes? Of course you recognize the underwhelming results; the people for whom it worked well are less noticeable, by definition.
> Maybe for some cases like severe phobias there is something to be solved, but do we really need to be paying thousands of dollars a year
That you have to pay money has been a decision of the society you reside in. It doesn't have to be this way, and in fact it isn't in most industrialized countries.
> for a therapist to say "hey, maybe if you'd invest more in hobbies, social relationships and a healthy lifestyle you would feel better"?
That is a strawman, not CBT.
pixl97
Eh, therapy can vary widely in quality.
And people's lives can also vary in quality. If you're raised in a nest of narcissists, it is likely you'll have a very skewed view of relationships, and your idea of normal could be considerably different from the average and have negative effects on your social relationships. Sometimes you just need a third party to tell you that's not normal.
Coming from a family of narcissists myself, they quite adamantly stuck to the idea that therapy was a total scam. Though from observing their behaviors, it is my belief that they are really saying "Therapy is a total scam because they didn't tell me what I wanted to hear".
keybored
What’s the practical difference wrt. psychology (whether it can solve it) between the problem of not being able to get over a phobia and the problem of not being able to apply rationally simple life advice? Why is the former “something to be solved” but the latter one is not?
fragmede
You can only get out of therapy what you put into it, so if the only problems you are bringing to your therapist are a need for investment in hobbies, social relationships, and a healthy lifestyle, then that's all you're gonna get. Now, it's entirely possible that really is all you need, but for those stuck in traumatic response loops due to, e.g., PTSD from repressed memories of childhood, leading to, say, a string of failed relationships with toxic people who don't respect your boundaries because you didn't have a good model for them growing up, therapy can be quite useful for helping people heal from generational trauma and break the cycle. It sounds cliché to blame it all on a bad reaction to things that happened in childhood, but, well, we did all have one (unless you didn't; early parentification is an acknowledged failure mode), and even if the worst was emotionally stunted parents (as documented by the book Adult Children of Emotionally Immature Parents), a childhood that was totally fine on the surface may not have been.
Again, of course, if you are a well-adjusted adult, and no one in your life is clamoring for you to get therapy, then chances are either you've pushed away anyone and everyone who cares for you that deeply, or you don't need it, both of which are totally possible.
As for whether it's a scam? That word gets thrown around too easily these days. Do they cost a lot? Yes. Are you worth it? Also yes. That's not to say all therapists are equal; someone who's great for you might be terrible for the next person, and some of them are just bad. (There are also those that are quite good!) As there are with all things. Going into therapy with specific goals for things you want to work on (e.g. irrational anger at some specific thing that's ruining your life) tends to help it not be just an ongoing extra expense like a gym membership, if that's not what you want. At the end of the day, they're stuck in the system as much as you are, and paying them also makes it feel more balanced, in that they're being paid to listen to you go on and on and help you with your ish. You wouldn't/shouldn't treat your friends so one-sidedly. That said, it's entirely possible to pay to see a therapist and not get anything out of it. I will admit to that as a possible failure mode, but it's a failure mode, and there are certainly success stories out there of people turning their life around and becoming happy, well-adjusted adults.
jajko
Which part of psychology?
To me as a layman, getting interested in what makes people tick, how different issues manifest, and how a bad childhood (and especially a missing good father figure) affects people proved invaluable.
But I don't mean clinical psychology, university textbooks, etc., but rather easy-to-digest popular stuff from, e.g., Jordan Peterson (before he got addicted and crazy).
Understanding (ex)girlfriends, colleagues, parents, politicians, everybody, and especially myself. People can't shock or surprise me these days.
calepayson
To anyone interested in this article I highly recommend “The Cerebral Code” by William Calvin. (https://williamcalvin.com/bk9/index.htm)
It’s the only theory of how the brain works that I’ve come across that seems like it could be valid. Unfortunately, like other posters have already mentioned, neuroscience is incredibly complex and we just don’t have the tools to test it.
Even if it ends up being completely wrong, it’s a beautiful theory and well worth checking out!
n4r9
Thanks for the link. Have you come across Dennett's multiple drafts model of consciousness, and if so how would you say this compares with that? Scanning the intro section of your link suggests that perhaps Calvin is modelling at a lower-level, but I don't pretend to understand either very deeply.
> Even if it ends up being completely wrong, it’s a beautiful theory
Not just that, but I think it's vitally important to be able to demonstrate how much we can potentially explain. There are some people that will never accept a physicalist model, but I suspect there are many who will happily accept it once they conceive of the possibilities. At least, that's the process I went through when originally learning about evolution by natural selection.
calepayson
I love Dennett. I think he's an incredible philosopher, at least when it comes to biology, but philosophy always leaves me wanting the actual mechanisms.
On a high level, I think Calvin's theory is in the exact same vein as Dennett's. Seems like we could sum them both up by saying the brain is parallel and there is competition between thoughts?
What grabs me about Calvin’s work is he gets much closer to how this might work than anything else I’ve seen. Even cooler, I think he provides enough depth that we could potentially design a new architecture around his theory.
dleeftink
The cerebral code as an ensemble of signals, music. Or rather, music as metaphor of neuronal patterning. Gives another dimension to being 'in tune' with someone.
I think the clinical shift to focus on 'the individual' instead of the ensemble has overlooked the importance of inter-brain synchronisation, a process we are only starting to grapple with [0]. A feeling all too familiar to fellow musicians, and no doubt anyone involved in activities in which timing coordination is key.
[0]: https://www.sciencedirect.com/science/article/abs/pii/S01497...
calepayson
> Gives another dimension to being 'in tune' with someone.
Love this.
> I think the clinical shift to focus on 'the individual' instead of the ensemble has overlooked the importance of inter-brain synchronisation
My sense is this is also the case for our cultural story of neuroscience. We can run massive analyses of the brain and of which brain regions or neurons or transmitters are being used/activated, and it all boils down to something we can't conceptualize. Even the principal components seem to be too complex for us to grok.
With this barrage of complexity, I feel like we have a (bad but understandable) habit of reducing to the individual. We talk about the grandmother neuron or what region x is responsible for. We harp on the cognitive science of individuals, and we run experiments where we try to isolate them from "confounding factors" (other people).
I keep coming back to evolutionary theory as the middle ground between this over and under simplification of cognition.
dleeftink
I cannot find the source right now, but the past 20 years have seen a marked shift from individual to ensemble brain imaging, as individual brain responses (and by extension our imaging data) do not fully reflect or capture our natural state, which is often far less compartmentalised. It took some advocacy for the field to move towards multi-brain imaging, not least due to its technical challenges.
On a deeper level, I also think the prior discrepancy stems from how we perceive the world as individuals rather than as a collective, our research methods implicitly reflecting this bias. In regards to synchronisation, you might find the following interesting:
"Brain Waves Synchronize when People Interact" [0]: https://www.scientificamerican.com/article/brain-waves-synch...
alok-g
Am curious, how deep or detailed is this? Is there enough material there that someone could understand and write a program out of it to simulate or make something useful? (My sense looking at the book contents is that it isn't.) Thanks.
calepayson
It provides a clearer direction than any other theory of how the brain works that I have come across. But no, it's not going to be a simple project to implement. Maybe if you've got a strong background in ML you could hack something together?
Still, I can't recommend it highly enough. Act I is enough to grok the concept, and you can crush it in an evening.
teqsun
This is the main reason why I have skepticism towards any claims of imminent AGI.
mellosouls
I'm also sceptical but to be fair, we don't need to understand how the brain works for AGI, that's just one (obvious) path.
southernplaces7
Since we don't yet have anything remotely like AGI, and at the same time don't even really know how the brain works or what consciousness is aside from being aware that we feel it, neither you nor anybody else really knows whether our path to consciousness is just one of many. For all we know it might be the only one. There could be some very big unknown unknowns in those waters.
glenstein
I would say those are all pretty good observations, and perfectly true, but I think AGI is more plausible than it's ever been in light of what we've demonstrated via LLMs. Our understanding really is that limited, but I don't take that to be a counterpoint to the prospect of AGI.
Traubenfuchs
> what consciousness is
Probably an unavoidable property that emerges from the sheer and perverse complexity of the human brain, its tens of billions of neurons, trillions of connections, and the unimaginable amount of interaction between them, modulated by neurotransmitters + their reuptake, length, amount, location, quality and condition of pre/post-synaptic receptors, axons and other nano structures in the brain...
Comparing this disgusting moist, fleshy and electric masterpiece of nature with something primitive like a """neuronal""" network or LLM was always ridiculous.
JackFr
Who knows? Maybe it’ll turn out like flying and birds.
Studying birds gave us some data, but mimicking them wasn’t what got us jumbo jets.
southernplaces7
Sort of a shame... I always wanted to fly in an ornithopter.
srveale
Not trying to be sassy but what definition of AGI are you using? I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks." Depending on which tasks you include and what percentage of humans you need to beat, we could be already there or maybe never will be. Several of these tests [1] have been passed, some appear reasonably tractable. Like if Boston Dynamics cared about the Coffee Test I bet they could do it this year.
[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...
calepayson
> I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks."
I think you're pointing out a bit of a chicken vs. the egg situation here.
We have no idea how intelligence works and I expect this will be the case until we create it artificially. Because we have no idea how it works, we put out a variety of metrics that don't measure intelligence but approximate something that only an intelligent thing could do (we think). Then engineers optimize their ML systems for that task, we blow by the metric, and everyone is left feeling a bit disappointed by the fact that it still doesn't feel intelligent.
Neuroscience has plenty of theories for how the brain works but lacks the ability to validate them. It's incredibly difficult to look into a working brain (not to mention deeply unethical) with the necessary spatial and temporal resolution.
I suspect we'll solve the chicken vs. egg situation when someone builds an architecture around a neuroscience theory and it feels right or neuroscientists are able to find evidence for some specific ML architecture within the brain.
srveale
I get what you're saying, but I think "boiling frog" is more applicable than "chicken v egg."
You mention that people feel disappointed by ML systems because they don't feel intelligent. But I think that's just because they emerged one step at a time, and each marginal improvement doesn't blow your socks off. Personally, I'm amazed by a system that can answer PhD level questions across all disciplines, pass the Turing Test, walk me through DIY plumbing, etc etc, all at superhuman speed. Do we need neuroscience to progress before we call these things intelligent? People are polite to ChatGPT because it triggers social cues like a human. Some, for better or worse, get in full-blown relationships with an AI. Doesn't this mean that it "feels" right, at least for some?
We already know that among humans there are different kinds of intelligence. I'm reminded of the problem with standardized testing - kids can be monkeys or fish or iguanas and we evaluate tree climbing ability. We're making the same mistake by evaluating computer intelligence using human benchmarks. Put another way: it's extremely vain to say a system needs to be human-like in order to be called intelligent. Like if aliens visited us with incomprehensibly advanced technology we'd be forced to conclude they were intelligent, despite knowing absolutely nothing about how their intelligence works. To me that's proof by (hypothetical) example that we can call something intelligent based on capability, not at all conditional on internal mechanism.
Of course that's just my two cents. Without a strict definition of AGI there's no way to achieve it, and right now everyone is free to define it how they want. I can see the argument that to define AGI you have to first understand I (heh), but I think that's putting an unfair boundary around the conversation.
prismatix
If you find this interesting, the Theories of Everything podcast with Curt Jaimungal does a good job exploring this topic. The neuroscience-centered conversations focus mostly on consciousness, but they still discuss similar problems with measuring and explaining how consciousness comes about from a collection of matter.
bena
I would think a large part of the problem is the difficulty in observing a working brain.
There are very few non-destructive ways we can get a look into a brain while it is living. And even research into ways we can look into a living brain is hindered by the fact we don't want to harm the person being observed.
I could see that making progress a lot slower
mbauman
The even bigger challenge is determining _what_ you need to observe in the first place.
As a simplistic analogy, evolutionary designs of FPGA boards can end up relying upon idiosyncratic properties of the board(s) and create circuits that "shouldn't work" based on an idealized electrical circuit model. And they may not be transferable to other boards. In other words, to "understand" some evolutionary FPGA circuits, you need to "observe" more than just the gate configurations and idealized schematic.
Brains are not FPGAs or even circuits, but I think the analogy holds. They're not _just_ idealized representations of spiking neural networks.
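For context, the evolutionary loop behind such experiments is conceptually simple; it is the fitness measurement on real hardware that lets board-specific physics creep in. A toy sketch in Python (the bitstring genome and the placeholder fitness function are stand-ins; an actual run would configure and measure a physical board):

    import random

    # Toy sketch of intrinsic hardware evolution: a bitstring stands in for an
    # FPGA configuration and evaluate() is a placeholder for measuring a real
    # board. In the actual experiments the fitness came from physical hardware,
    # which is why evolved circuits could exploit board-specific quirks.
    GENOME_LEN = 64
    POP_SIZE = 30
    MUT_RATE = 0.02

    def evaluate(genome):
        # Placeholder fitness: count of set bits ("one-max"). A real run would
        # download the configuration to a board and measure its behaviour.
        return sum(genome)

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:POP_SIZE // 2]          # truncation selection
        offspring = [mutate(random.choice(parents))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring

    print(evaluate(max(population, key=evaluate)))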
NoxiousPluK
I remember reading an article about this years ago: a non-transferable FPGA configuration that had some logic that should have been unreachable but didn't work without it. Very fascinating, but I've not been able to find it since.
nick__m
I think you want the paper "An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics", available here: https://cgi.cse.unsw.edu.au/~cs4601/refs/papers/es97thompson...
seejayjordan
Look up Wada Test, where we shut down each hemisphere (one at a time) of the brain, while the patient is awake, to see how their personality changes without 1/2 their brain. Also stereo EEG. 200+ sensors placed in/on/around the brain, patient stays in bed for a week measuring brain activity. Here's the kicker... neurologists can plug into the brain and send teeny bits of electricity to each of the 200+ sensors individually to see what happens (typically this is for identifying seizure spots). Wish I didn't have to know this, but my partner is very epileptic, and has an RNS device implanted in her head. We upload her brain activity daily!
Gooblebrai
According to Wikipedia, personality barely changes, and the test is mainly done to detect which side is in charge of language and memory.
tim333
I agree, and think the way forward is computer simulation. Figure out a simulation equivalent of a neuron, put a lot of them together, and see how well the simulated brain portion matches the real thing. That hasn't really been possible due to the lack of computing power, but we are getting to where it could be. They have of course done stuff with "artificial neural networks" of the ChatGPT type, but those are very far from biology.
DeepMind has been doing work along those lines with 'AI BRAIN' https://medium.com/@daneallist/unlocks-secrets-of-real-brain...
renewedrebecca
How do you simulate something you don't understand?
The basic problem here is that we don't understand what a neuron does, so how would we model it?
KineticLensman
> The basic problem here is that we don't understand what a neuron does, so how would we model it?
We understand lots of aspects of neurons and can model those. See the wikipedia page for some examples of well-understood principles.
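For example, the leaky integrate-and-fire model captures a neuron's membrane dynamics with a single equation. A minimal sketch in Python (parameter values are illustrative, not fitted to any data):

    # Minimal leaky integrate-and-fire neuron: the membrane potential decays
    # toward rest, integrates an input current, and fires when it crosses a
    # threshold. Parameter values are illustrative, not fitted to data.
    dt, t_max = 0.1, 100.0                         # time step and duration (ms)
    tau_m, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0   # ms, mV
    r_m, i_ext = 10.0, 2.0                         # resistance (MOhm), current (nA)

    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        dv = (-(v - v_rest) + r_m * i_ext) / tau_m # membrane equation
        v += dv * dt
        if v >= v_thresh:                          # spike and reset
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes in {t_max:.0f} ms")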
jampekka
> And even research into ways we can look into a living brain is hindered by the fact we don't want to harm the person being observed.
Animal brains are harmed for research all the time. Some species, like zebrafish, can even be imaged in real time at the neuronal level.
As the article says, the problem is really that there are no good foundational ideas of what are the basic operating principles of the brain.
bena
I did mention people specifically. Mammalian brains are considerably more complex than piscine brains.
People are important because we can't tell a fish to perform an action or to think of an elephant, etc. So while we can image, we can't know what correlates with a high degree of certainty.
But even in other cases, a working brain is an entirely different object than a non-working one. Finding those foundational ideas and basic operating principles is hard because we can't test a living brain. It's almost chicken/egg. Being able to experiment on a living human brain would probably give us much better ideas of the basic operating principles. And once we know those principles, we could probably move away from experimenting on living human brains.
Like, we can dick around with muscle. We can force it to perform as if it were living and receiving signals. And that's partly because you can cut open a living person and watch those muscles operate. You can manipulate those muscles in a living person. You can measure things, etc. It's much harder to do that with a brain, because it's incredibly easy to make irredeemable mistakes.
glenstein
>So while we can image, we can't know what correlates with a high degree of certainty.
I don't think I agree on just this piece, though you are making important points here. We can know, for instance, about visual or spatial awareness, reactions to certain stimuli or objects introduced in their environment, and so on.
You're not wrong on the importance of learning from active brain functions but I think we can at least make meaningful inroads with animals.
> Finding those foundational ideas and basic operating principles
I know a lot of people talk this way about brains and consciousness, that we need to first have our definitional elements before we proceed with research. But I think closing in on those elements is often itself the object of research, pursued while gathering all kinds of functional and theoretical data from the mass of complicated noise that is brain structure and activity. We can build out correlations, mechanisms, and so on, and close in on foundational elements.
It may even be the case that foundational organizing principles turn our understanding inside out in important ways, but I have never understood the idea that we are stuck if we can't start with such things in hand.
jampekka
Animals are quite easy to get to do all sorts of experiments, and given the very similar anatomy it seems very likely that the basic operating principles are the same across species. If we can't figure out how a fruit fly's nervous system works, we have no hope of figuring out the human brain.
imchillyb
We are just now uncovering an underlying principle of science: ‘everything is a wave.’
Neurological connective tissues and their corresponding signals may be more of a symptom of electrical wave action rather than a signaling nexus.
Everything is a wave. Everything is a probabilistic wave form with probabilistic outcomes.
Once science understands these waves and their myriad forms, we may have a greater understanding of our own wires and grey matter signaling.
guappa
We understand gravity and the 3 body problem still exists…
0x1ceb00da
We just don't have a general solution of the 3 body problem in the form of equations. Practically speaking it's a solved problem because we can simulate the motion. We could do the same with brain. That's basically what neural networks are.
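To illustrate the kind of simulation meant here, a toy three-body integration in Python with made-up initial conditions (real work would use a higher-order or adaptive integrator, but the principle is the same: step the equations of motion forward numerically because no general closed-form solution exists):

    import numpy as np

    # Toy three-body simulation with semi-implicit Euler steps (G = 1, unit
    # masses, made-up planar initial conditions). There is no general
    # closed-form solution, so the motion is stepped forward numerically.
    pos = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
    vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.3, 0.0]])
    dt = 0.001

    def accelerations(p):
        a = np.zeros_like(p)
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = p[j] - p[i]
                    a[i] += r / np.linalg.norm(r) ** 3  # Newtonian gravity, G = m = 1
        return a

    for _ in range(10_000):
        vel += accelerations(pos) * dt
        pos += vel * dt

    print(pos)  # positions after 10 time units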
willy_k
We can simulate the three body problem because all of the significant parts of the problem can be known. This is FAR from the case when it comes to the brain.
I took a break from artificial networks to study neuroscience for a couple years. We hardly understand how a single synapse works - and we have trillions of them.