
How the Brain Parses Language

19 comments

December 8, 2025

adamzwasserman

There's an interesting falsifiable prediction lurking here. If the language network is essentially a parser/decoder that exploits statistical regularities in language structure, then languages with richer morphological marking (more redundant grammatical signals) should be "easier" to parse — the structure is more explicitly marked in the signal itself.

French has obligatory subject-verb agreement, gender marking on articles/adjectives, and rich verbal morphology. English has largely shed these. If you trained identical neural networks on French vs English corpora, holding everything else constant, you might expect French models to hit certain capability thresholds earlier — not because of anything about the network, but because the language itself carries more redundant structural information per token.
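To make that concrete, here is a minimal sketch (my illustration, not the preregistered protocol; the corpus file names are placeholders). If the premise holds, a matched French corpus should show lower per-token conditional entropy than an English one even under a crude bigram model, i.e. each token should be more predictable from its predecessor:

    # Hedged sketch: compare how predictable tokens are from local context
    # in two corpora. "fr_corpus.txt" / "en_corpus.txt" are placeholder
    # paths, not files from the actual experiment.
    import math
    from collections import Counter

    def bigram_conditional_entropy(tokens):
        """Estimate H(next token | current token) in bits from bigram counts."""
        firsts = Counter(tokens[:-1])
        pairs = Counter(zip(tokens[:-1], tokens[1:]))
        total = len(tokens) - 1
        h = 0.0
        for (a, b), count in pairs.items():
            p_pair = count / total      # joint probability P(a, b)
            p_cond = count / firsts[a]  # conditional probability P(b | a)
            h -= p_pair * math.log2(p_cond)
        return h

    for name, path in [("french", "fr_corpus.txt"), ("english", "en_corpus.txt")]:
        tokens = open(path, encoding="utf-8").read().split()
        print(f"{name}: {bigram_conditional_entropy(tokens):.3f} bits/token")

A bigram model is of course far weaker than an LLM, but it is enough to check whether the raw signal really carries more redundant structure per token before spending compute on full training runs.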

This would support Fedorenko's view that the language network is revealing structure already present in language, rather than constructing it. The "LLM in your head" isn't doing the thinking — it's a lookup/decode system optimized for whatever linguistic code you learned.

(Disclosure: I'm running this exact experiment. Preregistration: https://osf.io/sj48b)

liampulles

What I'm curious about is what the language parts of the human brain look like in babies and toddlers. Humans obviously speak a great many languages, and toddlers pick up whichever language their guardians speak around the home, so there seems to be machinery there dedicated to the task of "online" learning.

Anon84

Me too! Babies’ and toddlers’ brains are like sponges. We started teaching my baby three languages from birth (essentially, I have always spoken with her in my native language, my wife in hers, and she gets English from living in the US). She’s not even 4 yet and is fully fluent in all three, seamlessly jumping back and forth between them. (To my surprise, she doesn’t mix words from the different languages in the same sentence.)

lapcat

I think this quote may speak to the question:

> The brain’s general object-recognition machinery is at the same level of abstractness as the language network. It’s not so different from some higher-level visual areas such as the inferotemporal cortex storing bits of object shapes, or the fusiform face area storing a basic face template.

In other words, it sounds like the brain may start with the same basic methods of pattern matching for many different contexts, but then different areas of the brain specialize in looking for patterns in specific contexts such as vision or language.

This seems to align with the research of Jenny Saffran, for example, who has studied how babies recognize language, arguing that this is largely statistical pattern matching.
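The core of that statistical-learning claim is easy to demonstrate. Below is a toy sketch (my reconstruction of the idea behind Saffran, Aslin and Newport's 1996 experiment, with made-up nonsense words rather than their actual materials): transitional probabilities between syllables are high inside words and drop at word boundaries, so an unsegmented stream can be segmented from statistics alone.

    # Toy demonstration: word boundaries in a continuous syllable stream
    # are detectable purely from transitional probabilities.
    import random
    from collections import Counter

    random.seed(0)
    words = ["bidaku", "padoti", "golabu"]  # invented trisyllabic "words"

    def syllables(word):
        return [word[i:i + 2] for i in range(0, len(word), 2)]

    # Build a continuous stream with no pauses or boundary markers.
    stream = []
    for _ in range(300):
        stream.extend(syllables(random.choice(words)))

    firsts = Counter(stream[:-1])
    pairs = Counter(zip(stream, stream[1:]))

    def tp(a, b):
        """Transitional probability P(b | a)."""
        return pairs[(a, b)] / firsts[a]

    print("within word  (bi -> da):", round(tp("bi", "da"), 2))  # ~1.0
    print("across words (ku -> pa):", round(tp("ku", "pa"), 2))  # ~0.33

The only information in the stream is its statistics, yet the boundary falls out: "bi" is always followed by "da", while "ku" is followed by each possible word onset only about a third of the time. That is the kind of cue Saffran argues infants exploit.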

mullsork

Some of her research on this topic is covered in the Netflix series Babies, season 1, episode 4, "First Words."

netfortius

Every time I read something like this, it reminds me of Maturana (of autopoiesis fame), one of the first scientists through whom I started gaining an interest in these areas. Relevant to his view of language is the following:

"We human beings are living systems that exist in language. This means that although we exist as human beings in language and although our cognitive domains (domains of adequate actions) as such take place in the domain of languaging, our languaging takes place through our operation as living systems. Accordingly, in what follows I shall consider what takes place in language[,] as language arises as a biological phenomenon from the operation of living systems in recurrent interactions with conservation of organization and adaptation through their co-ontogenic structural drift, and thus show language as a consequence of the same mechanism that explains the phenomena of cognition:"

alfanick

Anecdotal data, based on a sample of 1 (aka me). I'm originally Polish, but I would say my mother tongue is English. I also learned Latin as a kid/teen. After that, learning any other language is much easier: I also learned German and some Swiss German dialects, and I can get by in Spanish, Italian, French, Dutch, Czech, and some Serbo-Croatian. I think being Polish makes learning languages easy, as we have a lot of constructions in Polish that do not translate easily to other languages. In my case, I think it's the same part of the brain that processes both human language and computer language. My brain can also do another fun party trick: I never learned Cyrillic, but I can read it just fine; my brain does something like pattern matching and statistical analysis when reading it.

I also learned to think in, hmm, "concepts", and then apply a language of my choice to express them. It's a fun skill to have :) Obviously the works of Chomsky are great here, especially his exploration of whether language shapes the mind or the mind shapes language (let's set aside his rather controversial recent political views).

Tor3

I speak several languages too, though definitely not as many as you do. I'm also in the process of learning a completely new one, at an advanced age relative to when I last learned one (I was in my thirties then). To me, my brain most definitely doesn't process human language the way it handles computer language. It's about as different as it can get. The latter is "learning"; the former is "burn patterns into the brain", and learning a language can take years, at least at this age. Computer languages? Those can be picked up in as little as a weekend, and getting proficient isn't a multi-year or decade-long process. It feels totally different to me (I've been learning new computer languages at the same time as I've been trying to get up to speed with a new human language).

tcsenpai

> But what if our neurobiological reality includes a system that behaves something like an LLM?

It almost seems like we got inspiration from our brain to build neural networks!

seanmcdirmid

It isn’t clear, though. Neural networks were inspired by the brain, but transformers? It is totally plausible, but do we really think just in words?

dr_dshiv

> It almost sounds like you’re saying there’s essentially an LLM inside everyone’s brain. Is that what you’re saying?

>Pretty much. I think the language network is very similar in many ways to early LLMs, which learn the regularities of language and how words relate to each other. It’s not so hard to imagine, right?

Yet this completely glosses over the role of rhythm in parsing language. LLMs aren’t rhythmic at all, are they? Maybe each token production is a cycle, though… hmm…

GolDDranks

I think it's obvious that she means it's something _like_ an LLM in some respects. You are correct that rhythm and intonation are very important in parsing language (and also an important cue when learning how to parse it!). It's clear that the human language network is not like an LLM in that sense. However, it _is_ a bit like an _early_ LLM (remember GPT-2?) in the sense that it can produce and parse language without necessarily making much deeper sense of it.

tgv

However... language production and perception are quite separate in our heads. There's basically no parallel to that in LLMs. Note that the article doesn't give one, and is extremely vague about the biological underpinnings of language.

qqxufo1

If the brain's language network is only for "packaging words" and not for actual logic or reasoning, why does writing or speaking our messy thoughts out loud suddenly make them feel more logical? Is language actually helping us think, or is it just a filter that forces our chaotic ideas into a structure we can finally understand?

moralIsYouLie

reads like a collection of HN comments by commenters who like to build "chapter 1" textbook agents using instant-noodle "training tools". "and what would be the time complexity?"

I can't do this anymore.

Al-Khwarizmi

Ev Fedorenko is a highly recognized cognitive scientist who has been studying how humans parse language for years.

Of course this doesn't mean one shouldn't question what she says (that would be an obvious appeal to authority), but I do think it's fair to say that if you want to question it, the argument should be more elaborate than "this sounds like she has no idea of the topic".

Timwi

I'm not the person you responded to, but I found the article unreadable because it kept going on about Ev’s life instead of her research. I'm sure her research is valuable and insightful, but with this style of reporting it is both inaccessible to me and gives me the (probably flawed) impression that her research isn't supposed to be the important or impressive part of her life.

lapcat

I wouldn't read too much into the LLM analogy. The interview is disappointingly short and filled with unnecessarily tall photographs, and the interviewer, who brought up LLMs and ChatGPT and has a history of writing AI articles (https://www.quantamagazine.org/authors/john-pavlus/), almost seemed to have an agenda to contextualize the research this way. In general, except in a hostile context such as politics, interviewees tend to be agreeable and cooperative with interviewers, which means that interviews can be steered in a predetermined direction, probably for clickbait here.

In any case, there's a key disanalogy:

> Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess).

adamzwasserman

The disanalogy you quote might actually be the key insight. What if language operates at two levels, like Kahneman's System 1/2?

Level 1: Nearly autonomic — pattern-matched language that acts directly on the nervous system. Evidence: how insults land before you "process" them, how fluent speakers produce speech faster than conscious deliberation allows, and the entire body of work on hypnotic suggestion, which relies on language bypassing conscious evaluation entirely.

Level 2: The conscious formulation you describe — the translator between perception and meaning.

LLMs might be decent models of Level 1 but have nothing corresponding to Level 2. Fedorenko's "glorified parser" could be the Level 1 system.