
What Is Intelligence? (2024)


30 comments

October 25, 2025

jandrewrogers

This discussion is not complete without a mention of Marcus Hutter’s seminal book[0] “Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability”. It provides many of the formalisms upon which metrics of intelligence are based. The gaps in current AI tech are pretty explainable in this context.

[0] https://www.hutter1.net/ai/uaibook.htm

visarga

This book lines up with a lot of what I've been thinking: the centrality of prediction, how intelligence needs distributed social structure, language as compression, why isolated systems can't crack general intelligence.

But there are real splits on substrate dependence and what actually drives the system. Can you get intelligence from pure prediction, or does it need the pressure of real consequences? And deeper: can it emerge from computational principles alone, or does it require specific environmental embeddedness?

My sense is that execution cost drives everything. You have to pay back what you spend, which forces learning and competent action. In biological or social systems you're also supporting the next generation of agents, so intelligence becomes efficient search because there's economic pressure all the way down. The social bootstrapping isn't decorative, it's structural.

I also posted yesterday a related post on HN

> What the Dumpster Teaches: https://news.ycombinator.com/item?id=45698854

aradox66

Don't "real" consequences apply for setting weights? There's an actual monetary cost to train these models, and they have to actually perform to keep getting trained. Sure it's VC spend right now and not like, biological reproduction driving the incentives ultimately, but it's not outside the same structure.

pols45

Depending on the time horizon the predictions change. So we get layers - what is going to happen in the next hour/tomorrow/next year/next 10 years/next 100 etc (and layers of compression of which language is just one) and that naturally produces contradictions which creates bounds on "intelligence".

It really is a stupid system. No one rational wants to hear that, just like no one religious wants to hear contradictions in their stories, or no one who plays chess wants to hear it's a stupid game. The only thing that can be said about chimp intelligence is that it has developed a hatred of contradictions, unpredictability, and lack of control unseen in trees, frogs, ants, and microbes.

Stories become central to surviving such underlying machinery. Part of the story we tell is: no, no, we don't all have to be Kant or Einstein, because we just absorb what they uncovered. So apparently the group or social structure matters. Which is another layer of pure hallucination. All social structures, if they increase the prediction horizon, also generate/expose themselves to more prediction errors and contradictions, not fewer.

So again, coherence at the group level is produced through story: religion will save us, the law will save us, Trump will save us, the Jedi will save us, AI will save us, etc. We then build walls and armies to protect ourselves from each other's stories. Microbes don't do this. They do the opposite and have produced the Krebs cycle, photosynthesis, CRISPR, etc. No intelligence. No organization.

Our intelligence is just a set of bubbling cauldrons, at the individual and social level, through which info passes and mutates. Info that survives is info that can survive that machinery. And as info explodes, the coherence-stabilization process is overrun. Stories have to be written faster than stories can be written.

So Donald Trump is president. A product of "intelligence" and social "intelligence". Meanwhile more microbes exist than stars in the universe. No Trump or ICE or Church or data center is required to keep them alive.

If we are going to tell a story about intelligence, look to Pixar or WWE. Don't ask anyone at MIT what they think about it.

sadid

The MIT vs. WWE contrast feels like a false dichotomy. MIT represents systematic, externalized intelligence (structured, formal, reductive, predictive). WWE or Pixar represent narrative and emotional intelligence. We do need both.

Also, evolution is the original information-processing engine, and humans still run on it just like microbes. The difference is just the clock speed. Our intelligence, though chaotic and unstable, operates on radically faster time and complexity scales. It's an accelerator that runs in days and months instead of generations. The instability isn't a flaw: it's the turbulence of much faster adaptation.

sadid

It's hard not to see consciousness (whatever that actually is) lurking under everything you just explained. If it's emergent, the substrate wars might be mere detail; if it's not, maybe silicon never gets a soul.

wppick

> It has come as a shock to some AI researchers that a large neural net that predicts next words seems to produce a system with general intelligence

When I write prompts, I've stopped thinking of LLMs as just predicting the next word, and instead think of them as a logical model built up by combining the logic of all the text they've seen. I think of the LLM as knowing that cats don't lay eggs, so when I ask it to finish the sentence "cats lay ..." it won't generate the word "eggs", even though "eggs" probably follows "lay" frequently.

godelski

  > It won't generate the word eggs even though eggs probably comes after lay frequently
Even a simple N-gram model won't predict "eggs". You're misunderstanding by oversimplifying.

Next-token prediction is still context-based. It does not depend only on the previous token, but on the previous (N-1) tokens. You have "cat" in the context, so you should get words like "down" instead of "eggs" even with a 3-gram (trigram) model.
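
A minimal sketch of that point, as a toy trigram model over a hypothetical miniature corpus (illustrative only, not any particular library):

  # Toy trigram ("3-gram") model: predict the next word from the previous two.
  # The corpus below is a made-up example purely for illustration.
  from collections import Counter, defaultdict

  corpus = [
      "cats lay down on the couch",
      "cats lay around all day",
      "chickens lay eggs in the barn",
      "chickens lay eggs every morning",
  ]

  counts = defaultdict(Counter)
  for sentence in corpus:
      words = sentence.split()
      for w1, w2, w3 in zip(words, words[1:], words[2:]):
          counts[(w1, w2)][w3] += 1

  def predict(w1, w2):
      """Most frequent word following the (w1, w2) context."""
      following = counts[(w1, w2)]
      return following.most_common(1)[0][0] if following else None

  print(predict("cats", "lay"))      # -> "down" (or "around"), never "eggs"
  print(predict("chickens", "lay"))  # -> "eggs"

Even this trivial counting model conditions on "cats", which is all it takes to avoid "eggs" here; an LLM conditions on vastly more context than two tokens.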

devmor

No, your original understanding was the more correct one. There is absolutely zero logic to be found inside an LLM, other than coincidentally.

What you are seeing is a semi-randomized prediction engine. It does not "know" things, it only shows you an approximation of what a completion of its system prompt and your prompt combined would look like, when extrapolated from its training corpus.

What you've mistaken for a "logical model" is simply a large amount of repeated information. To show the difference between this and logic, you need only look at something like the "seahorse emoji" case.

Philpax

No, their revised understanding is more accurate. The model has internal representations of concepts; the seahorse emoji fails because it uses those representations and stumbles: https://vgel.me/posts/seahorse/

aeternum

Word2vec can/could also do the seahorse thing. It at least seems like there's more to what humans consider a concept than a direction in a vector space model (but maybe not).

https://www.analyticsvidhya.com/blog/2021/07/word2vec-for-wo...
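
For what it's worth, the "concept as a direction" idea is easy to poke at directly with pretrained word vectors; a rough sketch using gensim's downloader (the model name and word choices here are just illustrative assumptions):

  # Sketch: treating a "concept" as a direction/point in an embedding space.
  # Uses gensim's pretrained GloVe vectors (downloaded on first run).
  import gensim.downloader as api

  wv = api.load("glove-wiki-gigaword-50")  # small pretrained word vectors

  # Classic analogy arithmetic: king - man + woman is near queen.
  print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

  # Nearest neighbours of a composed vector with no single word of its own,
  # loosely analogous to asking for an emoji that doesn't exist.
  print(wv.most_similar(positive=["sea", "horse"], topn=5))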

krackers

> There is absolutely zero logic to be found inside an LLM

Surely trained neural networks could never develop circuits that implement actual logic via computational graphs...

https://transformer-circuits.pub/2025/attribution-graphs/met...
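
Whether or not one calls it "logic", a computational graph of weighted sums and thresholds can encode boolean functions exactly. A hand-wired toy example (not from the linked paper; the weights are set by hand rather than learned):

  # Toy illustration: a two-layer network whose weights compute XOR exactly,
  # showing that a graph of sums + thresholds can implement discrete logic.
  import numpy as np

  def step(x):
      return (x > 0).astype(float)  # hard threshold nonlinearity

  # Hidden layer: unit 0 fires on OR(a, b), unit 1 fires on AND(a, b).
  W1 = np.array([[1.0, 1.0],
                 [1.0, 1.0]])
  b1 = np.array([-0.5, -1.5])

  # Output: OR minus AND, i.e. XOR.
  W2 = np.array([1.0, -1.0])
  b2 = -0.5

  def xor(a, b):
      h = step(np.array([a, b]) @ W1 + b1)
      return step(h @ W2 + b2)

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, int(xor(a, b)))  # prints the XOR truth table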

godelski

You're both using two different definitions of the word "logic". Both are correct usages, but have different contexts.

typon

I often wonder whether doing neuroscience on LLMs is harder than doing it on humans.

alyxya

Intelligence is whatever we consider ourselves capable of. It turns out that computers are increasingly able to do whatever we can do. Maybe the only thing we can do is advanced pattern matching, but we didn't think of our intelligence that way before.

vlovich123

Humans seem to be able to invent interesting questions about the unknown, figure out techniques for answering those questions, and then systematically attack them. This is why LLMs generally can't do unsupervised research or novel high-level engineering by themselves. They're getting closer and closer in some ways, and in others they remain quite lacking.

The other thing is their inability to intelligently forget and their inability to correctly manage their own context by building their own tools (some of which is labs intentionally crippling how they build AI to avoid an AI escape).

I don't think there's anything novel in human intelligence, as a good chunk of it appears in more primitive forms in other animals (primates, elephants, dolphins, cephalopods). But our intelligence is on hyperdrive because we also have the added abilities of written language and tool building.

johanam

the whole book is available for free here: https://whatisintelligence.antikythera.org/

dang

Thanks! We've changed the top URL to that from https://mitpress.mit.edu/9780262049955/what-is-intelligence/.

geor9e

I'm so confused why a $36.95 purchase page is a Hacker News headline, especially when your link is clearly what they should have used.

bsenftner

Until there is a formal and accepted definitive distinction between intelligence, comprehension, memory, and action, all these opinions are just stabs in the dark. We've not defined the scene yet. We currently do not have artificial comprehension; that's what occurs, sort of, during training. The intelligence everyone claims to see is a pre-calculated idiot savant. If you knew it was all a pre-calculated domino cascade, would you still say it's intelligent?

visarga

Execute actions and cognition that pay back the cost of said actions, and support the next generation. No intelligence can appear outside social bootstrapping; it always needs someone to pay the initial costs. So the cost of execution drives a need for efficiency, which is intelligence.

bsenftner

Current AIs cannot comprehend on the fly, meaning that if they are presented with data outside of their training, the reply generated will be a hallucination interpolated from the training data into unknown output. Yet a person in possession of comprehension can go beyond their training, on the fly, and that is how humans learn. AIs cannot do that, which is critical.

jimt1234

For years, I've taken the position that intelligence is best expressed as creativity - that is, the ability to come up with something that isn't predictable based on current data. Today's "artificial intelligence" analyzes words (tokens) based on an input (prompt) to come up with an output. It's predictable. It's fast. But, imho, it lacks creativity, and therefore lacks intelligence.

One example of this I often ponder is the boxing style of Muhammad Ali, specifically punching while moving backwards. Before Ali, no one punched while moving away from their opponent. All boxing data said this was a weak position, time for defense, not for punching (offense). Ali flipped it. He used to do miles of roadwork, throwing punches while running backwards to train himself on this style. People thought he was crazy, but it worked, and, imho, it was extremely creative (in the context of boxing), and therefore intelligent.

Did data exist that could've been analyzed (by an AI system) to come up with this boxing style? Perhaps. Kung Fu fighting styles have long known about using your opponent's momentum against them. However, I think that data (Kung Fu fighting styles) would've been diluted and ignored in the face of the mountains of traditional boxing data, which all said not to punch while moving backwards.

godelski

There are lots of opinions on what intelligence is, but I notice a lot of people do not read much about it. You don't have to agree with others, but there is a reason a precise and formal definition has been so hard to develop. People offer many simple explanations, yet if it were simple, we'd have the definition. All you end up doing is blocking yourself from learning even more.

I'll also add that a lot of people really binarize things. Although there is no precise and formal definition, that does not mean there aren't useful ones, and ones that are being refined. Progress has been made not only in the last millennium, but in the last hundred years, and even the last decade. I'm not sure why so many are quick to be dismissive. The definition of life has issues, and people are not so passionate about calling it just a stab in the dark. Let your passion to criticize something be proportional to your passion to learn about that subject. Complaints are easy, but complaints aren't critiques.

That said, there's a lot of work in animal intelligence and neuroscience that sheds light on the subject, especially in primate intelligence. There are so many mysteries here and subtle things that have surprising depth. It really is worth exploring. Frans de Waal has some fascinating books on chimps. And hey, part of what is so interesting is that you have to take a deep look at yourself and how others view you. Take, for example, you reading this text. Break it down to atomic units. You'll probably be surprised at how complicated it is. Do you have a parallel process vocalizing my words? Do you have a parallel process spawning responses or quips? What is generating those? What are the biases? Such a simple everyday thing requires some pretty sophisticated software. If you really think you could write that program, I think you're probably fooling yourself. But hey, maybe you're just more intelligent than me (or maybe less, since that too is another way to achieve the same outcome lol).

Razengan

Not the best place to ask that :)

analog8374

Sharing the same reality-narrative as me. Behaving in a way that expresses that narrative intelligibly to me.

pkoird

That would be something that is intelligent to you. I believe the author (or anyone in general) should be focused on mining what intelligence objectively is.

Art9681

The best we will ever do is create a model of intelligence that meets some universal criterion for "good enough", but it will most certainly never be an objective definition of intelligence, since it is impossible to measure the system we exist in objectively without affecting the system itself. We will only ever have "intelligence as defined by N", but not "intelligence".

analog8374

Pah. Objective. Ain't no such thing.
