
Secondary school maths showing that AI systems don't think

IgorPartola

I feel like these conversations really miss the mark: whether an LLM thinks or not is not a relevant question. It is a bit like asking “what color is an X-ray?” or “what does the number 7 taste like?”

The reason I say this is because an LLM is not a complete self-contained thing if you want to compare it to a human being. It is a building block. Your brain thinks. Your prefrontal cortex however is not a complete system and if you somehow managed to extract it and wire it up to a serial terminal I suspect you’d be pretty disappointed in what it would be capable of on its own.

I want to be clear that I am not making an argument that once we hook up sensory inputs and motion outputs as well as motivations, fears, anxieties, desires, pain and pleasure centers, memory systems, sense of time, balance, fatigue, etc. to an LLM that we would get a thinking feeling conscious being. I suspect it would take something more sophisticated than an LLM. But my point is that even if an LLM was that building block, I don’t think the question of whether it is capable of thought is the right question.

causal

I love the idea of educating students on the math behind AI to demystify them. But I think it's a little weird to assert "AI is not magic and AI systems do not think. It’s just maths." Equivalent statements could be made about how human brains are not magic, just biology - yet I think we still think.

sounds

A college level approach could look at the line between Math/Science/Physics and Philosophy. One thing from the article that stood out to me was that the introduction to their approach started with a problem about classifying a traffic light. Is it red or green?

But the accompanying XY plot showed samples that overlapped or at least were ambiguous. I immediately lost a lot of my interest in their approach, because traffic lights by design are very clearly red, or green. There aren't mauve or taupe lights that the local populace laughs at and says, "yes, that's mostly red."

I like the idea of studying math by using ML examples. I'm guessing this is a first step and future education will have better examples to learn from.
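For concreteness, the kind of classifier the article's traffic-light exercise is gesturing at can be sketched in a few lines. This is my own minimal illustration, not the article's code, and the sample points are invented and (unlike the article's plot) cleanly separable:

```python
# A single perceptron-style unit classifying points on an XY plane as
# "red" (1) or "green" (0). Data and feature meanings are made up.

def predict(w, b, x, y):
    """Linear decision rule: 1 if w·(x, y) + b > 0, else 0."""
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron update: nudge weights on each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y, label in samples:
            error = label - predict(w, b, x, y)
            w[0] += lr * error * x
            w[1] += lr * error * y
            b += lr * error
    return w, b

# Invented, clearly separable data: "red" samples cluster high on y.
samples = [(0.2, 0.9, 1), (0.3, 0.8, 1), (0.8, 0.1, 0), (0.7, 0.2, 0)]
w, b = train(samples)
print([predict(w, b, x, y) for x, y, _ in samples])  # [1, 1, 0, 0]
```

With separable data like this the perceptron converges in a couple of epochs, which is exactly why overlapping samples in the article's plot felt off for a traffic-light example.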

qsort

I agree saying "they don't think" and leaving it at that isn't particularly useful or insightful, it's like saying "submarines don't swim" and refusing to elaborate further. It can be useful if you extend it to "they don't think like you do". Concepts like finite context windows, or the fact that the model is "frozen" and stateless, or the idea that you can transfer conversations between models are trivial if you know a bit about how LLMs work, but extremely baffling otherwise.
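The statelessness point can be sketched in a few lines. The "models" below are invented stand-in stubs, not real APIs; the point is only that a frozen model is a pure function of its input, so a "conversation" is just a transcript you replay every turn:

```python
# Two stand-in "frozen" models: pure functions of the transcript they're given.
def model_a(transcript):
    return f"A saw {len(transcript)} messages"

def model_b(transcript):
    return f"B saw {len(transcript)} messages"

transcript = []
for user_msg in ["hello", "what did I just say?"]:
    transcript.append(("user", user_msg))
    reply = model_a(transcript)          # full history re-sent each turn
    transcript.append(("assistant", reply))

# "Transferring" the conversation is trivial: hand the same list to another model.
print(model_b(transcript))  # B saw 4 messages
```

Nothing about the conversation lives inside the model, which is why handing the transcript to a different model "just works" and why things silently fall off when the transcript outgrows the context window.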

gowld

> Concepts like

> finite context windows

like a human has

> or the fact that the model is "frozen" and stateless,

much like a human adult. Models get updated at a slower frequency than humans. AI systems have access to fetch new information and store it for context.

> or the idea that you can transfer conversations between models are trivial

because computers are better-organized than humanity.

isoprophlex

> much like a human adult.

I do hope you're able to remember what you had for lunch without incessantly repeating it to keep it in your context window

cwmoore

The human mind is not just biology in the same way that LLMs are just math.

ux266478

It's just provincial nonsense; there's no sound reasoning to it. Reductionism taken and used as a form of refutation is a pretty common cargo-culting behavior, I've found.

Overwhelmingly, I just don't think the majority of human beings have the mental toolset to work with ambiguous philosophical contexts. They'll still try, though, and what you get out of that is a 4th-order Baudrillardian simulation of reason.

tracerbulletx

Yeah. This whole AI situation has really exposed how bad most people are at considering the ontological and semantic content of the words they use.

omnicognate

Indeed, people confidently assert as established fact things like "brains are bound by the laws of physics" and therefore "there can't be anything special" about them, so "consciousness is an illusion" and "the mind is a computer", all with absolute conviction but with very little understanding of what physics and maths really do and do not say about the universe. It's a quasi-religious faith in a thing not fully comprehended. I hope in the long run some humility in the face of reality will eventually be (re)learned.

whoknowsidont

Lots of assumptions about humanity and how unique we are constantly get paraded in this conversation. Ironically, the people who tout those perspectives are the least likely to understand why we're really not all that "special" from a very factual and academic perspective.

You'd think it would unlock certain concepts for this class of people, but ironically, they seem unable to digest the information and update their context.

lisbbb

A large number of adults I encounter are functionally illiterate, including business people in very high up positions. They are also almost 100% MATHEMATICALLY illiterate, not only unable to solve basic algebra and geometry problems, but completely unable to reason about statistical and probabilistic situations encountered in everyday life. This is why gambling is so popular and why people are constantly fooled by politicians. It's bad enough to be without words in the modern world, but being without numbers makes you vulnerable to all manner of manipulations.

jvanderbot

Thinking is undefined so all statements about it are unverifiable.

terminalshort

Statements like "it is bound by the laws of physics" are not "verifiable" by your definition, and yet we safely assume it is true of everything. Everything except the human brain, that is, for which wild speculation that it may be supernatural is seemingly considered rational discussion so long as it satisfies people's needs to believe that they are somehow special in the universe.

jvanderbot

True. You need to define "it" before you can verify physics bounds it.

Unicorns are not bound by the laws of physics - because they do not exist.

gowld

> it satisfies people's needs to believe that they are somehow special in the universe.

Is it only humans that have this need? That makes the need special, so humans are special in the universe.

ben_w

I would say a different problem:

There's many definitions of "thinking".

AI and brains can do some, AI and brains definitely provably cannot do others, some others are untestable at present, and nobody really knows enough about what human brains do to be able to tell if or when some existing or future AI can do whatever is needed for the stuff we find special about ourselves.

A lot of people use different definitions, and respond to anyone pointing this out by denying the issue and claiming their own definition is the only sensible one and "obviously" everyone else (who isn't a weird pedant) uses it.

jvanderbot

This is not a meta-question.

The definition of "thinking" in any of the parent comments or TFA is actually not defined. Like literally no statements are made about what is being tested.

So, if we had that we could actually discuss it. Otherwise it's just opinions about what a person believes thinking is, combined with what LLMs are doing + what the person believes they themselves do + what they believe others do. It's entirely subjective with very low SNR b/c of those confounding factors.

BobaFloutist

What's a definition of thinking that brains definitely provably can't do?

d-lisp

Do you think that thinking is undefinable? If thinking is definable, then all statements about it aren't unverifiable.

ablob

Caveat: if thinking is definable, then not all statements about it are unverifiable.

nh23423fefe

Is this some self refuting sentence?

d-lisp

I think they meant "Cannot evaluate : (is <undefined> like x ?), argument missing"

edit : Thinking is undefined, statements about undefined cannot be verified.

ux266478

is a meta-level grammar the same as an object-level grammar?

random9749832

Is reasoning undefined? That's what's usually meant by "thinking".

nutjob2

Formal reasoning is defined, informal reasoning very much isn't.

CamperBob2

The difference between thinking and reasoning is that I can "think" that Elvis is still alive, Jewish space lasers are responsible for California wildfires, and Trump was re-elected president in 2020, but I cannot "reason" myself into those positions.

It ties into another aspect of these perennial threads, where it is somehow OK for humans to engage in deluded or hallucinatory thought, but when an AI model does it, it proves they don't "think."

terminalshort

I have yet to hear any plausible definition of "thought" that convincingly places LLMs and brains on opposite sides of it without being obviously contrived for that purpose.

snickerbockers

>Equivalent statements could be made about how human brains are not magic, just biology - yet I think we still think.

They're not equivalent at all because the AI is by no means biological. "It's just maths" could maybe be applied to humans but this is backed entirely by supposition and would ultimately just be an assumption of its own conclusion - that human brains work on the same underlying principles as AI because it is assumed that they're based on the same underlying principles as AI.

hnfong

Well, a better retort would be "Human brains are not magic, just physics. Protons, neutrons and electrons don't think".

But I think most people get what GP means.

criddell

Until you can define what thinking is, you can't assert that particles don't think (panpsychism).

observationist

Unless you're supposing something mystical or supernatural about how brains work, then yes, it is "just" math, there is nothing else it could be. All of the evidence we have shows it's an electrochemical network of neurons processing information. There's no evidence that suggests anything different, or even the need for anything different. There's no missing piece or deep mystery to it.

It's on those who want alternative explanations to demonstrate that even the slightest need for them exists - there is no scientific evidence suggesting that the operation of brains as computers, as information processors, as substrate-independent equivalents to Turing machines, is insufficient to explain any of the cognitive phenomena known across the entire domain of human knowledge.

We are brains in bone vats, connected to a wonderful and sophisticated sensorimotor platform, and our brains create the reality we experience by processing sensor data and constructing a simulation which we perceive as subjective experience.

The explanation we have is sufficient to the phenomenon. There's no need or benefit for searching for unnecessarily complicated alternative interpretations.

If you aren't satisfied with the explanation, it doesn't really matter - to quote one of Neil deGrasse Tyson's best turns of phrase: "the universe is under no obligation to make sense to you"

If you can find evidence, any evidence whatsoever, and that evidence withstands scientific scrutiny, and it demands more than the explanation we currently have, then by all means, chase it down and find out more about how cognition works and expand our understanding of the universe. It simply doesn't look like we need anything more, in principle, to fully explain the nature of biological intelligence, and consciousness, and how brains work.

Mind as interdimensional radios, mystical souls and spirits, quantum tubules, none of that stuff has any basis in a ruthlessly rational and scientific review of the science of cognition.

That doesn't preclude souls and supernatural appearing phenomena or all manner of "other" things happening. There's simply no need to tie it in with cognition - neurotransmitters, biological networks, electrical activity, that's all you need.

johnsmith1840

AI operates a lot like trees do, as they are both using maths under the hood.

This is the point: we don't know the delta between brains and AI, so any assumption is equivalent to my statement.

jvanderbot

Math is a superset of both processes (can model/implement both), but that doesn't imply that they are equivalent.

pegasus

But parent didn't try to apply "it's just maths" to humans. He said one could just as easily say, as some do: "Humans are just biology, hence they're not magic". Our understanding of mathematics, including the maths of transformer models, is limited, just as our understanding of biology is. Some behaviors of these models have taken researchers by surprise, and future surprises are not at all excluded. We don't know exactly how far they will evolve.

As for applying the word thinking to AI systems, it's already in common usage and this won't change. We don't have any other candidate words, and this one is the closest existing word for referencing a computational process which, one must admit, is in many ways (but definitely not in all ways) analogous to human thought.

ikrenji

Human brains might not be explained by the same type of math AI is explained with, but it will be some kind of math...

Mehvix

There's no reason to believe this to be the case. Gödel says otherwise.

AlecSchueler

> that human brains work on the same underlying principles as AI

That wasn't the assumption though, it was only that human brains work by some "non-magical" electro-chemical process which could be described as a mechanism, whether that mechanism followed the same principles of AI or not.

mcswell

Straw man. The person who you're responding to talked about "equivalent statements" (emphasis added), whereas you appear to be talking about equivalent objects (AIs vs. brains), and pointing out the obvious flaw in this argument, that AIs aren't biology. The obvious flaw in the wrong argument, that is.

TallGuyShort

It's unfortunate that there's so little (none in the article, just 1 comment here as of this writing) mention of the Turing Test. The whole premise of the paper that introduced that was that "do machines think" is such a hard question to define that you have to frame the question differently. And it's ironic that we seem to talk about the Turing Test less than ever now that systems almost everyone can access can arguably pass it now.

frozenlettuce

You can replicate all calculations done by LLMs with pen and paper. It would take ages to calculate anything, but it's possible. I don't think that pen and paper will ever "think", regardless of how complex the calculations involved are.
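For concreteness, here's the sort of arithmetic you'd be doing by hand: a single scaled dot-product attention step, the core operation of a transformer. The vectors below are made up for illustration; a real model repeats this with billions of learned numbers.

```python
import math

def attention(query, keys, values):
    """One scaled dot-product attention step: score, softmax, blend."""
    d = len(query)
    # Dot product of the query with each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax turns scores into weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the weighted blend of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attention([1.0, 0.0],
                [[1.0, 0.0], [0.0, 1.0]],
                [[2.0, 0.0], [0.0, 2.0]])
print(out)  # weighted blend, leaning toward the first value vector
```

Every step is multiplication, addition, and a table lookup for exp - nothing a patient person with pen and paper couldn't do, just unimaginably slowly.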

BobbyJo

If you put a droplet of water in a warm bowl every 12 hours, the bowl will remain empty as the water will evaporate. That does not mean that if you put a trillion droplets in every twelve hours it will still remain empty.

gus_massa

The official name is https://en.wikipedia.org/wiki/Chinese_room

The opinions are exactly the same as those about LLMs.

sigmoid10

And the counter argument is also exactly the same. Imagine you take one neuron from a brain and replace it with an artificial piece of electronics (e.g. some transistors) that only generates specific outputs based on inputs, exactly like the neuron does. Now replace another neuron. And another. Eventually, you will have the entire brain replaced with a huge set of fundamentally super simple transistors. I.e. a computer. If you believe that consciousness or the ability to think disappears somewhere during this process, then you are essentially believing in some religious metaphysics or soul-like component in our brains that can not be measured. But if it can not be measured, it fundamentally can not affect you in any way. So it doesn't matter for the experiment in the end, because the outcome would be exactly the same. The only reason you might think that you are conscious and the computer is not is because you believe so. But to an outside observer, belief is all it is. Basically religion.

kipchak

It seems like the brain "just" being a giant number of neurons is an assumption. As I understand it's still an area of active research, for example the role of glial cells. The complete function may or may not be pen and paper-able.

bigfishrunning

> component in our brains that can not be measured.

"Can not be measured", probably not. "We don't know how to measure", almost certainly.

I am capable of belief, and I've seen no evidence that the computer is. It's also possible that I'm the only person that is conscious. It's even possible that you are!

danaris

But you are now arguing against a strawman, namely, "it is not possible to construct a computer that thinks".

The argument that was actually made was "LLMs do not think".

mcswell

I don't see the relevance of that argument (which other responders to your post have pointed out as Searle's Chinese Room argument). The pen and paper are of course not doing any thinking, but then the pen isn't doing any writing on its own, either. It's the system of pen + paper + human that's doing the thinking.

frozenlettuce

The idea of my argument is that I notice that people project some "ethereal" properties over computations that happen in the... computer. Probably because electricity is involved, making things show up as "magic" from our point of view, making it easier to project consciousness or thinking onto the device. The cloud makes that even more abstract. But if you are aware that the transistors are just a medium that replicates what we already did for ages with knots, fingers, and paint, it gets easier to see them as plain objects. Even the resulting artifacts that the machine produces are only something meaningful from our point of view, because you need prior knowledge to read the output signals. So yeah, those devices end up being an extension of ourselves.

Wowfunhappy

https://xkcd.com/505/

You can replicate the entire universe with pen and paper (or a bunch of rocks). It would take an unimaginably long time, and we haven't discovered all the calculations you'd need to do yet, but presumably they exist and this could be done.

Does that actually make a universe? I don't know!

The comic is meant to be a joke, I think, but I find myself thinking about it all the time!!!

umanwizard

You can simulate a human brain on pen and paper too.

palmotea

> You can simulate a human brain on pen and paper too.

That's an assumption, though. A plausible assumption, but still an assumption.

We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.

terminalshort

I can make a claim much stronger than "you could probably." The counterclaim here is that the brain may not obey physical laws that can be described by mathematics. This is a "5G causes covid" level claim. The overwhelming burden of proof is on you.

hnfong

This is basically the Church-Turing thesis and one of the motivations for using tape (paper) and an arbitrary alphabet in the Turing machine model.

It's been kinda discussed to oblivion in the last century; it's interesting that people don't seem to realize the "existing literature" exists and just repeat the same arguments (not saying anyone is wrong).

phantasmish

The simulation isn't an operating brain. It's a description of one. What it "means" is imposed by us, what it actually is, is a shitload of graphite marks on paper or relays flipping around or rocks on sand or (pick your medium).

An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.

An LLM is always a description. An LLM operating on a computer is identical to a description of it operating on paper (if much faster).

gnull

What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?

That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.

cibyr

It seems to me that the distinction becomes irrelevant as soon as you connect inputs and outputs to the real world. You wouldn't say that a 737 autopilot can never, ever fly a real jet and yet it behaves exactly the same whether it's up in the sky or hooked up to recorded/simulated signals on a test bench.

amelius

> An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.

It might if the simulation includes humans observing the candle.

pton_xd

So the brain is a mathematical artifact that operates independently from time? It just happens to be implemented using physics? Somehow I doubt it.

thrance

The brain follows the laws of physics. The laws of physics can be closely approximated by mathematical models. Thus, the brain can be closely approximated by mathematical models.

an0malous

Parent said replicate, as in deterministically

andrepd

It's an open problem whether you can or not.

space_fountain

It’s not that open. We can simulate smaller systems of neurons just fine, and we can simulate chemistry. There might be something beyond that in our brains for some reason, but it seems doubtful right now.

thrance

You're arguing against Functionalism [0], of which I'd encourage you to at least read the Wikipedia page. Why would doing the brain's computations on pen and paper rather than on wetware lead to different outcomes? And how?

Connect your pen and paper operator to a brainless human body, and you got something indistinguishable from a regular alive human.

[0] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...

croemer

Don't think this is very good - more of a report of their activities. Underdelivers on the headline.

nevertoolate

- how to prove that humans can argue endlessly like an llm?

- ragebait them by saying AIs don’t think

- …

null

[deleted]

hamdingers

If it comes to the correct answer I don't particularly care how it got there.

emp17344

In most cases, you don’t know if it came to the correct answer.

null

[deleted]

hamdingers

In every reasonable use case for LLMs verifying the answer is trivial. Does the code do what I wanted it to? Does it link to a source that corroborates the response?

If you're asking for things you can't easily verify you're barking up the wrong tree.

ares623

How do you know if it came to the right answer?

mcswell

It's not always the case, but often verifying an answer is far easier than coming up with the answer in the first place. That's precisely the principle behind the RSA algorithm for cryptography.
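A toy illustration of that asymmetry, with numbers far too small for real cryptography:

```python
def verify(n, p, q):
    """Checking a proposed factorization is a single multiplication."""
    return p * q == n

def factor(n):
    """Finding the factors requires a search (brute force here)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None

n = 99400891                      # = 9967 * 9973, a product of two primes
print(factor(n))                  # the slow direction: (9967, 9973)
print(verify(n, 9967, 9973))      # the fast direction: True
```

Real RSA moduli are hundreds of digits long, so the brute-force direction becomes infeasible while the single multiplication stays instant - same asymmetry, enormously amplified.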

downboots

Sure, it's easy to check ((sqrt(x-3)+1)/(x/8)) is less than 4. Now do it without calculus.

Very much like this effect https://www.reddit.com/r/opticalillusions/comments/1cedtcp/s... . Shouldn't hide complexity under a truth value.

bondarchuk

I wish I would've learned about ANNs in elementary school. It looks like a worthwhile and cool lesson package, if only they'd do away with the idiotic dogma...

terminalshort

> the team wants to tackle a major and common misconception: that students think that ANN systems learn, recognise, see, and understand, when really it’s all just maths

This is completely idiotic. Do these people actually believe they've shown it can't be actual thought just because it is described by math?

nomel

Wait until they hear about the physics/maths related to neurons firing!

josefritzishere

I think we all intuitively knew this but it's pretty cool.

brador

Do we think?

By every scientific measure we have the answer is no. It’s just electrical current taking the path of least resistance through connected neurons mixed with cell death.

The fact a human brain peaks at IQ around 200 is fascinating. Can the scale even go higher? It would seem no since nothing has achieved a higher score it must not exist.

bigfishrunning

The IQ scale is constantly adjusted to keep the peak of the curve at 100 and the standard deviation around 15. To say it peaks around 200 is a pretty gross misunderstanding of what IQ means.
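The normalization in question is just a linear map from a z-score onto a curve with mean 100 and SD 15, so "IQ 200" means a z-score of about 6.7, not a ceiling built into the scale. A minimal sketch:

```python
def iq_from_z(z, mean=100, sd=15):
    """Map a standard-normal z-score onto the conventional IQ scale."""
    return mean + sd * z

print(iq_from_z(0))   # 100: the population average, by construction
print(iq_from_z(2))   # 130: two standard deviations above the mean
```

Scores far above 200 aren't "impossible"; they're just so many standard deviations out that no test has enough hard items (or enough test-takers) to measure them meaningfully.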

ares623

3 years ago this is the kind of post that would end up in /r/im14andthisisdeep