
Why Claude's Comment Paper Is a Poor Rebuttal

roenxi

Has anyone come up with a definition of AGI where humans are near-universally capable of GI? These articles seem to be slowly pushing the boundaries past the point where slower humans are disbarred from intelligence.

Many years ago I bumped into Towers of Hanoi in a computer game and failed to solve it algorithmically, so I suppose I'm lucky I only work a knowledge job rather than an intelligence-based one.

parodysbird

The original Turing Test was one of the more interesting standards... An expert judge talks with two subjects in order to determine which is the human: one is a human who knows the point of the test, and one is a machine trying to fool the judge into being no better than a coin flip at correctly choosing who was human. Allow for many judges and experience in each, etc.

The brilliance of the test, which was strangely lost on Turing, is that the test is unlikely to be passed with any enduring consistency. Intelligence is actually more of a social description. Solving puzzles, playing tricky games, etc. is only intelligent if we agree that the actor involved faces normal human constraints or more. We don't actually think machines fulfill that (they obviously do not; that's why we build them: to overcome our own constraints), which is why calculating logarithms or playing chess ultimately does not end up counting as actual intelligence when a machine does it.

cardanome

People confuse performance with internal representation.

A simple calculator is vastly better at adding numbers than any human. A chess engine will rival any human grandmaster. No one would say that this got us closer to AGI.

We could absolutely see LLMs that produce poetry that humans cannot tell apart from human-made poetry, or even prefer to it. We could have LLMs that are perfectly able to convince humans that they have consciousness and emotions.

Would we have achieved AGI then? Does that mean those LLMs have gotten consciousness and emotions? No.

The question of consciousness is about what is going on on the inside, how the reasoning is happening, not the output. In fact, the first AGI might perform significantly worse at most tasks than current LLMs.

LLMs are extremely impressive, but they are not thinking. They do not have consciousness. It might be technically impossible for them to develop anything like that, or at least it would require significantly bigger models.

> where slower humans are disbarred from intelligence

Humans have value for being humans. Whether they are slow or fast at thinking. Whether they are neurodivergent or neurotypical. We all have feelings, we are all capable of suffering, we are all alive.

See also the problems with AI Welfare research: https://substack.com/home/post/p-165615548

saberience

The problem with your argument is the idea that there is this special thing called "consciousness" that humans have and AI "doesn't".

Philosophers, scientists, thinkers have been trying to define "consciousness" for 100+ years at this point and no one has managed to either a) define it, or b) find ways to test for it.

Saying we have "consciousness" and AI "doesn't" is like saying we have a soul, a ghost in the machine, and AI doesn't. Do we really have a ghost in the machine? Or are we really just a big deterministic machine that we don't fully understand yet, rather like AI?

So before you assert that we are "conscious", you should first define what you mean by that term and how we test for it conclusively.

staticman2

Before you assert nobody has defined consciousness you should maybe consult the dictionary?

542354234235

> The question of consciousness is about what is going on on the inside, how the reasoning is happening, not the output.

But we don’t really understand how the reasoning is happening in humans. Tests show that our subconscious, completely outside our conscious understanding, makes decisions before we perceive that we consciously decide something [1]. Our consciousness is the output, but we don’t really know what is running in the subconscious. If something looked at it from an outside perspective, would they say that it was just unconscious programming, giving the appearance of conscious reasoning?

I’m not saying LLMs are conscious. But since we don’t really know what gives us the feeling of consciousness, and we didn’t build and don’t understand the underlying “programming”, it is hard to actually judge a non-organic mind that claims the feeling of consciousness. If you found out today that you were actually a computer program, would you say you weren’t conscious? Would you be able to convince “real” people that you were conscious?

[1] https://qz.com/1569158/neuroscientists-read-unconscious-brai...

cardanome

My point was that we can't prove that LLMs have consciousness. Yes, the reverse is also true. It is possible that we wouldn't really be able to tell if an AI gained consciousness, as that might look very different from what we expect.

An important standard for any scientific theory or hypothesis is to be falsifiable. Good old Russell's teapot: we can't disprove that a teapot, too small to be seen by telescopes, orbits the Sun somewhere in space between the Earth and Mars. So should we assume it is true? No, the burden of proof lies on those who make the claim.

So yes, I cannot 100 percent disprove that certain LLMs show signs of consciousness, but demanding that is reversing the burden of proof. Those who claim that LLMs are capable of suffering, that they show signs of consciousness, need to deliver. If they can't, it is reasonable to assume they are full of shit.

People here accuse me of being scholastic and too philosophical, but the reverse is true. Yes, we barely know how human brains work and how consciousness evolved, but whoever doesn't see the qualitative difference between a human being and an LLM really needs to touch grass.

_aavaa_

Consciousness is irrelevant to discussions of intelligence (much less AGI) unless you pick a circular definition for both.

This is “how many angels dance on the head of a pin” territory.

cwillu

I would absolutely say both the calculator and strong chess engines brought us closer.

virgilp

> Does that mean those LLMs have gotten consciousness and emotions? No.

Is this a belief statement, or a provable one?

Lerc

I think it is clearly true that it doesn't show that they have consciousness and emotions.

The problem is that people assume that failing to show that they do means that they don't.

It's very hard to show that something doesn't have consciousness. Try and conclusively prove that a rock does not have consciousness.

Asraelite

Years ago when online discussion around this topic was mostly done by small communities talking about the singularity and such, I felt like there was a pretty clear definition.

Humans are capable of consistently making scientific progress. That means being taught knowledge about the world by their ancestors, performing new experiments, and building upon that knowledge for future generations. Critically, there doesn't seem to be an end to this for the foreseeable future for any field of research. Nobody is predicting that all scientific progress will halt in a few decades because after a certain point it becomes too hard for humans to understand anything, although that probably would eventually become true.

So an AI with at least the same capabilities as a human would be able to do any type of scientific research, including research into AI itself. This is the "general" part: no matter where the research takes it, it must always be able to make progress, even if slowly. Once such an AI exists, the singularity begins.

I think the fact that AI is now a real thing with a tangible economic impact has drawn the attention of a lot of people who wouldn't have otherwise cared about the long-term implications for humanity of exponential intelligence growth. The question that's immediately important now is "will this replace my job?" and so the definitions of AGI that people choose to use are shifting more toward definitions that address those questions.

suddenlybananas

Someone who can reliably solve towers of Hanoi with n=4 and who has been told the algorithm should be able to do it with n=6,7,8. Don't forget, these models aren't learning how to do it from scratch the way a child might.
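(For reference, the textbook recursion being alluded to is tiny; going from n=4 to n=8 changes only the argument, while the number of moves grows as 2^n - 1. A minimal sketch in Python, purely illustrative and not the exact output format the paper asked the models to produce:)

    # Standard Towers of Hanoi recursion: scaling n changes nothing about the
    # procedure, only the length of the move list (2^n - 1 moves).
    def hanoi(n, src="A", aux="B", dst="C"):
        """Yield the moves that transfer n disks from src to dst."""
        if n == 0:
            return
        yield from hanoi(n - 1, src, dst, aux)  # park n-1 disks on the spare peg
        yield (src, dst)                        # move the largest disk
        yield from hanoi(n - 1, aux, src, dst)  # stack the n-1 disks on top of it

    if __name__ == "__main__":
        for n in (4, 6, 8):
            print(f"n={n}: {len(list(hanoi(n)))} moves (expected {2**n - 1})")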

morsecodist

AGI is a marketing term. It has no consistent definition. It's not very useful when trying to reason about AI's capabilities.

badgersnake

Mass Effect?

littlestymaar

> These articles seem to be slowly pushing the boundaries past the point where slower humans are disbarred from intelligence.

It's not really pushing boundaries; a non-trivial number of humans has always been excluded from the definition of “human intelligence” (and with the ageing of the population, this number is only going up), and it makes sense, like how you don't consider blind individuals when you're comparing human sight to other animals'.

robertk

The Apple paper does not look at its own data — the model outputs become short past some threshold because the models reflectively realize they do not have the context to respond with the steps as requested, and suggest a Python program instead, just as a human would. One of the penalized environments is proven in the literature to be impossible to solve for n>6, something the authors seem unaware of. I consider this and more the definitive rebuttal of the sloppiness of the paper: https://www.alignmentforum.org/posts/5uw26uDdFbFQgKzih/bewar...

suddenlybananas

The n>6 result is about finding the shortest solution, not any solution. I don't get why people are so butthurt about this paper.

Herring

Apple's tune will completely change the second they get a leading LLM - Look at all the super important and useful things you can do with "Apple General Intelligence"!

jmsdnns

No, it won't. This comment essentially says science doesn't matter for anyone, only whether or not they're leading in marketing.

Lerc

I think it's closer to saying that those who are falling behind declare that the science doesn't matter.

jstanley

I think the comment says science doesn't matter for Apple, specifically.

throwaway287391

As someone who used to write academic ML papers, it's funny to me that people are treating this academic style paper written by a few Apple researchers as Apple's official company-wide stance, especially given the first author was an intern.

I suppose it's "fair" since it's published on the Apple website with the authors' Apple affiliations, but historically speaking, at least in ML where publication is relatively fast-paced and low-overhead, academic papers by small teams of individual researchers have in no way reflected the opinions of e.g. the executives of a large company. I would not be particularly surprised to see another team of Apple researchers publishing a paper in the coming weeks with the opposite take, for example.

jmsdnns

The author is an intern, but they're also almost done with their PhD. They're not just any intern.

throwaway287391

That's kind of expected for a research intern -- internships are most commonly done within 1-2 years before graduation. But in any case, the fact that the first author is an intern is just the cherry on top for me -- my comment would be the same modulo the "especially" remark if all the authors were full time research staff.

djoldman

> I would consider this a death blow paper to the current push for using LLMs and LRMs as the basis for AGI.

Anytime I see "Artificial General Intelligence," "AGI," "ASI," etc., I mentally replace it with "something no one has defined meaningfully."

Or the long version: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."

Or the short versions: "Skippetyboop," "plipnikop," and "zingybang."

chrsw

One vague definition I see tossed around a lot is "something that can replace almost any human knowledge/white-collar worker".

What does that mean in concrete terms? I'm not sure. Many of these models can already pass bar exams but how many can be lawyers? Probably none. What's missing?

thaumasiotes

> Probably none.

The qualification is unnecessary; we know the answer is "none". There's a steady stream of lawyers getting penalized for submitting LLM output to judges.

chrsw

You're right. I should have said "can ever". Both in terms of permitted to and in terms of have the capacity to. And I'm only referring to current machine learning architectures.

yahoozoo

I know it when I see it

pmarreck

The only definition by which porn, beauty, intelligence, aliveness, and creativity are all known

(seriously, forget this stuff: consider how you would even come up with an algorithm for how creative or beautiful something is)

A line that used to work (back when I was in a part of my life where lines were a thing) was "I can tell you're smart, which means that you can tell I'm smart, because like sees like." Usually got a smile out of 'em.

pmarreck

I like Sundar Pichai's: "Artificial Jagged Intelligence" (AJI)

coffeefirst

“The Messiah.” The believers know it’s coming and will transform the world in ways that don’t even make sense to outsiders. They do not change their mind when it doesn’t happen as foreseen.

pu_pe

The author's main point is that output token constraints should not be the root cause for poor performance in reasoning tests, as in many cases the LLMs did not even come close to exceeding their token budgets before giving up.

While that may be true, do we understand how LLMs behave under token budget constraints? This might impact much simpler tasks as well. If we give them a task to list the names of all cities in the world by population, do they spit out a Python script if we give them a 4k output token budget but a full list if we give them 100k?

gjm11

This rebuttal-of-a-rebuttal looks to me as if it gets one (fairly important) thing right but pretty much everything else wrong. (Not all in the same direction; the rebuttal^2 fails to point out what seems to me to be the single biggest deficiency in the rebuttal.)

The thing it gets right: the "Illusion of illusion" rebuttal claims that in the original "Illusion of Thinking" paper's version of the Towers of Hanoi problem, "The authors’ evaluation format requires outputting the full sequence of moves at each step, leading to quadratic token growth"; this doesn't seem to be true at all, and this "Beyond Token Limits" rebuttal^2 is correct to point it out.

(This implies, in particular, that there's something fishy in the IoI rebuttal's little table showing where 5(2^n-1)^2 exceeds the token budget, which they claim explains the alleged "collapse" at roughly those points.)
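(For concreteness, the arithmetic behind that table is easy to reproduce. A short sketch, where the 64,000-token budget is an illustrative assumption rather than a figure taken from either paper:)

    # Compare the rebuttal's claimed quadratic token cost, 5*(2**n - 1)**2,
    # with a simple linear cost of ~5 tokens per move for listing each of the
    # 2**n - 1 moves once. The budget below is an assumption for illustration.
    BUDGET = 64_000

    def quadratic_cost(n):  # growth claimed by the "Illusion of the Illusion" rebuttal
        return 5 * (2**n - 1) ** 2

    def linear_cost(n):     # cost of enumerating the solution a single time
        return 5 * (2**n - 1)

    for n in range(1, 16):
        q, lin = quadratic_cost(n), linear_cost(n)
        verdict = "exceeds" if q > BUDGET else "fits in"
        print(f"n={n:2d}: quadratic={q:>12,} ({verdict} budget), linear={lin:>8,}")

With this assumed budget, the quadratic formula first exceeds it around n=7, while listing each move once stays under it until roughly n=14; that gap is why it matters so much whether the evaluation format really forces quadratic growth.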

Things it gets wrong:

"The rebuttal conflates solution length with computational difficulty". This is just flatly false. The IoI rebuttal explicitly makes pretty much the same points as the BTL rebuttal^2 does here.

"The rebuttal paper’s own data contradicts its thesis. Its own data shows that models can generate long sequences when they choose to, but in the findings of the original Apple paper, it finds that models systematically choose NOT to generate longer reasoning traces on harder problems, effectively just giving up." I don't see anything in the rebuttal that "shows that models can generate long sequences when they choose to". What the rebuttal finds is that (specifically for the ToH problem) if you allow the models to answer by describing the procedure rather than enumerating all its steps, they can do it. The original paper didn't allow them to do this. There's no contradiction here.

"It instead completely ignores this finding [that once solutions reach a certain level of difficulty the models give up trying to give complete answers] and offers no explanation as to why models would systematically reduce computational effort when faced with harder problems."

The rebuttal doesn't completely ignore this finding. That little table of alleged ToH token counts is precisely targeted at this finding. (It seems like it's wrong, which is important, but the problem here isn't that the paper ignores this issue, it's that it has a mistake that invalidates how it addresses the issue.)

Things that a good rebuttal^2 should point out but this rebuttal completely ignores:

The most glaring one, to me, is that the rebuttal focuses almost entirely on the Tower of Hanoi, where there's a plausible "the only problem is that there aren't enough tokens" issue, and largely ignores the other puzzles for which the original paper also claims to find "collapse". Maybe token-limit issues are also a sufficient explanation for the problems with the other puzzles (e.g., if something is effectively only solvable by exhaustive search, then maybe there aren't enough tokens for the model to do that search in), but the rebuttal never actually makes that argument (e.g., by estimating how many tokens are needed to do the relevant exhaustive search).

The rebuttal does point out what, if correct, is a serious problem with the original paper's treatment of the "River Crossing" problem (apparently the problem they asked the AI to solve is literally unsolvable for many of the cases they put to it), but the unsolvability starts at N=6 and the original paper finds that the models were unable to solve the problem starting at N=3.

(Anecdata: I had a go at solving the River Crossing problem for N=3 myself. I made a stupid mistake that stopped me finding a solution and didn't have sufficient patience to track it down. My guess is that if you could spawn many independent copies of me and ask them all to solve it, probably about 2/3 would solve it and 1/3 would screw up in something like the way actual-me did. If I actually needed to solve it for larger N I'd write some code, which I suspect the AI models could do about as well as I could. For what it's worth, I think the amount of text-editor scribbling I did while not solving the puzzle was quite a bit less than the thinking-token limits these models had.)

The rebuttal^2 does complain about the "narrow focus" of the rebuttal, but it means something else by that.

amelius

Is there anything falsifiable in Apple's paper?

low_tech_love

A fundamental problem that we’re still far away from solving is not necessarily that LLMs/LRMs cannot reason the same way that we do (which I guess should be clear by now), but that they might not have to. They generate slop so fast that, if one can benefit a little bit from each output, i.e. if you can find a little bit of use hidden beneath the mountain of meaningless text they’ll create, then this might still be more valuable than preemptively taking the time to create something more meaningful to begin with. I can’t say for sure what the reward system behind LLM use is in general, but given how much money people are willing to spend on models even in their current deeply flawed state, I’d say it’s clear that the time savings are outweighing the mistakes and shallowness.

Take the comment paper, for example. Since Claude Opus is the first author, I’m assuming that the human author took a backseat and let the AI build the reasoning and most of the writing. Unsurprisingly, it is full of errors and contradictions, to the point where it looks like the human author didn’t bother too much to check what was being published. One might say that the human author, in trying to build some reputation by showing that their model could answer a scientific criticism, actually did the opposite: they provided more evidence that the model cannot reason deeply, and maybe hurt their reputation even more.

But the real question is, did they really? How much backlash will they possibly get for submitting this to arXiv without checking? Would that backlash keep them from submitting 10 more papers next week with Claude as the first author? If one weighs the amount of slop you can put out (with a slight benefit) against the bad reputation one gets from it, I cannot say that “human thinking” is actually worth it anymore.

practice9

The human is a bad co-author here really.

I've deployed lots of high-performance, clean, well-documented code generated by Claude or o3. I reviewed it against the requirements, added tests, and so on. Even with that in mind, it allowed me to work 3x faster.

But it required conscious effort on my part to point out issues and inefficiencies on the LLM's part.

It is a collaborative type of work where LLMs shine (even in so-called agentic flows).

iLoveOncall

Mediocre people produce mediocre work. Using AI might make those mediocre people produce even worse work, but I don't think it'll affect competent people who have standards regardless of the available tooling.

If anything the outcome will be good: mediocre people will produce even worse work and will weed themselves out.

Case in point: the author of the rebuttal made basic and obvious mistakes that make his work even easier to dismiss, and no further paper of his will be taken seriously.

Arainach

> mediocre people will produce even worse work and will weed themselves out.

[[Citation needed]]

I don't believe anyone who has experienced working with other people - in the workplace, in school, whatever - believes that people get weeded out for mediocre output.

Muromec

Weeded out to where anyway? Doing some silly thing, like being a cashier or taxi driver?

delusional

You can also be mediocre in a lot of different ways. Some people are mediocre thinkers, but fantastic hype men. Some people are fantastic at thinking, but suck at playing the political games you have to play in an office. Personally I find that I need some of all of those aspects to have success in a project, the amount varies by the work and external collaborators.

Intelligence isn't just one measure you can have less or more of. I thought we figured this out 20 years ago.

drsim

I think the pull will be hard to resist even for competent people.

Like the obesity crisis driven by sugar highs, the overall population will be affected, and overall quality will suffer, at least for a while.

bananapub

> Mediocre people produce mediocre work. Using AI might make those mediocre people produce even worse work, but I don't think it'll affect competent people who have standards regardless of the available tooling.

this is clearly not the case, given:

- mass layoffs in the tech industry to force more use of such things

- extremely strong pressure from management to use it, rarely framed as "please use this tooling as you see fit"

- extremely low quality bars in all sorts of things, e.g. getting your dumb "we wrote a 200-word prompt then stuck that and some web-scraped data into an LLM run by Google/OpenAI/Anthropic" site to the top of Hacker News, or most of VC funding in the tech world

- extremely large swathes of (at least) the western power structures not giving a shit about doing anything well, e.g. the entire US Federal government leadership now, the UK government's endless idiocy about "AI Policy development", lawyers getting caught in court having not even read the documents they put their name on, etc.

- an actual strong desire from many people to outsource their toxic plans to "AI", e.g. the US's machine learning probation or sentencing stuff

I don't think any of us are ready for the tsunami of garbage that's going to be thrown into every facet of our lives, from government policy to sending people to jail to murdering people with robots to spamming open-source projects with useless code and bug reports, etc. etc. etc.