
AGI is Mathematically Impossible 2: When Entropy Returns

ICBTheory

This paper presents a theoretical proof that AGI systems will structurally collapse under certain semantic conditions — not due to lack of compute, but because of how entropy behaves in heavy-tailed decision spaces.

The idea is called IOpenER: Information Opens, Entropy Rises. It builds on Shannon’s information theory to show that in specific problem classes (those with α ≤ 1), adding information doesn’t reduce uncertainty — it increases it. The system can’t converge, because meaning itself keeps multiplying.
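A rough numerical sketch of that regime (my reading, assuming α is the exponent of a discrete power law p(k) ∝ k^-α over candidate interpretations; the paper's formal setup may differ): for α ≤ 1 the entropy of the truncated distribution keeps growing as the outcome space expands, while for α > 1 it levels off.

```python
import numpy as np

def truncated_entropy(alpha: float, n: int) -> float:
    """Shannon entropy (nats) of p(k) ~ k**(-alpha) truncated to k = 1..n."""
    k = np.arange(1, n + 1, dtype=float)
    w = k ** (-alpha)
    p = w / w.sum()
    return float(-(p * np.log(p)).sum())

# As the space of possible interpretations grows, entropy keeps rising
# for alpha <= 1 but saturates for alpha > 1.
for alpha in (0.8, 1.0, 1.5, 2.5):
    print(alpha, [round(truncated_entropy(alpha, n), 2)
                  for n in (10**2, 10**4, 10**6)])
```

On this toy reading, "adding information" that enlarges the interpretation space cannot drive the entropy down when α ≤ 1, which seems to be the intuition behind IOpenER.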

The core concept — entropy divergence in these spaces — was already present in my earlier paper, uploaded to PhilArchive on June 1. This version formalizes it. Apple’s study, The Illusion of Thinking, was published a few days later. It shows that frontier reasoning models like Claude 3.7 and DeepSeek-R1 break down exactly when problem complexity increases — despite adequate inference budget.

I didn’t write this paper in response to Apple’s work. But the alignment is striking. Their empirical findings seem to match what IOpenER predicts.

Curious what this community thinks: is this a meaningful convergence, or just an interesting coincidence?

Links:

This paper (entropy + IOpenER): https://philarchive.org/archive/SCHAIM-14

First paper (ICB + computability): https://philpapers.org/archive/SCHAII-17.pdf

Apple’s study: https://machinelearning.apple.com/research/illusion-of-think...

vessenes

Thanks for this - Looking forward to reading the full paper.

That said, the most obvious objection that comes to mind about the title is that … well, I feel that I’m generally intelligent, and therefore general intelligence of some sort is clearly not impossible.

Can you give a short précis as to how you are distinguishing humans and the “A” in artificial?

jemmyw

I would argue that you are not a general intelligence. Humans have quite a specific intelligence. It might be the broadest, most general intelligence among animal species, but it is not truly general. One sign of this is that we each need to spend a significant amount of time training ourselves for specific areas of capability. You can't then switch instantly to another area without further training, even though all the context materials are available to you.

Tadpole9181

This seems like a meaningless distinction in context. When people say AGI, they clearly mean "effectively human intelligence". Not an infallible, completely deterministic, omniscient god-machine.

ICBTheory

Sure I can (and thanks for writing)

Well, given the specific way you asked that question, I can confirm your self-assertion - and am quite certain that your level of artificiality converges to zero, which would make you a GI without the A...

- You stated that you "feel" generally intelligent (A's don't feel and don't have an "I" that can feel).
- Your nuanced, subtly ironic and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity.

A "précis", as you wished: Artificial, in the sense used here (apart from the usual "deliberately built/programmed system" etc.), means algorithmic, formal, symbol-bound.

Humans as "cognitive systems" have some similar traits, of course - but obviously, there seems to be more to us than that.

kevin42

>but obviously, there seems to be more than that.

I don't see how that's obvious. I'm not trying to be argumentative here, but it seems like these arguments always come down to qualia, or the insistence that humans have some sort of 'spark' that machines don't have, therefore: AGI is not possible since machines don't have it.

I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?

What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of?

rusk

Not the person asked, but in time-honoured tradition I will venture that the key difference is billions of years of evolution. Innumerable blooms and culls. And a system that is vertically integrated to its core and self-sustaining.

ben_w

The mathematical proof, as you describe it, sounds like the "No Free Lunch theorem". Humans also can't generalise to learning such things.

As you note in 2.1, there is widespread disagreement on what "AGI" means. I note that you list several definitions which are essentially "is human equivalent". As humans can be reduced to physics, and physics can be expressed as a computer program, obviously any such definition can be achieved by a sufficiently powerful computer.

For 3.1, you assert:

"""

Now, let's observe what happens when an AI system - equipped with state-of-the-art natural language processing, sentiment analysis, and social reasoning - attempts to navigate this question. The AI begins its analysis:

• Option 1: Truthful response based on biometric data → Calculates likely negative emotional impact → Adjusts for honesty parameter → But wait, what about relationship history? → Recalculating...

• Option 2: Diplomatic deflection → Analyzing 10,000 successful deflection patterns → But tone matters → Analyzing micro-expressions needed → But timing matters → But past conversations matter → Still calculating...

• Option 3: Affectionate redirect → Processing optimal sentiment → But what IS optimal here? The goal keeps shifting → Is it honesty? Harmony? Trust? → Parameters unstable → Still calculating...

• Option n: ....

Strange, isn't it? The AI hasn't crashed. It's still running. In fact, it's generating more and more nuanced analyses. Each additional factor may open ten new considerations. It's not getting closer to an answer - it's diverging.

"""

Which AI? ChatGPT just gives an answer. Your other supposed examples have similar issues, in that it looks like you've *imagined* an AI rather than having asked an AI to see what it actually does or doesn't do.

I'm not reading 47 pages to check for other similar issues.

ICBTheory

1. I appreciate the comparison — but I’d argue this goes somewhat beyond the No Free Lunch theorem.

NFL says that no optimizer performs best across all domains. But the core of this paper isn't about performance variability; it's about structural inaccessibility. Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy, no matter how clever or powerful. The model doesn't merely underperform here; the point is that the problem itself collapses the computational frame.

2. OMG, lool. ... just to clarify, there’s been a major misunderstanding :)

The "weight question" part is NOT a transcript from my actual life... thankfully - I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers and now post it here…

So:
- NOT a real thread,
- NOT a real dialogue with my wife...
- just an exemplary case...
- No, I am not brain-dead and/or categorically suicidal!!
- And just to be clear: I don't write this while sitting in some marital counseling appointment, or in my lawyer's office, the ER, or a coroner's drawer.

--> It’s a stylized, composite example of a class of decision contexts that resist algorithmic resolution — where tone, timing, prior context, and social nuance create an uncomputably divergent response space.

Again : No spouse was harmed in the making of that example.

;-))))

WhitneyLand

“This paper presents a theoretical proof that AGI systems will structurally collapse under certain semantic conditions…”

No it doesn’t.

Shannon entropy measures statistical uncertainty in data. It says nothing about whether an agent can invent new conceptual frames. Equating “frame changes” with rising entropy is a metaphor, not a theorem, so it doesn’t even make sense as a mathematical proof.

This is philosophical musing at best.

ICBTheory

Correct: Shannon entropy originally measures statistical uncertainty over a fixed symbol space. When the system is fed additional information/data, entropy goes down and uncertainty falls. This holds in situations where the possible outcomes are (a) sufficiently limited and (b) unequally distributed. In such cases, with enough input, the system can collapse the uncertainty function within a finite number of steps.
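A small sanity check of that standard picture, using a made-up joint distribution (not taken from the paper): on a fixed, finite symbol space, conditioning on additional data never increases entropy on average, i.e. H(X|Y) ≤ H(X).

```python
import numpy as np

# Toy joint distribution p(x, y) over a fixed, finite symbol space
# (an assumed example for illustration only).
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

h_x = entropy(p_xy.sum(axis=1))                          # H(X)
h_x_given_y = entropy(p_xy) - entropy(p_xy.sum(axis=0))  # H(X,Y) - H(Y)

print(f"H(X)   = {h_x:.3f} bits")
print(f"H(X|Y) = {h_x_given_y:.3f} bits")  # conditioning does not raise entropy on average
```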

But the paper doesn’t just restate Shannon.

It extends this very formalism to semantic spaces where the symbol set itself becomes unstable. These situations arise when (a) entropy is calculated across interpretive layers (as in LLMs), and (b) the probability distribution follows a heavy-tailed regime (α ≤ 1). Under these conditions, entropy divergence becomes mathematically provable.
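One way to state that divergence claim formally, under the assumption that the heavy-tailed regime means weights k^-α over a countably infinite set of interpretations (the paper's own formalization is in its appendices and may differ):

```latex
\[
  p_N(k) = \frac{k^{-\alpha}}{\sum_{j=1}^{N} j^{-\alpha}}, \qquad
  H(p_N) = -\sum_{k=1}^{N} p_N(k)\,\log p_N(k),
\]
\[
  \alpha \le 1 \;\Rightarrow\; \lim_{N\to\infty} H(p_N) = \infty,
  \qquad
  \alpha > 1 \;\Rightarrow\; \lim_{N\to\infty} H(p_N) < \infty .
\]
```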

This is far from being metaphorical: it's backed by formal Coq-style proofs (see Appendix C in the paper).

AND: it is exactly the mechanism that can explain the Apple paper's results.

yodon

I'm wondering if you may have rediscovered the concept of "Wicked Problems", which have been studied in system analysis and sociology since the 1970's (I'd cite the Wikipedia page, but I've never been particularly fond of Wikipedia's write up on them). They may be worth reading up on if you're not familiar with them.

gremlinsinc

Does this include the case where the AI can devise new components and use drones and the like to build a new, more capable iteration of itself to compute a thing, and keep repeating this, going out into the universe as needed for resources and using von Neumann probes, etc.?

proc0

The paper skips over the definition of AI. It jumps right into AGI, and that depends on what AI means. It could be LLMs, deep neural networks, or any possible implementation on a Turing machine. The latter, I suspect, would be extremely difficult to prove. So far almost everything can be simulated by Turing machines, and there's no reason they couldn't also simulate human brains, and therefore AGI. Even if the claim is that human brains are not enough for GI (and that our bodies are also part of the intelligence equation), we could still simulate an entire human being down to every cell, in theory (although in practice it wouldn't happen anytime soon, unless maybe with quantum computers, but I digress).

Still an interesting take and will need to dive in more, but already if we assume the brain is doing information processing then the immediate question is how can the brain avoid this problem, as others are pointing out. Is biological computation/intelligence special?

Takashoo

Turing machines only model computation. Real life is interaction. Check the work of Peter Wegner. When interaction machines enter the picture, AI can be embodied, situated, and participate in adaptation processes. The emergent behaviour may bring AGI into a pragmatic perspective. But interaction is far more expressive than computation, rendering theoretical analysis challenging.

proc0

Interaction is just another computation, and clearly we can interact with computers, and also simulate that interaction within the computer, so yes Turing machines can handle it. I'll check out Wegner.

predrag_peter

The difference between human and artificial intelligence (whatever "intelligence" is) is the following:

- AI is COMPLICATED (e.g. the world's Internet), yet it is REDUCIBLE and COUNTABLE (even if infinite).
- Human intelligence is COMPLEX; it is IRREDUCIBLE (and it does not need to be large; 3 is a good number for a complex system).
- AI has a chance of developing useful tools and methods and will certainly advance our civilization; it should not, however, be confused with intelligence (except by persons who do not discern complicated from complex).
- Everything else is poppycock.

Animats

Penrose did this argument better.[1] Penrose has been making that argument for thirty years, and it played better before AI started getting good.

AI via LLMs has limitations, but they don't come from computability.

[1] https://sortingsearching.com/2021/07/18/roger-penrose-ai-ske...

ICBTheory

Thanks — and yes, Penrose’s argument is well known.

But this isn’t that, as I’m not making a claim about consciousness or invoking quantum physics or microtubules (which, I agree, are highly speculative).

The core of my argument is based on computability and information theory — not biology. Specifically: that algorithmic systems hit hard formal limits in decision contexts with irreducible complexity or semantic divergence, and those limits are provable using existing mathematical tools (Shannon, Rice, etc.).
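For reference, Rice's theorem, one of the tools named above, in its standard form (not quoted from the paper):

```latex
\[
  \text{If } \mathcal{P} \text{ is a set of partial computable functions with }
  \emptyset \ne \{\, e \mid \varphi_e \in \mathcal{P} \,\} \ne \mathbb{N},
  \text{ then } \{\, e \mid \varphi_e \in \mathcal{P} \,\} \text{ is undecidable.}
\]
```

Informally: no algorithm can decide any non-trivial semantic property of programs from their code alone.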

So in some way, this is the non-microtubule version of AI critique. I don’t have the physics background to engage in Nobel-level quantum speculation — and, luckily, it’s not needed here.

viralsink

If I understood correctly, this is about finding solutions to problems that have an infinite solution space, where new information does not constrain it.

Humans don't have the processing power to traverse such vast spaces. We use heuristics, in the same way a chess player does not iterate over all possible moves.

It's a valid point to make, however I'd say this just points to any AGI-like system having the same epistemological issues as humans, and there's no way around it because of the nature of information.

Stephen Wolfram's computational irreducibility is another one of the issues any self-guided, physically grounded computing engine must have. There are problems that need to be calculated whole. Thinking long and hard about possible end-states won't help. So one would rather have 10,000 AGIs doing somewhat similar random searches in the hope that one finds something useful.

I guess this is what we do in global-scale scientific research.

tim333

This sounds rather silly. Given the usual definition of AGI as human-like intelligence, with some variation on how smart the humans are, and the fact that humans use a network of neurons that can largely be simulated by an artificial network of neurons, it's probably largely twaddle.

daedrdev

Clearly nature avoids this problem. So, theoretically, by replicating natural selection or something else in AI models (which arguably we already do), the theoretical entropy trap can clearly be avoided. We aren't even potentially decreasing entropy with AI training, since doing so uses power generation, which increases entropy.

rusk

It can certainly be avoided, but can it be avoided with the current or near-term technology about which many are saying "it's only a matter of time"?

kevin42

I like the distinction you made there. My observation is that when it comes to AGI, there are those who say "Not possible with the current technology" and those who say "Not possible at all, because humans have [insert some characteristic here about self-awareness, true creativity, etc.] and machines don't."

I can respect the first argument. I personally don't see any reason to believe AGI is impossible, but I also don't see evidence that it is possible with the current (very impressive) technology. We may never build an AGI in my lifetime, maybe not ever, but that doesn't mean it's not possible.

But the second argument, that humans do something machines aren't capable of, always falls flat to me for lack of evidence. If we're going to dismiss the possibility of something, we shouldn't do it without evidence. We don't have a full model of human intelligence, so I think it's premature to assume we know what isn't possible. All the evidence we have is that humans are biological machines, everything follows the laws of physics, and yet here we are. There isn't evidence that anything else is going on other than physical phenomena, and there isn't any physical evidence that a biological machine can't be emulated.

kelseyfrog

> And - as wonderfully remarkable as such a system might be - it would, for our investigation, be neither appropriate nor fair to overburden AGI by an operational definition whose implicit metaphysics and its latent ontological worldviews lead to the epistemology of what we might call a “total isomorphic a priori” that produces an algorithmic world-formula that is identical with the world itself (which would then make the world an ontological algorithm...?).

> Anyway, this is not part of the questions this paper seeks to answer. Neither will we wonder in what way it could make sense to measure the strength of a model by its ability to find its relative position to the object it models. Instead, we chose to stay ignorant - or agnostic? - and take this fallible system called "human". As a point of reference.

Cowards.

That's the main counter argument and acknowledging its existence without addressing it is a craven dodge.

Assuming the assumptions[1] are true, human intelligence isn't even able to be formalized under the same pretext.

Either human intelligence isn't:

1. Algorithmic. The main point of contention. If humans aren't algorithmically reducible - even at the level of the computation of physics - then human cognition is supernatural.

2. Autonomous. Trivially true given that humans are the baseline.

3. Comprehensive (general): Trivially true since humans are the baseline.

4. Competent: Trivially true given humans are the baseline.

I'm not sure how they reconcile this given that they simply dodge the consequences that it implies.

Overall, not a great paper. It's much more likely that their formalism is wrong than their conclusion.

Footnotes

1. not even the consequences, unfortunately for the authors.

ICBTheory

Just to make sure I understand:

–Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted? Or better: is that metaphysical setup an argument?

If that’s the game, fine. Here we go:

– The claim that one can build a true, perfectly detailed, exact map of reality is… well... ambitious. It sits remarkably far from anything resembling science, since it's conveniently untouched by that nitpicky empirical thing called evidence. But sure: freed from falsifiability, it can dream big and give birth to its omnicartographic offspring.

– oh, quick follow-up: does that “perfect map” include itself? If so... say hi to Alan Turing. If not... well, greetings to Herr Goedel.

– Also: if the world only shows itself through perception and cognition, how exactly do you map it “as it truly is”? What are you comparing your map to — other observations? Another map?

– How many properties, relations, transformations, and dimensions does the world have? Over time? Across domains? Under multiple perspectives? Go ahead, I’ll wait... (oh, and: hi too.. you know who)

And btw the true detailed map of the world exists.... It’s the world.

It’s just sort of hard to get a copy of it. Not enough material available ... and/or not enough compute....

P.S. Sorry if that came off sharp — bit of a spur-of-the-moment reply. If you want to actually dig into this seriously, I’d be happy to.

cainxinth

The crux here is the definition of AGI. The author seems to say that only an endgame, perfect information processing system is AGI. But that definition is too strict because we might develop something that is very far from perfect but which still feels enough like AGI to call it that.

like_any_other

So does the human brain transcend math, or are humans not generally intelligent?

geoka9

Humans are fallible in a way computers are not. One could argue any creative process is an exercise in fallibility.

More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer make such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.

[0]https://www.youtube.com/watch?v=LSHZ_b05W7o

ben_w

Hang on, hasn't everyone spent the past few years complaining about LLMs and diffusion models being very fallible?

And we can get LLMs to do better by just prompting them to "think step by step" or replacing the first ten attempts to output a "stop" symbolic token with the token for "Wait… "?


ICBTheory

Hi and thanks for engaging :-)

Well, it in fact depends on what intelligence is in your understanding:

- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through or were simply lucky to have found relativity theory and other innovations at the convenient moment in time... So then, AI will soon also stumble upon all kinds of innovations. Neither of the two will be able to deliberately think beyond what is thinkable at the respective present.

- But if intelligence is not only a level of pure rational cognition, but perhaps an ability to somehow overcome these frame limits, then humans obviously exercise some sort of abilities that are beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.

- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.

The main point is: neither algorithms nor rationality can point beyond itself.

In other words: You cannot think out of the box - thinking IS the box.

(Maybe have a quick look at my first proof - last chapter before the conclusion - you will find a historical timeline on that IQ thing.)

like_any_other

Let me steal another user's alternate phrasing: since humans and computers are both bound by the same physical laws, why does your proof not apply to humans?

ICBTheory

Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving. (And also: I am bound by thermodynamics just as my mother-in-law is; still, I get disarranged by her mere presence, while I always have to put laxatives in her wine to counter that.)

2. Human rationality is just as limited as algorithms are. Neither an algorithm nor human logic can find a path from Newton to Einstein's SR, because no such path exists.

3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.

In a nutshell: there obviously is no law that forbids us to innovate - we do this quite often. There is only a logical boundary, which says that there is no way to derive something from a system it is not already part of - no way for thinking to point beyond what is thinkable.

Imagine little Albert asking his physics teacher in 1880: "Sir, for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the probable answer... rather something like "Have you been drinking? Stop doing that mental crap - go away, you little moron!"


autobodie

Humans do a lot of things that computers don't, such as be born, age (verb), die, get hungry, fall in love, reproduce, and more. Computers can only metaphorically do these things; human learning is correlated with all of them, and we don't confidently know how. Have some humility.

onlyrealcuzzo

The point is that if it's mathematically possible for humans, then it naively would be possible for computers.

All of that just sounds hard, not mathematically impossible.

As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most mind-theory researchers refute.

andyjohnson0

TFA presents an information-theoretic argument for AGI being impossible. My reading of your parent commenter is that they are asking why this argument does not also apply to humans.

You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).

daedrdev

Taking GLP-1 makes me question how much hunger is really me versus my hormones controlling me.

ninetyninenine

We don’t even know how LLMs work. But we do know the underlying mechanisms are governed by math because we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

So because of this we know reality is governed by maths. We just can’t fully model the high level consequence of emergent patterns due to the sheer complexity of trillions of interacting atoms.

So it’s not that there’s some mysterious supernatural thing we don’t understand. It’s purely a complexity problem in that we only don’t understand it because it’s too complex.

What does humility have to do with anything?

hnfong

> we have a theory of reality that governs things down to the atomic scale and humans and LLMs are made out of atoms.

> So because of this we know reality is governed by maths.

That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.

Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.

> What does humility have to do with anything?

Not the GP but I think humility is kinda relevant here.

bigyabai

> We don’t even know how LLMs work

Speak for yourself. LLMs are a feedforward algorithm running inference over static weights to create a tokenized response string.
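Read charitably, the claim is about inference over fixed weights producing a token sequence. A minimal toy sketch of that picture (random weights, greedy decoding, no attention or training; nothing like a real LLM):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "<stop>"]
d = 8
# "Static weights": a fixed embedding table and output projection,
# never updated at inference time (a stand-in for a trained network).
emb = rng.normal(size=(len(vocab), d))
w_out = rng.normal(size=(d, len(vocab)))

def next_token(context_ids):
    # Feedforward pass: average the context embeddings, project to logits,
    # pick the most likely token (greedy decoding).
    h = emb[context_ids].mean(axis=0)
    logits = h @ w_out
    return int(np.argmax(logits))

ids = [0]  # start with "the"
for _ in range(5):
    ids.append(next_token(ids))
print(" ".join(vocab[i] for i in ids))
```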

We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.

IAmGraydon

>We don’t even know how LLMs work.

Care to elaborate? Because that is utter nonsense.

ffwd

I think humans have some kind of algorithm for deciding what's true and consolidating information. What that is I don't know.

fellowniusmonk

This paper is about the limits in current systems.

AI currently has issues with seeing what's missing - seeing the negative space.

When dealing with a complex codebase you are newly exposed to, you tackle an issue from multiple angles. You look at things in terms of data structures and code execution paths; basically, humans clearly have some pressure to go "fuck, I think I lost the plot," and then approach it from another paradigm, try to narrow scope, or, based on the increased information, isolate the core place where edits need to be made to achieve something.

Basically the ability to say, "this has stopped making sense" and stop or change approach.

Also, we clearly do path exploration and semantic compression in our sleep.

We also have the ability to transliterate data between semantic and visual structures, time series, and light algorithms (but not exponential algorithms; we have a known blind spot there).

Humans are better at seeing what's missing, better at not reaching premature closure, better at reducing scope using many different approaches; and because we operate in linear time and there are a lot of very different agents, we collectively nibble away at complex problems over time.

I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.

We also have different brain structures, and I assume they don't all function on a single algorithmic substrate: visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts that handle illogic better. We can introspect on our own semantic saturation; we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, and we can dive on that part and then zoom back out.

There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing and even then the message type used seems flexible enough that you can shove word data into a visual processor part and see what falls out, and this happens without us thinking about it explicitly.

ffwd

Yep definitely agree with this.

ICBTheory

I guess so too... but whatever it is, it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there that cannot be transcended by tech, compute, training, data, etc.

ffwd

Why can't it be algorithmic? If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does use the same process for things like consolidating information, processing the "world model", and so on.

Some processes are undoubtedly learned from experience, but considering that people seem to think many of the same things and are similar in many ways, it remains to be seen whether the most important parts are learned rather than innate from birth.

xeonmc

I think the latter fact is quite self-demonstrably true.

mort96

I would really like to see your definition of general intelligence and argument for why humans don't fit it.

ninetyninenine

Colloquially, anything that matches humans in general intelligence and is built by us is by definition an AGI and generally intelligent.

Humans are the bar for general intelligence.

umanwizard

How so?

deadbabe

First of all, math isn't any more real than language is. It's an entirely human construct, so it's possible you cannot reach AGI using mathematical means, as math might not be able to fully express it. It's similar to how language cannot fully describe what a color is, only give vague approximations and measurements. If you wanted to create the color green, you couldn't do it by describing various properties; you would have to create actual green somehow.

hnfong

As a somewhat colorblind person, I can tell you that the "actual green" is pretty much a lie :)

It's a deeply philosophical question what constitutes a subjective experience of "green" or whatever... but intelligence is a bit more tractable IMHO.

Workaccount2

I don't think it would be unfair to accept the brain state of green as an accurate representation of green for all intents and purposes.

Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.

like_any_other

Fair enough. But then, AGI wouldn't really be based on math, but on physics. Why would an artificially-constructed physical system have (fundamentally) different capabilities than a natural one?

ImHereToVote

Humans use soul juice to connect to the understandome. Machines can't connect to the understandome because of Gödel's incompleteness; they can only make relationships between tokens, not map them to reality like we can via magic.

Workaccount2

Stochastic parrots all the way down

https://ai.vixra.org/pdf/2506.0065v1.pdf

add-sub-mul-div

My take is that it transcends any science that we'll understand and harness in the lifetime of anyone living today. It for all intents and purposes transcends science from our point of view, but not necessarily in principle.

moktonar

Technically this is linked to the ability to simulate our universe efficiently. If it's efficiently simulable, then AGI is possible for sure; otherwise we don't know. Everything boils down to the existence or not of an efficient algorithm to simulate quantum physics. At the moment we don't know of any, except using QP itself (essentially hacking the universe's own algorithm and cheating) with quantum computing (which IMO will prove exponentially difficult to harness, at least as difficult as creating AGI). So, yes, brains might be > computers.

agitracking

I always wondered how much of human intelligence can be mapped to mathematics.

Also, interesting timing of this post - https://news.ycombinator.com/item?id=44348485