kscarlet
The arguments only apply to hand-written programs and a particular flavor of AI (symbolicist AI) which has long fallen out of favor. They don't apply to most AI agents today. They definitely do not apply to reinforcement learning, and probably do not apply to any neural network. The paper says nothing that differentiates RL agents from living organisms (maybe there really isn't much difference anyway).
I can't help but point out that the title really seems to suggest something the article is not claiming at all. In particular, it says nothing about whether most of today's AI agent approaches can lead to agency and cognition.
mindwok
This is basically a PhD-level rant. There's nothing empirical in here that makes a convincing argument for why a machine could never "understand" why life is precious. Argument via dense word salad doesn't really sway me.
margorczynski
What would be the official definition of agency and cognition? And I mean without the usual pseudo-philosophy and hand-waving that you see when discussing such topics.
This is a pretty weak argument IMHO. They constrain their definition of algorithms even beyond the already narrow definition they give below:
> In contrast, algorithms—broadly defined as automated computational procedures, i.e., finite sets of symbols encoding operations that can be executed on a universal Turing machine—exist in a “small world” (Savage, 1954). They do so by definition, since they are embedded and implemented within a predefined formalized ontology (intuitively: their “digital environment” or “computational architecture”), where all problems are well-defined. They can only mimic (emulate, or simulate) partial aspects of a large world: algorithms cannot identify or solve problems that are not precoded (explicitly or implicitly) by the rules that characterize their small world (Cantwell Smith, 2019). In such a world, everything and nothing is relevant at the same time.
If instead you define an algorithm as "anything that can be computed by a Turing machine", this "problem of relevance", which is at the heart of their argument, drops away.
Even comparatively simple algorithms (e.g. BM25) can handle relevance very well, and LLMs can do even better at the expense of more compute.
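To make that concrete, here's a minimal BM25 sketch in Python (the toy corpus, query, and parameter values are made up for illustration): a few dozen lines of standard term-frequency machinery already yield a usable notion of relevance, with no "large world" ontology in sight.

```python
# Minimal BM25 sketch: rank documents by term relevance to a query.
# Toy illustration only; corpus, query, and k1/b values are invented.
import math
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "dogs and cats are common pets".split(),
    "the stock market fell sharply today".split(),
]

k1, b = 1.5, 0.75                                    # standard BM25 tuning parameters
N = len(corpus)                                      # number of documents
avgdl = sum(len(d) for d in corpus) / N              # average document length
df = Counter(t for doc in corpus for t in set(doc))  # document frequency per term

def bm25(query, doc):
    """BM25 score of one tokenized document against a tokenized query."""
    tf = Counter(doc)
    score = 0.0
    for term in query:
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * norm
    return score

query = "cat pets".split()
for doc in sorted(corpus, key=lambda d: bm25(query, d), reverse=True):
    print(round(bm25(query, doc), 3), " ".join(doc))
```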
These examples are incomplete, but I'd argue they're indicative of the weakness of the paper's arguments.