
Fully autonomous AI agents should not be developed

geor9e

Too late. I added a 5-minute cron job for Cursor AI's compose tab in agent mode that keeps replying "keep going, think of more fixes and features, random ideas are fine, do it all for me". I won't pull the plug.

djohnston

How do you programmatically interact with Cursor??

geor9e

Same way I botted RuneScape as a child: by simulating user inputs with any macro app.
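For anyone curious, it boils down to something like this (a minimal sketch using pyautogui, not my exact setup; it assumes the Cursor compose box already has keyboard focus and swaps the cron job for a sleep loop):

    import time
    import pyautogui  # pip install pyautogui

    PROMPT = ("keep going, think of more fixes and features, "
              "random ideas are fine, do it all for me")

    while True:
        # Assumes the Cursor compose box already has keyboard focus.
        pyautogui.write(PROMPT, interval=0.02)  # type the nag message
        pyautogui.press("enter")                # submit it to the agent
        time.sleep(300)                         # wait five minutes, then repeat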

voisin

You’ve created a monster

rickydroll

you say monster, I say plastic pal who's fun to be with

fragmede

AGI confirmed.

upghost

This is a purely procedural question, not supporting or critiquing in any way -- other than noting that this reads like an editorial in the format of a scientific paper. The question is: are there rules about what constitutes a paper, or can you put whatever you want in there as long as you follow "scientific paper format"?

dhruvbatra

This looks like ICML formatting (and the submission deadline just passed).

ICML25 has an explicit call for position papers: https://icml.cc/Conferences/2025/CallForPositionPapers

upghost

Wow, great observation. Thank you. Makes sense. I'd never heard of a "position paper" before.

mark_l_watson

I really enjoy Margaret Mitchell's podcast (she is the first author on the paper), and perhaps I missed something important in the paper, but:

Shouldn't we treat separately autonomous agents that we write ourselves, or purchase to run on our own computers, on our own data, and that use public APIs for data?

If Margaret is reading this thread, I am curious what her opinion is.

For autonomous agents controlled by corporations and governments, I mostly agree with the paper.

in3d

I'd recommend looking for other sources of information if you're relying on someone who co-authored the paper that introduced the most misleading and uninformed term of the LLM era: "stochastic parrot".

currymj

it was a pretty defensible term at the time the paper came out, in the context of how LLMs were being trained and used.

in this paper, it's clear that the authors don't think modern LLM-based systems are just stochastic parrots.

bamboozled

People are going to be developing these no matter what. Whether they wipe us out or not is just up to fate, really.

esafak

We can constrain their use, as with nuclear materials.

fizx

Nuclear materials have the advantages of being rare, dangerous to handle, and hard to copy over the internet.

johanneskanybal

No, not really. There's no power in the world that can restrain this in its current form even mildly, much less absolutely. Why do you think that would be even slightly possible?

esafak

For the same reason we can regulate other things? Encryption is regulated, for example. There "just" needs to be international co-operation, in the case of AI.

roenxi

Despite doing a pretty decent job of containing the risk, we're still on the clock until something terrible happens with nuclear war. Humanity appears to be well on track to killing millions to billions of people, rolling the dice relatively regularly and waiting for a 1% chance to materialize.

If we only handle AI that well, doom is probable. It has economic uses, unlike nuclear weapons, so there will be a thriving black market dodging the safety concerns.

redeux

At some point in the probably near future it will be much simpler to create an autonomous AI agent than a nuclear bomb.

esafak

True, so we need to make sure we don't find ourselves in a mess before it happens. Right now I don't see nearly enough concern given to risk management in industry. The safeguards companies put on their models are trivially subverted by hackers. We don't even know how to cope with an AI that would attempt to subvert its own constitution.

hollerith

So let's avoid that future.

bamboozled

Look at who has access to US nuclear codes now. I don’t believe it’s as constrained as you think.

gcanyon

It is a lot easier to detect illicit nuclear work compared to illicit AI work.

hollerith

It is hard to hide anything that uses as much electricity as a large training run.

Also there are only a few companies that can fab the semiconductors needed for these training runs.

ASalazarMX

In the unlikely event that we develop fully autonomous agents capable of crippling the world, that would mean we had also developed fully autonomous agents capable of keeping it safe.

Unless the first one is so advanced no other can challenge it, that is.

grayfaced

How did you jump to that conclusion? The agent will be limited by the capabilities under its control. We have the technological ability to cripple the world now, and we don't have the technological means to prevent it. Give one AI control of the whole US arsenal and the objective of ending the world. Give another AI the capabilities of the rest of the world and the objective of protecting it. Would you feel safe?

ASalazarMX

> We have the technological ability to cripple the world now and we don't have the technological means to prevent it

Humans have prevented it many times, but not specifically by technological ability. If Putin/Trump/Xi Jinping wanted a global nuclear war, they'd better have the means to launch the nukes themselves in secret, because the chain of command would challenge them.

If an out-of-control AI could discover a circuitous way to access nukes, an antagonist AI of equal capabilities should be able to figure it out too, and warn the humans in the loop.

I agree that AI development should be done responsibly, but not everyone does, and it's impossible to put the cat back in the bag. The limiting factor these days is hardware, as a true AGI will likely need even more of it than our current LLMs.

wendyshu

Fallacious

satisfice

No one should be allowed to develop software with bugs that lead to unlawful harm to others. And if they do it anyway, they should be punished under the law.

The thing with autonomous AI is that we already know it cannot be made safe in a way that satisfies lawmakers who are fully informed about how it works… unless they are bribed, I suppose.

Animats

Most of the arguments presented also apply to corporations.

There's no mention of externalities. That is, are the costs of AI errors borne by the operator of the AI, or by a third party?

numba888

Hmm... an agent cannot do self-supervised learning without actually acting. The trick is to keep it in a sandbox.

asdasdsddd

This has to be the least interesting paper I've ever read, with the most surface-level thinking.

> • Simple→Tool Call: Inaccuracy propagated to inappropriate tool selection.

> • Multi-step: Cascading errors compound risk of inaccurate or irrelevant outcomes.

> • Fully Autonomous: Unbounded inaccuracies may create outcomes wholly unaligned with human goals.

Just... lol

bdangubic

the best way to get people to stop doing X is to tell them not to do X. works so well with my kid :)

rtcode_io

Yet, we all know we will!

lysace

Our analysis reveals that risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values.