Agents Are Not Enough
116 comments
January 6, 2025 · tonetegeatinst
wkat4242
Totally agree. An LLM won't be an AGI.
It could be part of an AGI, specifically the human interface part. That's what an LLM is good at. The rest (knowledge oracle, reasoning etc) are just things that kinda work as a side-effect. Other types of AI models are going to be better at that.
It's just that since the masses found that they can talk to an AI like a human they think that it's got human capabilities too. But it's more like fake it till you make it :) An LLM is a professional bullshitter.
Terr_
> It's just that since the masses found that they can talk to an AI like a human
In a way it's worse: Even the "talking to" part is an illusion, and unfortunately a lot of technical people have trouble remembering it too.
In truth, the LLM is an idiot-savant which dreams up "fitting" additions to a given document. Some humans have prepared a document which is in the form of a theater-play or a turn-based chat transcript, with a pre-written character that is often described as a helpful robot. Then the humans launch some code that "acts out" any text that looks like it came from that fictional character, and inserts whatever the real-human-user types as dialogue for the document's human-character.
There's zero reason to believe that the LLM is "recognizing itself" in the story, or that it is choosing to insert itself into one of the characters. It's not having a conversation. It's not interacting with the world. It's just coded to Make Document Bigger Somehow.
> they think that it's got human capabilities too
Yeah, we easily confuse the character with the author. If I write an obviously-dumb algorithm which slaps together a story, it's still a dumb algorithm no matter how smart the robot in the story is.
lugu
I am not sure what you mean by LLM when you say they are professional bullshitters. While that was certainly true for models based on transformers just doing inference, recent models have progressed significantly.
Terr_
> I am not sure what you mean by LLM when you say they are professional bullshitters.
Not parent-poster, but an LLM is a tool for extending a document by choosing whatever statistically-seems-right based on other documents, and it does so with no consideration of worldly facts and no modeling of logical propositions or contradictions. (Which also relates to math problems.) If it has been fed on documents with logic puzzles and prior tests, it may give plausible answers, but tweaking the test to avoid the pattern-matching can still reveal that it was a sham.
The word "bullshit" is appropriate because a human bullshitter is someone who picks whatever "seems right" with no particular relation to facts or logical consistency. It just doesn't matter to them. Meanwhile, a "liar" can actually have a harder job, since they must track what is/isn't true and craft a story that is as internally-consistent as possible.
Adding more parts around an LLM won't change that: even if you add some external sensors, a calculator, a SAT solver, etc. to create a document with facts in it, once you ask the LLM to make the document bigger, it's going to be bullshitting the additions.
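To make the "make the document bigger" framing concrete, here is a toy sketch of the sampling loop. The next_token_distribution function is a hypothetical stand-in for the trained network, not any real API:

    import random

    def next_token_distribution(document: str) -> dict[str, float]:
        # Stand-in for the trained model: given the document so far, return a
        # probability distribution over possible next tokens. A real LLM computes
        # this with a neural network; this toy just favours a canned continuation.
        if document.endswith("Assistant:"):
            return {" 4": 0.7, " 5": 0.2, " fish": 0.1}
        return {".": 0.6, "!": 0.4}

    def extend_document(document: str, n_tokens: int = 2) -> str:
        # The only operation performed: repeatedly sample a statistically
        # plausible next token and append it to the document.
        for _ in range(n_tokens):
            dist = next_token_distribution(document)
            tokens, weights = zip(*dist.items())
            document += random.choices(tokens, weights=weights)[0]
        return document

    # A "chat" is just a document shaped like a transcript; the surrounding
    # code acts out whatever text lands after "Assistant:".
    print(extend_document("User: What is 2 + 2?\nAssistant:"))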
daxfohl
There's a _lot_ of smoke and mirrors. Paste a sudoku into chatgpt and ask it to solve. Amazing, it does it perfectly! Of course that's because it ran a sudoku-solving program that it pulled off github.
Now ask it to solve step by step by pure reasoning. You'll get a really intelligent sounding response that sounds correct, but on closer inspection makes absolutely no sense, every step has ridiculous errors like "we start with options {1, 7} but eliminate 2, leaving only option 3", and then at the end it just throws all that out and says "and therefore ..." and gives you the original answer.
That tells me there's essentially zero reasoning ability in these things, and anything that looks like reasoning has been largely hand-baked into it. All they do on their own is complete sentences with statistically-likely words. So yeah, as much as people talk about it, I don't see us as being remotely close to AGI at this point. Just don't tell the investors.
conception
On the other side of the coin, I think people also underestimate how much of human thinking and intelligence is just completing statistically likely words. Most actions, and certainly reactions, people do every day involve very little reasoning; they just follow the most-used neurons.
phaedrus
But the G in AGI stands for General. I think the hope is that there is some as-yet-undiscovered algorithm for general intelligence. While I agree that deferring to a subsystem that is an expert in that type of problem is the best way to handle problems, I would hope that the central coordinator would be able not just to delegate but to design new subsystems as needed. Otherwise what happens when you run out of types of expert problem solvers to use (and still haven't solved the problem well)?
One might argue maybe a mixture of experts is just the best that can be done - and that it's unlikely the AGI would be able to design new experts itself. However, where do the limited existing expert problem solvers come from? Well - we invented them. Human intelligences. So to argue that an AGI could NOT come up with its own novel expert problem solvers implies there is something ineffable about human general intelligence that can't be replicated by machine intelligence (which I don't agree with).
lukeplato
I don't see why a mixture of experts couldn't be distilled into a single model and unified latent space
energy123
You could, but in many cases you wouldn't want to. You will get superior results with a fixed compute budget by relying on external tool use (where "tool" is defined liberally, and can include smaller narrow neural nets like GraphCast & AlphaGo) rather than stuffing all tools into a monolithic model.
daxfohl
Isn't that what the original ResNet project disproved? Rather than trying to hand-engineer what the NN should look for, just make it deep enough and give it enough training data, and it'll figure things out on its own, even better than if we told it what to look out for.
Of course, cost-wise and training time wise, we're probably a long way off from being able to replicate that in a general purpose NN. But in theory, given enough money and time, presumably it's possible, and conceivably would produce better results.
zaroth
Exactly what DeepSeek3 is doing.
pton_xd
> However I would argue an LLM will never build a trained model like Stockfish or DeepSeek.
It doesn't have to, the LLM just needs access to a computer. Then it can write the code for Stockfish and execute it. Or just download it, the same way you or I would.
> True AGI or SI would stop trying to be a grand master of everything but rather know what best method/model should be applied to a given problem.
Yep, but I don't see how that relates to LLMs not reaching AGI. They can already write basic Python scripts to answer questions, they just need (vastly) more advanced scripting capabilities.
Upvoter33
"If AGI ... is possible"
I don't get this line of thinking. AGI already exists - it's in our heads!
So then the question is: is what's in our heads magic, or can we build it? If you think it's magic, fine - no point arguing. But if not, we will build it one day.
trescenzi
GI is in our heads. The A is artificial which means built by humans. They are asking the same question you are.
nuancebydefault
Indeed! That's what I have been thinking for a while, but I never had the occasion and/or breath to write it down, and you explained it concisely. Finally some 'confirmation' 'bias'...
rnr25
[dead]
georgestrakhov
IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually.
So we should think about these things in terms of how much agency we are willing to give away in each case and for what gain[1].
Then the ecosystem question that the paper is trying to solve will actually solve itself, because it is already the case today that in many processes agency has been outsourced almost fully and in others - not at all. I posit that this will continue, just expect a big change of ratios and types of actions.
[1] https://essays.georgestrakhov.com/artificial-agency-ladder/
HarHarVeryFunny
An agent, or something that has agency, is just something that takes some action, which could be anything from a thermostat regulating the temperature all the way up to an autonomous entity such as an animal going about its business.
Hugging Face have their own definitions of a few different types of agent/agentic system here:
https://huggingface.co/docs/smolagents/en/conceptual_guides/...
As related to LLMs, it seems most people are using "agent" to refer to systems that use LLMs to achieve some goal - maybe a fairly narrow business objective/function that can be accomplished by using one or more LLMs as a tool to accomplish various parts of the task.
w10-1
> IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually
Yes, the term is becoming ambiguous, but that's because it's abstracting out the part of AI that is most important and activating: the ability to work both independently and per intention/need.
Per the paper: "Key characteristics of agents include autonomy, programmability, reactivity, and proactiveness.[...] high degree of autonomy, making decisions and taking actions independently of human intervention."
Yes, "the ecosystem will evolve," but to understand and anticipate the evolution, one needs a notion of fitness, which is based on agency.
> So we should think about these things in terms of how much agency we are willing to give away in each case
It's unclear there can be any "we" deciding. For resource-limited development, the ecosystem will evolve regardless of our preferences or ethics according to economic advantage and capture of value. (Manufacturing went to China against the wishes of most everyone involved.)
More generally, the value of AI is not just replacing work. It's giving more agency to one person, avoiding the cost and messiness of delegation and coordination. It's gaining the same advantages seen where a smaller team can be much more effective than a larger one.
Right now people are conflating these autonomy/delegation features with the extension features of AI agents (permitting them to interact with databases or web browsers). The extension vendors will continue to claim agency because it's much more alluring, but the distinction will likely become clear in a year or so.
paulryanrogers
> Manufacturing went to China against the wishes of most everyone involved
Certainly those in China and the executive suites of Western countries wished it, and made it happen. Arguably the western markets wanted it too when they saw the prices dropping and offerings growing.
AI isn't happening in a vacuum. Shareholders and customers are buying it.
ocean_moist
Maybe I just don’t understand the article but I really have 0 clue how they go about making their conclusions and really don’t understand what they are saying.
I think the 5 issues they provide under “Cognitive Architectures” are severely underspecified to the point where they really don’t _mean_ anything. Because the issues are so underspecified I don’t know how their proposed solution solves their proposed problems. If I understand it correctly, they just want agents (Assistants/Agents) with user profiles (Sims) on an app store? I’m pretty sure this already exists on the ChatGPT store. (sims==memories/user profiles, agents==tools/plugins, assistants==chat interface)
This whole thing is so broad and full of academic (pejorative) platitudes that it’s practically meaningless to me. And of course, although completely unrelated, they throw in a reference to symbolic systems. Academic theater.
spiderfarmer
This is publishing for the sake of publishing.
antisthenes
It's a 4-page paper trying to give a summary of 40+ years of research on AI.
Of course it's going to be vague and presumptuous. It's more of a high-level executive summary for tech-adjacent folks than an actual research paper.
TaurenHunter
"More Agents is all you need" https://arxiv.org/abs/2402.05120
I could not find a "Agents considered harmful" related to AI, but there is this one: "AgentHarm: A benchmark for measuring harmfulness of LLM agents" https://arxiv.org/pdf/2410.09024
This "Agents considered harmful" is not AI-related: https://www.scribd.com/document/361564026/Math-works-09
ksplicer
When reading Anthropic's blog on agents, I basically took away that their advice is you shouldn't use them to solve most problems.
https://www.anthropic.com/research/building-effective-agents
"For many applications, however, optimizing single LLM calls with retrieval and in-context examples is usually enough."
retinaros
True, this was also my conclusion in October. Most of the complexity we are building is to fight against the limitations of LLMs. If in some way we could embed all our tools in a single call and have the LLM successfully figure out which tools to call, then that would be it and we wouldn't need any of those frameworks or libraries. But it turns out the reality of agents and tool use is pretty stark, and you wouldn't know that looking at the AI influencers spamming X, LinkedIn, and YouTube.
However, the state of agents has changed slightly: while we had 25% accuracy in multi-turn conversations, we're now at 50%.
kridsdale1
Morpheus taught me they are quite harmful.
dist-epoch
Real agents have never been tried
bob1029
I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system. Human (domain expert) in the loop gives you a reasonable chance of recovering from hallucinations before they spiral entirely out of control.
"LLM as UI" seems to be something hanging pretty low on the tree of opportunity. Why spent months struggling with complex admin dashboard layouts and web frameworks when you could wire the underlying CRUD methods directly into LLM prompt callbacks? You could hypothetically make the LLM the exclusive interface for managing your next SaaS product. There are ways to make this just as robust and secure as an old school form punching application.
barrkel
It's quite tedious to have to write (or even say) full sentences to express intent. Imagine driving a car with a voice interface, including accelerator, brake, indicators and so on. Controls are less verbose and dashboards are more information rich than linear text.
It's difficult to be precise. Often it's easier to gauge things by looking at them while giving motor feedback (e.g. turning a dial, pushing a slider) than to say "a little more X" or "a bit less Y".
Language is poorly suited to expressing things in continuous domains, especially when you don't have relevant numbers that you can pick out of your head - size, weight, color etc. Quality-price ratio is a particularly tough one - a hard numeric quantity traded off against something subjective.
Most people can't specify up front what they want. They don't know what they want until they know what's possible, what other people have done, started to realize what getting what they want will entail, and then changed what they want. It's why we have iterative development instead of waterfall.
LLMs are a good start and a tool we can integrate into systems. They're a long, long way short of what we need.
GiorgioG
re: LLM as UI: Given that I don't trust LLMs to be deterministic, I wouldn't trust them to make the correct API call every time I tell it to do X.
kgeist
I think most users have a fixed set of workflows which usually don't change from day to day, so why not just use LLMs as a macro builder with a natural language interface (and which doesn't require you to know the product's UI well beforehand):
- you ask LLM to build a workflow for your problem
- the LLM builds the workflow (macro) using predefined commands
- you review the workflow (can be an intuitive list of commands, understandable by non-specialist) - to weed out hallucinations and misunderstanding
- you save the workflow and can use it without any LLM agents, just by clicking a button - pretty deterministic and reliable (see the sketch after this list)
Advantages:
- reliable, deterministic
- you don't need to learn a product's UI, you just formulate your problem using natural language
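A minimal sketch of that flow, assuming a hypothetical llm_propose_workflow call; the command names are illustrative:

    # Sketch of the macro-builder flow: the LLM only proposes a workflow; the
    # saved workflow is replayed by plain, deterministic code with no LLM at all.
    # llm_propose_workflow is a hypothetical stand-in for that one model call.
    COMMANDS = {
        "open_report": lambda name: print(f"opening report {name!r}"),
        "filter_rows": lambda column, value: print(f"keeping rows where {column} == {value!r}"),
        "export_csv":  lambda path: print(f"exporting to {path}"),
    }

    def llm_propose_workflow(request: str) -> list[tuple[str, dict]]:
        # Hypothetical: a real implementation asks the model to translate the
        # natural-language request into the predefined commands above, nothing else.
        return [
            ("open_report", {"name": "sales"}),
            ("filter_rows", {"column": "region", "value": "EMEA"}),
            ("export_csv", {"path": "sales_emea.csv"}),
        ]

    workflow = llm_propose_workflow("Export EMEA sales to a CSV")
    # The user reviews this readable command list, then saves it; replaying it
    # later is a deterministic loop with no LLM involved.
    for command, args in workflow:
        COMMANDS[command](**args)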
shekhargulati
This is the same approach we took when we added LLM capability to a low-code tool, Appian. The LLM helped us generate the Appian workflow configuration file; the user reviews it, makes changes if required, and then finally publishes it.
bob1029
> you review the workflow (can be an intuitive list of commands, understandable by non-specialist) - to weed out hallucinations and misunderstanding
This is the idea that is most valuable from my perspective of having tried to extract accurate requirements from the customer. Getting them to learn your product UI and capabilities is an uphill battle if you are in one of the cursed boring domains (banking, insurance, healthcare, etc.).
Even if the customer doesn't get the LLM-defined path to provide their desired final result, you still have their entire conversation history available to review. This seems more likely to succeed in practice than hoping the customer provides accurate requirements up-front in some unconstrained email context.
nyrikki
So visual programming x.0?
I am pretty sure PLCs with ladder logic are about the limits of the traditional visual/macro model?
Word-sense disambiguation is going to be problematic with the 'don't need to learn' part above.
Consider this sentence:
'I never said she stole my money'
Now read that sentence multiple times, putting emphasis on each word, one at a time, and notice how the semantic meaning changes.
LLMs are great at NLP, but we still don't have solutions to those NLU problems that I am aware of.
I think that to keep maximum generality without severely restricting use cases, a common DSL would need to be developed.
There will have to be tradeoffs made, specific to particular use cases, even if it is better than Alexa.
But I am thinking about Rice's theorem and what happens when you lose PEM.
Maybe I am just too embedded in an area where these problems are a large part of the difficulty for macro-style logic to provide much use.
dingnuts
>- you review the workflow (can be an intuitive list of commands, understandable by non-specialist)
so you define a DSL that the LLM outputs, and that's the real UI
>- you don't need to learn a product's UI, you just formulate your problem using natural language
yes, you do. You have to learn the DSL you just manifested so that you can check it for errors. Once you have the ability to review the LLM's output, you will also have the ability to just write the DSL to get the desired behavior, at which point that will be faster unless it's a significant amount of typing, and even then, you will still need to review the code generated by the LLM, which means you have to learn and understand the DSL. I would much rather learn a GUI than a DSL.
You haven't removed the UI, nor have you made the LLM the UI, in this example. The DSL (the "intuitive list of commands"... I guess it'll look like the Robot Framework, right? That's what human-readable DSLs tend to look like in practice) is the actual UI.
This is vastly more complicated than having a GUI to perform an action.
hitchstory
I don't either, but this can be mitigated by adding guard rails (strictly validating input), double-checking actions with the user, and using it for tasks where a mistake isn't world-ending.
Even then mistakes can slip through, but it could still be more reliable than a visual UI.
There are lots of horrible web UIs I would LOVE to replace with a conversational LLM agent. No. 1 is Jira, and so are No. 2 and No. 3.
deadbabe
They are deterministic at 0 temperature
lokhura
At zero temp there is still non-determinism, due to how sampling is implemented and the fact that floating-point addition is not associative, so you will get varying results from parallelism.
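A quick illustration of the floating-point part: summing the same numbers in a different grouping, as parallel reductions do, changes the result:

    a, b, c = 1.0, 1e100, -1e100
    print((a + b) + c)  # 0.0: the 1.0 is lost when it is added to 1e100 first
    print(a + (b + c))  # 1.0: the two large terms cancel first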
BalinKing
(Disclaimer: I know literally nothing about LLMs.) Wouldn't there still be issues of sensitivity, though? Like, wouldn't you still have to ensure that the wording of your commands stays exactly the same every time? And with models that take less discrete data (e.g. ChatGPT's new "advanced voice model" that works on audio directly), this seems even harder.
wkat4242
They are pretty deterministic then but they are also pretty useless at 0 temperature.
ukuina
Not for the leading LLMs from OpenAI and Anthropic.
pwillia7
I had the same epiphany about LLM as UI trying to build a front end for an image-enhancer workflow I built with Stable Diffusion. I just about fully built out a Chrome extension and then realized I should just build a 'tool' that llama can interact with, and use Open WebUI as the front end.
quick demo: https://youtu.be/2zvbvoRCmrE
diggan
> I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system
That's a funny definition to me, because doing so would mean the LLM is the agent, if you use the classic definition for "user-agent" (as in what browsers are). You're basically inverting that meaning :)
klabb3
> "LLM as UI" seems to be something hanging pretty low on the tree of opportunity.
Yes, if you want to annoy your users and deliberately put roadblocks in the way of progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It’s not even a secret.
> Why spent months struggling with complex admin dashboard layouts
You can throw something together, and even auto generate forms based on an API spec. People don’t do this too often because the UX is insufficient even for many internal/domain expert support applications. But you could and it would be deterministic, unlike an LLM. If the API surface is simple, you can make it manually with html & css quickly.
Overuse of web frameworks has completely different causes than "I need a functional thing" and thus it cannot be solved with a different layer of tech like LLMs, NFTs, or big data.
wkat4242
> Yes if you want to annoy your users and deliberately put roadblocks to make progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It’s not even a secret.
No, this is because they use the LLM not only as a human interface but also as a reasoning engine for troubleshooting. And they give it far less capability than a human agent, to boot. So all it can really do is serve FAQs and route to real support.
In this case the fault is not with the LLM but with the people that put it there.
beezle
For those who don't want to download the PDF directly and prefer to start with the abstract: https://arxiv.org/abs/2412.16241
danielmarkbruce
Why post this paper? It says nothing, it's a waste of people's time to read.
duxup
Even just the definition of an Agent (maybe imperfect) made it worthwhile for me.
danielmarkbruce
I'm not sure it's even good though... the input doesn't need to come from a user. I have an "agent" which listens for an event in financial markets and then goes and does some stuff.
In practice the current usage of "agent" is just: a program which does a task and uses an LLM somewhere to help make a decision as to what to do and maybe uses an LLM to help do it.
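Under that working definition, an "agent" can be as small as this sketch; llm_decide is a hypothetical stand-in for any chat-completion call, and note there is no user in the loop at all:

    # A plain program with an LLM call somewhere in the decision step.
    def llm_decide(event: dict) -> str:
        # Hypothetical: a real implementation would prompt a model with the
        # event details and the list of allowed actions.
        return "send_alert" if event["move_pct"] > 5 else "ignore"

    def handle_market_event(event: dict) -> str:
        action = llm_decide(event)  # the only part an LLM touches
        if action == "send_alert":
            return f"ALERT: {event['ticker']} moved {event['move_pct']}%"
        return "no action"

    print(handle_market_event({"ticker": "ACME", "move_pct": 7.2}))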
joshka
https://www.arxiv.org/abs/2412.16241 is the non-pdf version of this @dang can you please replace the link?
pwillia7
How would the Sims that contain the user prefs and whatnot not have the same issues described in the paper as the agents themselves?
asciii
Diabolical - I love it. Impressed that the final score came up as an alert!
syntex
Why does this have so many upvotes? Is this the current state of research nowadays?
coro_1
The paper covers technical details and the logistics of the AI agents to come. But how are humans going to react to mass AI agents replacing other humans for emotion and connection? Tech culture's bias is to think only about the agents themselves, but this could become an issue.
Somewhat related, but here's my take on super intelligence or AGI. I have worked with CNNs, GNNs, and other old-school AI methods, but don't have the resources to build a real SOTA LLM, though I do use and tinker with LLMs occasionally.
If AGI or SI (super intelligence) is possible, and that is an if... I don't think LLMs are going to be this silver-bullet solution. Just as the real world has people dedicated to a single task in their field, like lawyers or construction workers or doctors and brain surgeons, I see the current best path forward as being a "mixture of experts". We know LLMs are pretty good at what I've seen some refer to as NLP problems, where the model input is the tokenized string input. However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek. Certain model types seem to be suited to certain issues/types of problems or inputs.
True AGI or SI would stop trying to be a grand master of everything but rather know what best method/model should be applied to a given problem. We still do not know if it is possible to combine the knowledge of different types of neural networks like LLMs, convolutional neural networks, and deep learning... and while it's certainly worth exploring, it is foolish to pin all hope on a single solution approach. I think the first step would be to create a new type of model that, given a problem of any type, knows the best method to solve it. And it doesn't rely on itself but rather on a mixture of agents or experts. And they don't even have to be LLMs. They could be anything.
Where this really would explode is if the AI were able to identify a problem that it can't solve and invent or come up with a new approach, or multiple approaches, because we don't have to be the ones who develop every expert.