Agents Are Not Enough

164 comments · January 6, 2025

tonetegeatinst

Somewhat related, but here's my take on superintelligence or AGI. I have worked with CNNs, GNNs and other old-school AI methods, but I don't have the resources to build a real SOTA LLM, though I do use and tinker with LLMs occasionally.

If AGI or SI (superintelligence) is possible, and that is an if... I don't think LLMs are going to be the silver-bullet solution. Just as the real world has people dedicated to a single task in their field, like lawyers, construction workers, doctors or brain surgeons, I see the current best path forward as a "mixture of experts". We know LLMs are pretty good at what I've seen some refer to as NLP problems, where the model input is a tokenized string. However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek. Certain model types seem to be suited to certain types of problems or inputs.

True AGI or SI would stop trying to be a grandmaster of everything and instead know which method/model should be applied to a given problem. We still do not know if it is possible to combine the knowledge of different types of neural networks, like LLMs, convolutional neural networks, and other deep learning approaches... and while it's certainly worth exploring, it is foolish to throw all hope onto a single approach. I think the first step would be to create a new type of model which, given a problem of any type, knows the best method to solve it. And it doesn't rely on itself but rather on a mixture of agents or experts. They don't even have to be LLMs. They could be anything.

Where this would really explode is if the AI were able to identify a problem that it can't solve and invent or come up with a new approach, or multiple approaches, because then we don't have to be the ones who develop every expert.

wkat4242

Totally agree. An LLM won't be an AGI.

It could be part of an AGI, specifically the human interface part. That's what an LLM is good at. The rest (knowledge oracle, reasoning etc) are just things that kinda work as a side-effect. Other types of AI models are going to be better at that.

It's just that since the masses found that they can talk to an AI like a human, they think that it's got human capabilities too. But it's more like fake it till you make it :) An LLM is a professional bullshitter.

Terr_

> It's just that since the masses found that they can talk to an AI like a human

In a way it's worse: Even the "talking to" part is an illusion, and unfortunately a lot of technical people have trouble remembering it too.

In truth, the LLM is an idiot-savant which dreams up "fitting" additions to a given document. Some humans have prepared a document in the form of a theater play or a turn-based chat transcript, with a pre-written character that is often described as a helpful robot. Then the humans launch some code that "acts out" any text that looks like it came from that fictional character, and inserts whatever the real human user types as dialogue for the document's human character.

There's zero reason to believe that the LLM is "recognizing itself" in the story, or that it is choosing to self-insert into one of the characters. It's not having a conversation. It's not interacting with the world. It's just coded to Make Document Bigger Somehow.

> they think that it's got human capabilities too

Yeah, we easily confuse the character with the author. If I write an obviously-dumb algorithm which slaps together a story, it's still a dumb algorithm no matter how smart the robot in the story is.

jdonaldson

Just wanted to point out that the notion of a "document" is also an illusion to the LLM. It's processing a sequence of low dimensional spaces into another sequence of low dimensional spaces. The input spaces preserve aspects of content similarity based on co-occurrence. The model learns to transform these spaces into higher order spaces based on the outcome of training.

You couldn't say that the model has a singular sense of self, but it has certainly been trained on data that allows it to mimic one in short spurts, and mimicry is how humans learn more complex/abstract tasks. The training goal is not to learn how to "be", but rather to learn how to "do" the parts necessary to continue existing.

"Fake it till you make it" is really all that's required to exist in the world.

skrebbel

> In truth, the LLM is an idiot-savant which dreams up "fitting" additions to a given document.

Tbh I'm not too sure that my brain works fundamentally differently. I'm an idiot-savant who responds to stimuli.

lugu

I am not sure what you mean by LLM when you say they are professional bullshitters. While that was certainly true for models based on transformers just doing inference, recent models have progressed significantly.

Terr_

> I am not sure what you mean by LLM when you say they are professional bullshitters.

Not parent-poster, but an LLM is a tool for extending a document by choosing whatever statistically seems right based on other documents, and it does so with no consideration of worldly facts and no modeling of logical propositions or contradictions. (Which also relates to math problems.) If it has been fed documents with logic puzzles and prior tests, it may give plausible answers, but tweaking the test to avoid the pattern-matching can still reveal that it was a sham.

The word "bullshit" is appropriate because human bullshitter is someone who picks whatever "seems right" with no particular relation to facts or logical consistency. It just doesn't matter to them. Meanwhile, a "liar" can actually have a harder job, since they must track what is/isn't true and craft a story that is as internally-consistent as possible.

Adding more parts around an LLM won't change that: even if you add some external sensors, a calculator, a SAT solver, etc. to create a document with facts in it, once you ask the LLM to make the document bigger, it's going to be bullshitting the additions.

daxfohl

There's a _lot_ of smoke and mirrors. Paste a sudoku into ChatGPT and ask it to solve it. Amazing, it does it perfectly! Of course, that's because it ran a sudoku-solving program that it pulled off GitHub.

Now ask it to solve step by step by pure reasoning. You'll get a really intelligent-sounding response that seems correct, but on closer inspection makes absolutely no sense; every step has ridiculous errors like "we start with options {1, 7} but eliminate 2, leaving only option 3", and then at the end it just throws all that out, says "and therefore ...", and gives you the original answer.

That tells me there's essentially zero reasoning ability in these things, and anything that looks like reasoning has been largely hand-baked into it. All they do on their own is complete sentences with statistically-likely words. So yeah, as much as people talk about it, I don't see us as being remotely close to AGI at this point. Just don't tell the investors.

conception

On the other side of the coin, I think people also underestimate how much of human thinking and intelligence is just completing statistically likely words. Most actions, and certainly most reactions, people perform every day involve very little reasoning, instead just following the most used neuron.

seadan83

Human vision works this way. To fix the latency problem (the actual event happening vs. the signal reaching your brain), human vision is constantly predicting what you should see; your brain tells you that the prediction is what you saw, and then the brain does reconciliation after the fact. Your brain will scramble for coherency when prediction and reality do not match. This trickery is why it seems like you see events in real time, when there is actually a significant delay between event and perception.

Though, there are error-correction mechanisms, systems for validation, and a coherent underlying model of the world that is used by the brain.

FWIW, it is likely the most-used set of neuron connections, with sets of millions in play and their interconnections being the important part. That subset is one of billions of others, with thousands of connections between each neuron - keep in mind it is not the set of neurons firing that matters, but the set of connections firing. The number of possible sets of connections is vastly larger.

Like, if you have three neurons, your brain can encode 10 data points. Let's call them A, B, C. Each neuron firing and terminating on its own is one (so three of those), each single edge, e.g. A to B, is another (three more), each pair of edges, e.g. A to B to C, is another (three more), and all three edges together is one more. Then keep in mind you have billions of neurons and they are each interconnected by the thousands.

ilbeeper

Citation needed. The word reasoning isn't describing everything that the brain does, and "just following the most used neuron" is not even wrong.

pton_xd

> However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek.

It doesn't have to, the LLM just needs access to a computer. Then it can write the code for Stockfish and execute it. Or just download it, the same way you or I would.

> True AGI or SI would stop trying to be a grandmaster of everything and instead know which method/model should be applied to a given problem.

Yep, but I don't see how that relates to LLMs not reaching AGI. They can already write basic Python scripts to answer questions, they just need (vastly) more advanced scripting capabilities.

lukeplato

I don't see why a mixture of experts couldn't be distilled into a single model and unified latent space

energy123

You could, but in many cases you wouldn't want to. You will get superior results with a fixed compute budget by relying on external tool use (where "tool" is defined liberally, and can include smaller, narrow neural nets like GraphCast and AlphaGo) rather than stuffing all tools into a monolithic model.

daxfohl

Isn't that what the original ResNet project disproved? Rather than trying to hand-engineer what the NN should look for, just make it deep enough and give it enough training data, and it'll figure things out on its own, even better than if we told it what to look out for.

Of course, cost-wise and training time wise, we're probably a long way off from being able to replicate that in a general purpose NN. But in theory, given enough money and time, presumably it's possible, and conceivably would produce better results.

zaroth

Exactly what DeepSeek-V3 is doing.

phaedrus

But the G in AGI stands for General. I think the hope is that there is some as-yet-undiscovered algorithm for general intelligence. While I agree that deferring to a subsystem that is an expert in that type of problem is the best way to handle problems, I would hope that the central coordinator could not just delegate but also design new subsystems as needed. Otherwise, what happens when you run out of types of expert problem solvers to use (and still haven't solved the problem well)?

One might argue that a mixture of experts is just the best that can be done, and that it's unlikely the AGI would be able to design new experts itself. However, where do the limited existing expert problem solvers come from? Well - we invented them. Human intelligences. So to argue that an AGI could NOT come up with its own novel expert problem solvers implies there is something ineffable about human general intelligence that can't be replicated by machine intelligence (which I don't agree with).

vrighter

Once I was high and thought of hallucinations as "noise in the output". From that perspective, and given that LLMs are probabilistic machines, halving the noise would probably require 4x the computation. Which seems to track with what I observe: models are getting MUCH larger, but performance is practically at a standstill.

Upvoter33

"If AGI ... is possible"

I don't get this line of thinking. AGI already exists - it's in our heads!

So then the question is: is what's in our heads magic, or can we build it? If you think it's magic, fine - no point arguing. But if not, we will build it one day.

jerojero

The brain is such an intractable web of connections that it has been really difficult to properly make sense of it.

We can't really talk, in concrete terms, about the differences between the intelligence of a dog and the intelligence of a human. It seems as though humans might have more connections and different types of cells, but then again, there are species out there that have types of neurons we don't have and brain regions denser than ours.

And on top of that, dive into a single neuron and you will find a world of complexity. Why a neuron fires or not given a stimulus is an extremely complicated and often stochastic process; that's actually one of the reasons why we use non-linearities in the neural networks we create. But how much nuance are we really capturing?

The way we do mathematics has well-studied neurological patterns; we come out of the box with understandings of the world. And many animals do too; similar neurological patterns are found in different species.

It's incredible to think of the precision and complexity of the tasks a fly undertakes during its life, and we have actually mapped the entire brain (if we can call it that; I would) of a fly: every neuron and every connection the fly has. There are experiments with neural networks where we've tried to imitate these (the brain of a fly has fewer parameters [nodes and edges] than modern LLMs), with very interesting results. But can we say we understand them? Not really.

And finally, I want to bring up something that's not usually considered when it comes to these things: there are a lot of processes at the molecular level in our cells that actually make use of quantum mechanics, and there's a whole field of biology dedicated to studying them. So yeah, maybe we can build it, but first we need to understand what's going on and why, I believe.

bee_rider

What processes in our cells make use of quantum mechanics? (I mean in some sense everything is quantum mechanics, but cells are quite big in a quantum mechanics sense. I’d imagine they are mostly classical).

seadan83

Expert-beginner problem. If you can count a grain of sand and measure the distance of one centimeter, then surely you can measure the exact length of a coastline and count the exact number of grains of sand! (The length and the number of grains go to infinity as you get more detailed.)

It is less magic, just insanely complicated. We therefore very well might not build it one day. The claim that we will solve it one day is not obvious and needs solid evidence. Some cryptographic problems require millions of years of compute to solve; why can't it be the case that AGI requires petayears of compute? A billion-fold increase in compute still won't do it; hence, maybe not ever. Four billion years and a trillion-fold increase in compute might not be enough. (Assuming we have that long. Dawkins was most concerned about humanity surviving the next 500 years.)

trescenzi

GI is in our heads. The A is artificial which means built by humans. They are asking the same question you are.

9rx

> GI is in our heads. The A is artificial which means built by humans.

Humans aren’t built by humans? Where do humans come from, then?

They say the kids aren’t having sex anymore, but I didn’t realize it was because they aren’t aware of the function.

nuancebydefault

Indeed! That's what I have been thinking for a while, but I never had the occasion and/or breath to write it down, and you explained it concisely. Finally some 'confirmation' 'bias'...

georgestrakhov

IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually.

So we should think about these things in terms of how much agency are we willing to give away in each case and for what gain[1].

Then the ecosystem question that the paper is trying to solve will actually solve itself, because it is already the case today that in many processes agency has been outsourced almost fully and in others - not at all. I posit that this will continue, just expect a big change of ratios and types of actions.

[1] https://essays.georgestrakhov.com/artificial-agency-ladder/

HarHarVeryFunny

An agent, or something that has agency, is just something that takes some action, which could be anything from a thermostat regulating the temperature all the way up to an autonomous entity such as an animal going about its business.

Hugging Face have their own definitions of a few different types of agent/agentic system here:

https://huggingface.co/docs/smolagents/en/conceptual_guides/...

As related to LLMs, it seems most people are using "agent" to refer to systems that use LLMs to achieve some goal - maybe a fairly narrow business objective/function that can be accomplished by using one or more LLMs as a tool to accomplish various parts of the task.

khafra

> An agent, or something that has agency, is just something that takes some action, which could be anything from a thermostat regulating the temperature all the way up to an autonomous entity such as an animal going about its business.

I have seen "agency" used in a much more specific way than this: An agent is something that has goals expressed as states of a world, and has an internal model of the world, and takes action to fulfill its goals.

Under this definition, a thermostat is not an agent. A robot vacuum cleaner that follows a list of simple heuristics is also not an agent, but a robot vacuum cleaner with a Simultaneous Localization and Mapping (SLAM) algorithm which tries to clean the whole floor with some level of efficiency in its path is an agent.

I think this is a useful definition. It admits a continuum of agency, just like the huggingface link; but it also allows us to distinguish between a kid on a sled, and a rock rolling downhill.

https://www.alignmentforum.org/tag/agent-foundations has some justification and further elaboration.

sgt101

Hi - have a look at this book if you are interested [1] (Mike Wooldridge, Multi-Agent Systems)

[1] https://amzn.eu/d/6a1KgnL

Here are Mike's credentials: https://www.cs.ox.ac.uk/people/michael.wooldridge/

w10-1

> IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually

Yes, the term is becoming ambiguous, but that's because it's abstracting out the part of AI that is most important and activating: the ability to work both independently and per intention/need.

Per the paper: "Key characteristics of agents include autonomy, programmability, reactivity, and proactiveness.[...] high degree of autonomy, making decisions and taking actions independently of human intervention."

Yes, "the ecosystem will evolve," but to understand and anticipate the evolution, one needs a notion of fitness, which is based on agency.

> So we should think about these things in terms of how much agency are we willing to give away in each case

It's unclear there can be any "we" deciding. For resource-limited development, the ecosystem will evolve regardless of our preferences or ethics according to economic advantage and capture of value. (Manufacturing went to China against the wishes of most everyone involved.)

More generally, the value of AI is not just replacing work. It's giving more agency to one person, avoiding the cost and messiness of delegation and coordination. It's gaining the same advantages seen where a smaller team can be much more effective than a larger one.

Right now people are conflating these autonomy/delegation features with the extension features of AI agents (permitting them to interact with databases or web browsers). The extension vendors will continue to claim agency because it's much more alluring, but the distinction will likely become clear in a year or so.

paulryanrogers

> Manufacturing went to China against the wishes of most everyone involved

Certainly those in China and the executive suites of Western countries wished it, and made it happen. Arguably the western markets wanted it too when they saw the prices dropping and offerings growing.

AI isn't happening in a vacuum. Shareholders and customers are buying it.

rcarmo

I think people keep conflating agency with agents, and that they are actually two entirely different things in real life. Right now agents have no agency - they do not independently come up with new approaches; they're mostly task-oriented.

ocean_moist

Maybe I just don’t understand the article but I really have 0 clue how they go about making their conclusions and really don’t understand what they are saying.

I think the 5 issues they provide under "Cognitive Architectures" are severely underspecified to the point where they really don't _mean_ anything. Because the issues are so underspecified, I don't know how their proposed solution solves their proposed problems. If I understand it correctly, they just want agents (Assistants/Agents) with user profiles (Sims) on an app store? I'm pretty sure this already exists on the ChatGPT store. (Sims == memories/user profiles, Agents == tools/plugins, Assistants == chat interface)

This whole thing is so broad and full of academic (pejorative) platitudes that it's practically meaningless to me. And of course, although completely unrelated, they throw in a reference to symbolic systems. Academic theater.

spiderfarmer

This is publishing for the sake of publishing.

sambo546

The general negativity toward agents makes it read like the problem section of a research proposal ("X isn't good enough, we're going to develop solution Y").

spiderfarmer

That’s exactly what I thought.

antisthenes

It's a 4-page paper trying to give a summary of 40+ years of research on AI.

Of course it's going to be vague and presumptuous. It's more of a high-level executive summary for tech-adjacent folks than an actual research paper.

bob1029

I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system. A human (domain expert) in the loop gives you a reasonable chance of recovering from hallucinations before they spiral entirely out of control.

"LLM as UI" seems to be something hanging pretty low on the tree of opportunity. Why spent months struggling with complex admin dashboard layouts and web frameworks when you could wire the underlying CRUD methods directly into LLM prompt callbacks? You could hypothetically make the LLM the exclusive interface for managing your next SaaS product. There are ways to make this just as robust and secure as an old school form punching application.

barrkel

It's quite tedious to have to write (or even say) full sentences to express intent. Imagine driving a car with a voice interface, including accelerator, brake, indicators and so on. Controls are less verbose and dashboards are more information rich than linear text.

It's difficult to be precise. Often it's easier to gauge things by looking at them while giving motor feedback (e.g. turning a dial, pushing a slider) than to say "a little more X" or "a bit less Y".

Language is poorly suited to expressing things in continuous domains, especially when you don't have relevant numbers that you can pick out of your head - size, weight, color etc. Quality-price ratio is a particularly tough one - a hard numeric quantity traded off against something subjective.

Most people can't specify up front what they want. They don't know what they want until they know what's possible, what other people have done, started to realize what getting what they want will entail, and then changed what they want. It's why we have iterative development instead of waterfall.

LLMs are a good start and a tool we can integrate into systems. They're a long, long way short of what we need.

GiorgioG

Re: LLM as UI: given that I don't trust LLMs to be deterministic, I wouldn't trust them to make the correct API call every time I tell them to do X.

kgeist

I think most users have a fixed set of workflows which usually don't change from day to day, so why not just use LLMs as a macro builder with a natural language interface (one which doesn't require you to know the product's UI well beforehand)? Something like this (rough sketch at the end of this comment):

- you ask LLM to build a workflow for your problem

- the LLM builds the workflow (macro) using predefined commands

- you review the workflow (it can be an intuitive list of commands, understandable by a non-specialist) to weed out hallucinations and misunderstandings

- you save the workflow and can use it without any LLM agents, just by clicking a button - pretty deterministic and reliable

Advantages:

- reliable, deterministic

- you don't need to learn a product's UI, you just formulate your problem using natural language
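
A rough sketch of how that could look; the command vocabulary and the llm_build_workflow stub are made up for illustration, and a real LLM call would go where the stub is:

  COMMANDS = {
      "open_report": lambda args: print("opening report", args["name"]),
      "filter_rows": lambda args: print("filtering where", args["condition"]),
      "export_csv":  lambda args: print("exporting to", args["path"]),
  }

  def llm_build_workflow(problem):
      # Placeholder for the one LLM call that maps a natural-language request
      # onto the predefined command vocabulary above.
      return [
          {"cmd": "open_report", "args": {"name": "Q3 sales"}},
          {"cmd": "filter_rows", "args": {"condition": "region == 'EU'"}},
          {"cmd": "export_csv",  "args": {"path": "eu_q3.csv"}},
      ]

  def run_workflow(steps):
      for step in steps:
          COMMANDS[step["cmd"]](step["args"])  # deterministic replay, no LLM

  workflow = llm_build_workflow("export EU sales from the Q3 report")
  print(workflow)         # the user reviews this plan before saving it
  run_workflow(workflow)  # later runs need no LLM at all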

bob1029

> you review the workflow (it can be an intuitive list of commands, understandable by a non-specialist) to weed out hallucinations and misunderstandings

This is the idea that is most valuable from my perspective of having tried to extract accurate requirements from the customer. Getting them to learn your product UI and capabilities is an uphill battle if you are in one of the cursed boring domains (banking, insurance, healthcare, etc.).

Even if the customer doesn't get the LLM-defined path to provide their desired final result, you still have their entire conversation history available to review. This seems more likely to succeed in practice than hoping the customer provides accurate requirements up-front in some unconstrained email context.

dingnuts

>- you review the workflow (it can be an intuitive list of commands, understandable by a non-specialist)

so you define a DSL that the LLM outputs, and that's the real UI

>- you don't need to learn a product's UI, you just formulate your problem using natural language

yes, you do. You have to learn the DSL you just manifested so that you can check it for errors. Once you have the ability to review the LLM's output, you will also have the ability to just write the DSL to get the desired behavior, at which point that will be faster unless it's a significant amount of typing, and even then, you will still need to review the code generated by the LLM, which means you have to learn and understand the DSL. I would much rather learn a GUI than a DSL.

You haven't removed the UI, nor have you made the LLM the UI, in this example. The DSL (the "intuitive list of commands"... I guess it'll look like Robot Framework, right? That's what human-readable DSLs tend to look like in practice) is the actual UI.

This is vastly more complicated than having a GUI to perform an action.

shekhargulati

This is the same approach we took when we added LLM capability to Appian, a low-code tool. The LLM helped us generate the Appian workflow configuration file; the user reviews it, makes changes if required, and then finally publishes it.

nyrikki

So visual programming x.0?

I am pretty sure PLCs with ladder logic are about the limits of the traditional visual/macro model?

Word-sense disambiguation is going to be problematic with the 'don't need to learn' part above.

Consider this sentence:

'I never said she stole my money'

Now read that sentence multiple times, putting emphasis on each word, one at a time, and notice how the semantic meaning changes.

LLMs are great at NLP, but we still don't have solutions to those NLU problems that I am aware of.

I think to keep maximum generality without severely restricted use cases that a common DSL would need to be developed.

There will have to be tradeoffs made, specific to particular use cases, even if it is better than Alexa.

But I am thinking about Rice's theorem and what happens when you lose PEM.

Maybe I just am too embedded in an area where these problems are a large part of the difficulty for macro style logic to provide much use.

namaria

You're just describing programming with the extra step of going through a high entropy and low bandwidth channel of natural language and hand waving that problem away.

We can "just" write code as well, as we have been doing for several decades.

hitchstory

I don't either, but this can be mitigated by adding guard rails (strictly validating input), double-checking actions with the user, and using it for tasks where a mistake isn't world-ending.

Even then mistakes can slip through, but it could still be more reliable than a visual UI.

There are lots of horrible web UIs I would LOVE to replace with a conversational LLM agent. No. 1 is Jira, and so are No. 2 and No. 3.

deadbabe

They are deterministic at 0 temperature

lokhura

At zero temperature there is still non-determinism due to sampling details and the fact that floating-point addition is not associative, so you will get varying results due to parallelism.
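
To illustrate the floating-point part (plain Python, nothing LLM-specific): summing the same numbers in a different order gives different results, which is why parallel reductions that reorder additions can change the output:

  a, b, c = 0.1, 0.2, 0.3
  left = (a + b) + c    # 0.6000000000000001
  right = a + (b + c)   # 0.6
  print(left == right)  # False: addition order matters in floating point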

BalinKing

(Disclaimer: I know literally nothing about LLMs.) Wouldn't there still be issues of sensitivity, though? Like, wouldn't you still have to ensure that the wording of your commands stays exactly the same every time? And with models that take less discrete data (e.g. ChatGPT's new "advanced voice model" that works on audio directly), this seems even harder.

wkat4242

They are pretty deterministic then but they are also pretty useless at 0 temperature.

ukuina

Not for the leading LLMs from OpenAI and Anthropic.

vrighter

Not really, not in practice. The order of execution is non-deterministic when running on a cluster, a GPU, or more than one CPU core, and rounding errors propagate differently on each run.

pwillia7

I had the same epiphany about LLM as UI while trying to build a front end for an image-enhancer workflow I built with Stable Diffusion. I just about fully built out a Chrome extension and then realized I should just build a 'tool' that Llama can interact with and use Open WebUI as the front end.

quick demo: https://youtu.be/2zvbvoRCmrE

diggan

> I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system

That's a funny definition to me, because doing so would mean the LLM is the agent, if you use the classic definition for "user-agent" (as in what browsers are). You're basically inverting that meaning :)

klabb3

> "LLM as UI" seems to be something hanging pretty low on the tree of opportunity.

Yes if you want to annoy your users and deliberately put roadblocks to make progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It’s not even a secret.

> Why spend months struggling with complex admin dashboard layouts

You can throw something together, and even auto generate forms based on an API spec. People don’t do this too often because the UX is insufficient even for many internal/domain expert support applications. But you could and it would be deterministic, unlike an LLM. If the API surface is simple, you can make it manually with html & css quickly.

Overuse of web frameworks has completely different causes than ”I need a functional thing” and thus it cannot be solved with a different layer of tech like LLMs, NFTs or big data.

wkat4242

> Yes if you want to annoy your users and deliberately put roadblocks to make progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It’s not even a secret.

No, this is because they use the LLM not only as a human interface but also as a reasoning engine for troubleshooting. And they give it way less capability than a human agent, to boot. So all it can really do is serve FAQs and route to real support.

In this case the fault is not with the LLM but with the people that put it there.

TaurenHunter

"More Agents is all you need" https://arxiv.org/abs/2402.05120

I could not find an "Agents considered harmful" paper related to AI, but there is this one: "AgentHarm: A benchmark for measuring harmfulness of LLM agents" https://arxiv.org/pdf/2410.09024

This "Agents considered harmful" is not AI-related: https://www.scribd.com/document/361564026/Math-works-09

ksplicer

When reading Anthropic's blog on agents, I basically took away that their advice is that you shouldn't use them to solve most problems.

https://www.anthropic.com/research/building-effective-agents

"For many applications, however, optimizing single LLM calls with retrieval and in-context examples is usually enough."

retinaros

True, this was also my conclusion in October. Most of the complexity we are building is to fight against the limitations of LLMs. If we could somehow embed all our tools in a single call and have the LLM successfully figure out which tools to call, that would be it and we wouldn't need any of those frameworks or libraries. But it turns out the reality of agents and tool use is pretty stark, and you wouldn't know that looking at the AI influencers spamming X, LinkedIn and YouTube.

However, the state of agents has changed slightly: where we had 25% accuracy in multi-turn conversations, we're now at 50%.

kridsdale1

Morpheus taught me they are quite harmful.

sgt101

Hi - have a look at this book if you are interested [1] (Mike Wooldridge, Multi-Agent Systems)

[1] https://amzn.eu/d/6a1KgnL

Here are Mike's credentials: https://www.cs.ox.ac.uk/people/michael.wooldridge/

dist-epoch

Real agents have never been tried

beezle

For those who don't want to download the PDF directly and prefer to start with the abstract: https://arxiv.org/abs/2412.16241

danielmarkbruce

Why post this paper? It says nothing, it's a waste of people's time to read.

duxup

Even just the definition of an Agent (maybe imperfect) made it worthwhile for me.

sgt101

Hi - have a look at this book if you are interested [1] (Mike Wooldridge, Multi-Agent Systems)

[1] https://amzn.eu/d/6a1KgnL

Here are Mike's credentials: https://www.cs.ox.ac.uk/people/michael.wooldridge/

danielmarkbruce

I'm not sure it's even good though... the input doesn't need to come from a user. I have an "agent" which listens for an event in financial markets and then goes and does some stuff.

In practice the current usage of "agent" is just: a program which does a task and uses an LLM somewhere to help make a decision as to what to do and maybe uses an LLM to help do it.
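
Something like this, as a sketch; the event feed and the llm_classify stub are hypothetical placeholders, with the LLM call sitting only at the decision point:

  def wait_for_market_event():
      # Stand-in for a real market data feed; returns one fake event.
      return {"symbol": "ACME", "move_pct": -7.2}

  def llm_classify(event):
      # Placeholder for the LLM call that decides what to do with the event.
      return "alert" if abs(event["move_pct"]) > 5 else "ignore"

  def act(decision, event):
      if decision == "alert":
          print(f"alerting on {event['symbol']} ({event['move_pct']}%)")

  evt = wait_for_market_event()
  act(llm_classify(evt), evt)  # a real agent would loop or subscribe to a feed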

jokethrowaway

I don't get the hype about Agents.

It's just calling an LLM n times with slightly different prompts

Sure, you get the ability to correct previous mistakes, it's basically a custom chain of thought - but errors compound and the results coming from agents have a pretty low success rate.

Bruteforcing your way out of problems can work sometimes (as evinced by the latest o3 benchmarks) but it's expensive and rarely viable for production use.

grahamj

> It's just calling an LLM n times with slightly different prompts

It can be, but ideally each agent’s model, prompts and tools are tailored to a particular knowledge domain. That way tasks can be broken down into subtasks which are classified and passed to the agents best suited to them.

Agree RE it being bruteforce and expensive but it does look like it can improve some aspects of LLM use.
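
As a toy sketch of that routing idea (the domain configs are made up, and classify_domain is a keyword stub standing in for a cheap model call):

  AGENTS = {
      "legal":   {"system_prompt": "You are a contracts analyst.", "tools": ["clause_search"]},
      "finance": {"system_prompt": "You are a financial analyst.", "tools": ["price_lookup"]},
  }

  def classify_domain(subtask):
      # Placeholder classifier: keyword match instead of a model call.
      return "legal" if "contract" in subtask.lower() else "finance"

  def dispatch(subtask):
      domain = classify_domain(subtask)
      agent = AGENTS[domain]
      # A real system would now call the model with agent["system_prompt"]
      # and only the tools listed for that domain.
      return {"domain": domain, "agent": agent, "subtask": subtask}

  print(dispatch("Review the indemnity clause in this contract"))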

retinaros

That is just like having a for loop per domain.

mindcrime

> It's just calling an LLM n times with slightly different prompts

That's one way of building something you could call an "agent". It's far from the only way. It's certainly possible to build agents where the LLM plays a very small role, or even one that uses no LLM at all.

retinaros

That's a workflow

pwillia7

How would the Sims that contain the user prefs and whatnot not have the same issues described in the paper as the agents themselves?

nowittyusername

With time, they will get a lot better. IMO, the biggest hurdle is that agents currently lack good implementations of function-calling capabilities. LLMs should be used as reasoning engines and everything else should be offloaded to tool use. This will drastically reduce hallucinations and errors in math and all the other areas.
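
As a sketch of the offloading idea (llm_extract_expression is a stand-in for the model call; the arithmetic itself is done by exact, deterministic code):

  import ast
  import operator

  OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
         ast.Mult: operator.mul, ast.Div: operator.truediv}

  def safe_eval(expr):
      # Deterministic arithmetic tool: evaluates +, -, *, / on numbers only.
      def walk(node):
          if isinstance(node, ast.BinOp) and type(node.op) in OPS:
              return OPS[type(node.op)](walk(node.left), walk(node.right))
          if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
              return node.value
          raise ValueError("unsupported expression")
      return walk(ast.parse(expr, mode="eval").body)

  def llm_extract_expression(question):
      # Placeholder: a real LLM would translate the question into an expression.
      return "1234 * 5678"

  print(safe_eval(llm_extract_expression("What is 1234 times 5678?")))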

lionkor

Do they reason, though?

ripped_britches

I can imagine really powerful agents this year or next, in theory. By agents I mean (not a thermostat, but) a system that can go off and complete async tasks on your behalf. But in practice I don't have any idea how we will solve for prompt injection attacks. Hopefully someone cracks it.

Jerrrry

> solve for prompt injection attacks

It is essentially the same Code as Data problem as always.

cratermoon

"AI will soon be able too..."