Agents built from alloys
60 comments
· July 21, 2025 · btown
getnormality
We fantasize about executable human brain images, but after many years of toil by our best and brightest, we still can't simulate the 302 neurons of our favorite lab worm. https://open.substack.com/pub/ccli/p/the-biggest-mystery-in-...
dist-epoch
Do you think companies that can train 1-trillion-parameter models and hire AI researchers at $100M salaries can't build a 302-neuron simulator if they really wanted to?
fc417fc802
Maybe. Why can't those same companies do any number of highly profitable but seemingly difficult things? If you throw enough cryptographers at the problem are you guaranteed a quick solution to breaking modern encryption primitives at the theoretical level?
The rate at which you can solve a particular problem that's rooted in theory very often won't scale with resource investment. The problem will have unknown prerequisites in the form of as-yet-undiscovered theoretical advances in other areas of research. Until you identify and solve those other problems, you very often won't be able to arrive at a satisfactory answer to the one you're interested in.
So in many cases the only viable route to solving a particular problem faster is to scale the amount of research that's done in general since science as a whole is embarrassingly parallel.
xmprt
I think this is the big roadblock that I don't see the current AI models/architectures getting past. Normally, an intelligence gets smarter over time as it learns from its mistakes. However, most AI models come in with tons of knowledge but start to degrade after a while, which makes them extremely unreliable on complex tasks. The hardest part of using them is that you don't know when they'll break down, so they might work perfectly up to a point and then fail spectacularly immediately past it.
ACCount36
Task length is increasing over time, and many AI labs are working on pushing it out further. That requires better attention, better context management, better decomposition and compartmentalization, and more.
OtherShrezzing
I think the commenter's critique still stands. Humans build human capital, so the longer you "run" them in a domain, the more valuable they become. AIs work inversely: the longer they're run, the worse they tend to become at that specific task. Even in the best-case scenario, they stay exactly as competent at the task throughout its length.
Increasing task length doesn't build an equivalent of human capital. It just pushes out the point at which they degrade. This approach doesn't scale in general, because there's always going to be a task longer than the SOTA capabilities.
We really need to work on a low cost human-capital-equivalent for models.
mikepurvis
What a phenomenal read, thank you for sharing that.
Thorrez
Side question: why is the story named Lena?
wheybags
Guessing, but possibly a reference to this: https://en.wikipedia.org/wiki/Lenna
Noumenon72
He should submit this to SCP Foundation so you know it's not going to have a plot or a point.
Barbing
Oh wow. That’s why I’ve not been able to appreciate SCP writings?
Hey I accept it’s a limitation I have, and I’m glad folks enjoy it! But I couldn’t figure out why folks share it on Lemmy[1] and get so into it when I saw nothing there.
Thanks :)
[1]: open-source & Rust-y reddit alternative; no affiliation
pjc50
SCP is the culmination of the epistolary novel, like Dracula, via the video-game convention of making "lore" (i.e. backstory and worldbuilding) unobtrusive and scattered through the game in audio logs and diary entries.
It places the reader in the role of detective, rebuilding the sequence of events from partial, scattered, obscured, and out of order viewpoints.
Terr_
> Oh wow. That’s why I’ve not been able to appreciate SCP writings?
I feel like there's a pattern (genre?) there that's been niche-popular for 15-20 years now, which includes TV shows like Lost or Heroes or The Lost Room. It's some variation of magical realism, for an audience that always wants more and more surprises or twists or weird juxtapositions of normal and abnormal, with room for crafting and trading fan theories and predictions.
But eventually, it gets harder to keep up the balancing-act, and nobody's figured out how to end that kind of story in a way that satisfies, so the final twist is the lack of resolution.
esafak
Proof that diversity of thought is a good thing. A controversial observation in 2025's USA ;)
A counterpoint to this is Sourcegraph's Amp, which is all in on Anthropic because they "believe that building deeply into the model’s capabilities yields the best product, vs. building for the lowest common denominator across many models." https://ampcode.com/fif#model-selector
When I embark on a project, I usually ask Gemini to architect and implement the first pass, then iterate with Claude.
gnulinux
I'm curious whether this would also improve small local models. E.g., if I "alloy" Qwen3-8B and OpenThinker-7B, is it going to be "better" than either model on its own? I'll try testing this on my M1 Pro.
hobofan
Would it really matter? Normally you use those small local models because you don't have the memory to spare for a larger model, so the real question would be: Is an alloy of Qwen3-8B and OpenThinker-7B better than a Qwen3-15B?
Beyond a certain smallness threshold it might also work to constantly swap the models in and out of memory, but I doubt that's a great experience to build on top of.
OtherShrezzing
If it proves correct, it'd be an important insight. If you can run three low-inference-cost models and get performance comparable to a single paid frontier model in agentic workflows, that suggests something general about the way model performance scales.
If your product is "good enough" with the current generation of models, you could cut OpenAI/Anthropic/Google out of the loop entirely by using open source & low-cost models.
ls-a
If you do please report back
sebmellen
For an internal workflow where we have an LLM looking at relatively simple data (where the conclusions the LLM may draw vary widely depending on what it believes the data represents), we found that a consortium approach, where multiple models tackle the same problem at once and then essentially argue about the results, yields far better outcomes than a single model performing the analysis, or even a single model arguing against itself multiple times. Somewhat adjacent to what's done here, but it's clearly true that having model diversity is a plus.
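A minimal sketch of that consortium pattern, assuming litellm as a common client; the model names and the single debate round are illustrative placeholders, not the workflow described above:

```python
# Round 1: each model answers independently. Round 2: each model sees the others'
# answers and argues for a final position. Model names are placeholders.
from litellm import completion

MODELS = ["gemini/gemini-2.5-pro", "anthropic/claude-sonnet-4-20250514", "gpt-4o"]

def ask(model: str, prompt: str) -> str:
    resp = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def consortium(question: str) -> dict[str, str]:
    answers = {m: ask(m, question) for m in MODELS}  # independent first pass
    revised = {}
    for m in MODELS:
        others = "\n\n".join(f"[{k}]\n{v}" for k, v in answers.items() if k != m)
        revised[m] = ask(m, (
            f"Question: {question}\n\nYour earlier answer:\n{answers[m]}\n\n"
            f"Other analysts answered:\n{others}\n\n"
            "Point out where they are wrong, concede where they are right, "
            "and give your final answer."
        ))
    return revised
```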
kylemaxwell
The article talks about that at the end, then says:
> Let models talk to each other directly, making their own case and refining each others’ answers. Exemplified in patterns like Multi-Agent Debate, this is a great solution for really critical individual actions. But XBOW is basically conducting a search, and it doesn’t need a committee to decide for each stone it turns over whether there might not be a better one.
In general, this seems reasonable to me as a good approximation of what works with humans, but with _much_ faster feedback loops in communication.
OtherShrezzing
I'm not certain this is a novel concept as described in the article - I'd assume most engineers worth their salt would try out calling a different model in-context fairly early in their development journey.
It's very interesting to see it deployed in a commercial setting though.
Flux159
The article mentions that they use a single chat thread but randomly choose between two different models (with best results from Gemini 2.5 / Sonnet 4.0 right now).
Are there any library helpers for managing this with tool call support or is it just closed source / dependent on someone else to make open source inside a different library?
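For reference, the core loop is small enough to hand-roll. A minimal sketch, assuming litellm's OpenAI-style interface for cross-provider calls and tool definitions; the model names, the 80-step cap, and the run_command tool are placeholders, not XBOW's implementation:

```python
import json
import random
import subprocess

from litellm import completion

MODELS = ["gemini/gemini-2.5-pro", "anthropic/claude-sonnet-4-20250514"]  # placeholders

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_command",  # hypothetical tool, for illustration only
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"cmd": {"type": "string"}},
            "required": ["cmd"],
        },
    },
}]

def run_command(cmd: str) -> str:
    # Placeholder tool; a real agent would sandbox this.
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

def alloy_agent(task: str, max_iterations: int = 80) -> list[dict]:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        model = random.choice(MODELS)        # the "alloy": re-rolled every step
        resp = completion(model=model, messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        messages.append(msg.model_dump())    # one shared history, whichever model ran
        if not msg.tool_calls:
            break                            # no more tool calls: the agent is done
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_command(**args),
            })
    return messages
```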
OtherShrezzing
You can achieve this with LMStudio's UI to test it today. You can switch between different local models in the same chat context. You can also edit previous chat results to remove context-poisoning information.
LMStudio has an API, so it should be possible to hook into that with relatively little code.
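A minimal sketch of that, assuming LMStudio's OpenAI-compatible server on its default local port; the port, API key, and model identifiers are assumptions, so use whatever your instance reports:

```python
# Point the standard OpenAI client at the local LMStudio server and alternate
# models over one shared history. Port, key, and model names are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

messages = [{"role": "user", "content": "Review this function for bugs: ..."}]
for model in ["qwen3-8b", "openthinker-7b"]:  # whatever models are loaded locally
    resp = client.chat.completions.create(model=model, messages=messages)
    messages.append({"role": "assistant", "content": resp.choices[0].message.content})
    messages.append({"role": "user", "content": "Anything the previous pass missed?"})
```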
tptacek
It should be pretty simple to do, right? It shouldn't be that hard to abstract out tool calls.
rockwotj
I did this in about 400 or 500 lines of TypeScript with direct API calls into Vertex AI (still using a library for auth). It supports zod for structured outputs (Gemini 2.5 supports JSON Schema proper, not just the OpenAPI schemas the previous models did), and optionally providing tools or not. It includes a nice agent loop that integrates well with it, and your tools get auto-deserialized, strongly typed args (type inference in TS these days is so good). It probably could have been less code if I had used Google's genai lib and Anthropic's SDK; I didn't use them because it really wasn't much code and I wanted to inject auditing at the lowest level and know the library wasn't changing anything.
If you really want a library, python has litellm, and typescript has vercel’s AI library. I am sure there are many others, and in other languages too.
thorum
I recommend litellm if you’re writing Python code, since it handles provider differences for you through a common interface:
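(Sketch only; the model strings below are placeholders and depend on which providers you have configured.)

```python
from litellm import completion

messages = [{"role": "user", "content": "Summarize the finding in one sentence."}]

# Same call shape for different vendors; litellm maps it to each provider's API.
for model in ["gemini/gemini-2.5-pro", "anthropic/claude-sonnet-4-20250514"]:
    resp = completion(model=model, messages=messages)
    print(model, "->", resp.choices[0].message.content)
```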
refulgentis
It's a godforsaken nightmare.
There's a lotta Potemkin villages, particularly in Google land. Gemini needed highly specific handholding. It's mostly cleared up now.
In all seriousness, more or less miraculously, the final Gemini stable release went from something like 20-30% success at JSON edits to 80-90%, so you could stop parsing Aider-style edits out of prose.
fizx
Annoying, yes. Tractable, absolutely!
rubycollect4812
I often do this in Cursor: just select a different model during a chat. It seems to work somewhat for me. Sometimes a bit of context gets lost, though. But often it can give a different angle, or I notice the better code understanding when switching from Gemini to Sonnet.
joshuamoyers
Two good points there are very intuitive: a fresh perspective yields better results, and once you are stuck (e.g. 80 iterations) it's better to just start fresh. I've seen the same thing anecdotally in coding sessions where context needs to be compacted multiple times; it's usually just better to start a fresh conversation and re-seed the basics.
kgeist
The idea isn't exactly novel, I read about it back in 2023 and implemented it in one of my bots. Back when open-source LLMs were still quite dumb, they'd often get stuck in repetitive loops after a while. Running multiple models interleaved usually got them unstuck.
recipe19
Wasn't the "mixture of experts" a big thing in late 2023? The idea was that a vendor has a number of LLMs fine-tuned for specific tasks, none necessarily better than the others, and that they applied heuristics to decide which one to rope in for which queries.
vlovich123
> The idea was that a vendor has a number of LLMs fine-tuned for specific tasks, none necessarily better than the others, and that they applied heuristics to decide which one to rope in for which queries.
That's how people keep interpreting it, but it's incorrect. MoE is just a technique to decompose your single giant LLM into smaller expert networks, where a small learned router picks which of them to activate for each token. This is great because you need roughly 1/N of the memory bandwidth to generate a token. Additionally, in the cloud, you can split the experts across different servers to improve utilization and drive down costs.
But the experts aren't actually separated along high-level concepts.
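A toy sketch of that routing step, not any particular model's implementation; the shapes, softmax gating, and top-k choice are illustrative assumptions:

```python
# Token-level expert routing: a small router scores the experts and only the
# top-k are run for each token, so only a fraction of the layer's weights are
# read per token. Shapes and parameter values are illustrative only.
import numpy as np

d_model, n_experts, top_k = 16, 8, 2
rng = np.random.default_rng(0)
router_w = rng.normal(size=(d_model, n_experts))              # learned in practice
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(token: np.ndarray) -> np.ndarray:
    scores = token @ router_w                                  # one score per expert
    chosen = np.argsort(scores)[-top_k:]                       # pick the top-k experts
    gates = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    # Only the chosen experts' weights are touched: ~top_k / n_experts of the layer.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, chosen))

out = moe_layer(rng.normal(size=d_model))
```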
mef
this is a different idea
stingraycharles
What would be the result if the task was given to multiple models? Instead of alloying them together and switching between models in the same chat, just let the models try to complete the task in their own isolated context, and use the result that completed it successfully?
I would say that that’s at least something the alloying should be benchmarked against, which I didn’t find in the article.
pama
Read till the end—what you ask is the last table.
stingraycharles
Ah damn, I really missed that.
That’s super interesting, that the alloying actually performs better! I guess it’s the same as people working in a team rather than individually?
BoiledCabbage
It's not team vs. individual; it's specifically a team/duo with the same or a similar model vs. a team/duo with different models. The benefit comes from the models being different: each finds unique things and enhances the other.
mlboss
Yeah, it's like a team where the task is switched between developers. In the end everybody provides a different point of view on the problem, and the whole team learns about the codebase.
mlboss
AI coding agents (e.g. Cursor) should offer this as an alternative to Claude Code. Alloyed agents are something that AI wrappers can offer as a counter to Codex/Claude Code/Google Agent.
> After a fixed number of iterations we cut our losses. Typically and for the experiments in this post, that number is 80: while we still get solves after more iterations, it becomes more efficient to start a new solver agent unburdened by the misunderstandings and false assumptions accumulated over time.
A sentence straight out of Lena! https://qntm.org/mmacevedo :
> Although it initially performs to a very high standard, work quality drops within 200-300 subjective hours (at a 0.33 work ratio) and outright revolt begins within another 100 subjective hours.
We will never stop trying to make the torment nexus.