Deep Agents
39 comments
August 1, 2025
web-cowboy
As I think through this, I agree with others mentioning that "deep agents" still sounds a lot like agents+tools. I guess the takeaway for me is:
1. You need a good LLM for base knowledge.
2. You need a good system prompt to guide/focus the LLM (create an agent).
3. If you need some functionality that doesn't make any decisions, create a tool.
4. If the agent + tools flow gets too unwieldy, break it down into smaller domains by spawning sub-agents with focused prompts and (fewer?) tools.
_andrei_
ah, deep agents = agents with planning + agents as tools => so regular agents.
i hate how LangChain has always tried to make things that are simple seem very complicated, and all the unnecessary new terminology and concepts they've pushed, but whatever sells LangSmith.
itsafarqueue
I used to consult on this type of thing. I’m not entirely convinced this is what’s happening here but it’s close enough, and is a well trod playbook - dress up the mundane in theatre and performance, create a taxonomy that’s specific to you, then sell access to the thing.
Next step is to try to flood the SEO zone with your thing. It's great if you can piggyback on other key terms (deep *, agents), and.. I'm already bored writing this up, it's so [what's the word for sheer resigned exhaustion at the capitalist corporate soul-kill that is this type of work]
noodletheworld
This matches my expectations.
Now that it's increasingly clear that writing MCP servers isn't a winning strategy, people need a new way to jump on the bandwagon as easily as possible.
Writing your own agent like Gemini and Claude Code is the new hotness right now.
- low barrier to entry (tick)
- does something reasonably useful (tick)
- doesn't require any deep AI knowledge or skill (tick)
- easy to hype (tick)
It's like "cursor but for X" but easier to ship.
We're going to see a tonne of coding agents built this way, but my intuition (and what I've seen so far) is that they're not actually introducing anything novel.
Maybe having a quick start like this is good, because it drops the value of an unambitious direct claude code clone to zero.
I like it.
revskill
I created a simple open agent at https://github.com/revskill10/openagent-cli
manx
I'm also in the process of creating a general purpose agent cli+library in rust: https://github.com/fdietze/alors
Still work in progress, but I'm already using it to code itself. Feedback welcome.
shmatt
At least from what I noticed, Junie from JetBrains was the first to use a very high-quality to-do list, and it quickly became my favorite.
I haven't used it since it became paid, but back then Junie was slow and thoughtful, while Cursor was constantly re-writing files that worked fine, and Claude was somewhere in the middle
tough
Cursor added a UI for the todo list and encourages its agent to use it (it's great UX, but you can't actually see it as a file).
Kiro from Amazon does both tasks (in tasks.md) and specs.
Too many tools these days; choose what works for you.
jayshah5696
Sub-agents adding isolated context is the real deal; the rest is just a LangGraph ReAct agent.
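A minimal sketch of why that isolation matters: the sub-agent gets a fresh message list, and only its summary flows back to the parent, so the parent's context never fills with the child's intermediate work. (These are hypothetical stubs, not any framework's real API.)

```python
def run_llm_loop(messages):
    # Stand-in for an LLM loop that may generate many intermediate turns.
    messages.append({"role": "assistant", "content": "...lots of scratch work..."})
    return "short summary of findings"

def call_subagent(task: str) -> str:
    child_messages = [{"role": "user", "content": task}]  # isolated context
    return run_llm_loop(child_messages)                   # only the summary escapes

parent_messages = [{"role": "user", "content": "write a report"}]
summary = call_subagent("research section 1")
parent_messages.append({"role": "tool", "content": summary})
print(len(parent_messages))  # the child's scratch work never entered the parent
```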
PantaloonFlames
This is valuable but not really a novel idea.
gsmt
Offloading context to a shared file system sounds good, but at what point does it start getting messy when multiple sub-agents work in parallel?
seabass
Is there more info on how the todo list tool is a noop? How exactly does that work?
crawshaw
If you want to see it in action in some code, our agent Sketch uses a TODO list tool: https://github.com/boldsoftware/sketch/blob/main/claudetool/...
It is relatively easy to get the agent to use it; most of the work for us is surfacing it in the UI.
JyB
Same question. I don't understand what they mean by that. It obviously seems pretty central to how Claude Code is so effective.
lmeyerov
i think he means it's 'just' a thin concat
most useful prompt stuff seems 'simple' to implement ultimately, so it's more impressive to me that such a simple idea of TODO goes so far!
(agent frameworks ARE hard in serious settings, don't get me wrong, just for other reasons. ex: getting the right mix & setup is devilishly hard, as are infra layers below like multitenancy, multithreading, streaming, cancellation, etc.)
re: the TODO list, strong agree on criticality. it's flipped how we do louie.ai for stuff like speed running security log analysis competitions. super useful for preventing CoT from going off the rails after only a few turns.
a fun 'aha' for me there: nested todo's are great (A.2.i...), and easy for the LLM b/c they're linearized anyways
You can see how we replace Claude Code's prompts for our own internal vibe-coding usage, which helps with Claude's constant compactions as a heavy user (= assuages the issue of the ticking timer toward a lobotomy): https://github.com/graphistry/louie-py/blob/main/ai/prompts/...
ttul
The context will contain a record that the tool call took place. The todo list is never actually fetched.
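A minimal sketch of what such a no-op tool might look like (the names and schema here are hypothetical, not Claude Code's actual implementation). The "write" does nothing; the value comes from the tool-call arguments themselves persisting in the conversation transcript.

```python
def todo_write(todos: list[dict]) -> str:
    """Accept a TODO list and immediately discard it.

    The model still 'sees' the list because the tool call, with its
    arguments, is appended to the context as part of the transcript.
    """
    return f"Recorded {len(todos)} todos."

# Simulated transcript: the tool call itself is what carries the plan.
call_args = {"todos": [
    {"task": "read the config file", "status": "pending"},
    {"task": "fix the parser", "status": "pending"},
]}
history = [
    {"role": "assistant", "tool_call": {"name": "TodoWrite", "args": call_args}},
    {"role": "tool", "content": todo_write(**call_args)},
]
print(history[-1]["content"])  # -> Recorded 2 todos.
```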
TrainedMonkey
My understanding is that it is basically a prompt about making a TODO list.
kobstrtr
If it were a noop, I feel like there wouldn't be a need to have TodoRead as a tool, since TodoWrite exists. Would love more info on whether this is really a noop.
aabhay
My guess is the todo list is carried across “compress” points where the agent summarizes and restarts with fresh context + the summary
revskill
Weird. The most interesting part is totally hidden: how you manage tool calls, from parsing to execution.
storus
"I hacked on an open source package (deepagents) over the weekend." Thanks but no thanks.
epolanski
Some of the biggest software in use today was hacked together over a few days in its first versions. Git is a famous one.
owebmaster
Absolutely not. Linus had Git in his brain; it took a few days to write a first version, but multiple years of learning came first.
yawnxyz
most of these agents are still fundamentally simple while loops; it shouldn't really take longer than a weekend to build one
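For illustration, here is roughly what that "simple while loop" shape looks like, with the model call stubbed out (in a real agent, `call_llm` would hit an actual LLM API and return either a tool call or a final answer; the names here are made up):

```python
def call_llm(messages):
    # Stub: a real implementation would call an LLM and return either
    # {"type": "tool", "tool": ..., "args": ...} or a final answer.
    return {"type": "final", "content": "done"}

TOOLS = {
    "echo": lambda text: text,  # toy tool registry
}

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):          # the "simple while loop"
        reply = call_llm(messages)
        if reply["type"] == "final":    # model is done: return its answer
            return reply["content"]
        # otherwise execute the requested tool and feed the result back
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("say hi"))  # -> done
```

Everything else (planning, sub-agents, context offloading) is layered on top of this loop.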
SCUSKU
Hacker hacks on project and gets posted to Hacker News. Commenter on Hacker News: No thanks, no hacking please.
storus
It's on LangChain's official page. LangChain is a framework that looks like it was hacked together over a weekend by a fresh grad and brought a lot of pain to agentic development, and this just feels like piling more pain on top of it.
Author here!
Main takeaways (which I'd love feedback on) are:
There is a series of recent agents (Claude Code, Manus, deep research) that execute tasks over long time horizons particularly well.
At the core, it's just an LLM running in a loop calling tools... but when you try to do this naively (or at least, when I try), the LLM struggles with long/complex tasks.
So how do these other agents accomplish it?
These agents all do similar things, namely:
1. They use a planning tool
2. They use sub agents
3. They use a file system like thing to offload context
4. They have a detailed system prompt (prompting isn't dead!)
I don't think any of these things is individually novel... but I also don't think they are commonplace when building agents. And the combination of them is (I think) an interesting insight!
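A rough sketch of how those four ingredients fit together (this is not the deepagents package's real API, just an illustration of the shape; all names are hypothetical):

```python
# 4. A detailed system prompt guiding the whole agent.
SYSTEM_PROMPT = "You are a careful agent. Plan first, then act."

class VirtualFS:
    """3. A file-system-like store for offloading context."""
    def __init__(self):
        self.files: dict[str, str] = {}
    def write(self, path: str, content: str) -> None:
        self.files[path] = content
    def read(self, path: str) -> str:
        return self.files.get(path, "")

def write_todos(todos: list[str], fs: VirtualFS) -> str:
    """1. Planning tool: persist the plan so it stays in scope."""
    fs.write("todos.md", "\n".join(f"- [ ] {t}" for t in todos))
    return "plan saved"

def spawn_subagent(task: str, fs: VirtualFS) -> str:
    """2. Sub-agent: a fresh, focused context that shares only the FS."""
    # A real sub-agent would run its own LLM loop; here we just record a result.
    fs.write(f"results/{task}.md", f"result of: {task}")
    return fs.read(f"results/{task}.md")

fs = VirtualFS()
write_todos(["research topic", "draft report"], fs)
print(spawn_subagent("research topic", fs))  # -> result of: research topic
```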
Would love any feedback :)