Building a Deep Research Agent Using MCP-Agent
16 comments · September 10, 2025

diggan
I gotta say, having white blurry blobs of something floating in the background behind white/grey text maybe wasn't the best design choice out there.
Nonetheless, I tried to find the actual APIs/services/software used for the "search" part, as I've found that to be the hardest part to actually get right (at least for as-local-as-possible usage) in my own "Deep Research Agent".
I've experimented with Brave's search API, which worked OK but seems pricey for agent usage. I'm currently experimenting with my own (local) YaCy instance, which actually gives me higher-quality artifacts at the end, since there are no rate limits and the model can do hundreds of search calls without me worrying about the cost. It isn't very quick at picking up some things like news, but otherwise it works OK too.
What is the author doing here for the actual searching? Does anyone else have other ideas/approaches to this?
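For reference, a minimal sketch of what querying a local YaCy instance could look like, assuming YaCy's default port 8090 and its `yacysearch.json` endpoint; the exact parameter and response field names may vary between YaCy versions, so treat them as assumptions:

```python
import requests

def yacy_search(query: str, limit: int = 10) -> list[dict]:
    """Query a local YaCy instance via its JSON search API."""
    resp = requests.get(
        "http://localhost:8090/yacysearch.json",
        params={"query": query, "maximumRecords": limit, "resource": "global"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # YaCy wraps results in an OpenSearch-style "channels" -> "items" structure.
    channels = data.get("channels") or [{}]
    items = channels[0].get("items", [])
    return [
        {"title": i.get("title"), "url": i.get("link"), "snippet": i.get("description")}
        for i in items
    ]

if __name__ == "__main__":
    for hit in yacy_search("deep research agents"):
        print(hit["title"], "-", hit["url"])
```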
saqadri
Haha, I didn't have control over the blog website, just the content. The README and code are the ultimate source of truth (and easier to read): https://github.com/lastmile-ai/mcp-agent/blob/main/src/mcp_a...
So the core idea is that the Deep Orchestrator is pretty unopinionated about what to use for searching, as long as it is exposed over MCP. I tried it with a basic fetch server that's one of the reference MCP servers (with a single tool called `fetch`), and also tried it with Brave.
I think the folks at Jina wrote some really good stuff on the actual search part: https://jina.ai/news/a-practical-guide-to-implementing-deeps... -- including how to do page/URL ranking over the course of the flow. My recommendation would be to do all of that inside an MCP server itself. That keeps the "deep orchestrator" architecture fairly clean, and you can plug in increasingly sophisticated search techniques over time.
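A minimal sketch of that approach, assuming the official `mcp` Python SDK and its FastMCP helper; the server name, tool name, and the stubbed backend call are illustrative, not part of mcp-agent itself:

```python
# search_server.py - expose a search tool over MCP so the orchestrator
# stays agnostic about how search/ranking is actually implemented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("web-search")

@mcp.tool()
def web_search(query: str, max_results: int = 5) -> list[dict]:
    """Search the web and return ranked results as {title, url, snippet} dicts."""
    # Plug in Brave, SearXNG, YaCy, Jina-style reranking, etc. here.
    # Stubbed so the sketch runs without any external dependency.
    results = [
        {"title": f"Placeholder result for {query!r}", "url": "https://example.com", "snippet": ""}
    ]
    return results[:max_results]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The orchestrator then only sees a `web_search` tool, so swapping in better retrieval or ranking later doesn't touch the agent side.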
Zetaphor
Self-host an instance of SearXNG [1], either locally or on a remote server, with a simple Docker container and use its JSON API [2]. You have to enable the JSON API in the config manually [3].
[1] https://docs.searxng.org/admin/installation-docker.html#inst...
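For illustration, a sketch of hitting that JSON API, assuming a local instance on port 8080 and that the `json` format has already been enabled in settings.yml; adjust the host/port to your setup:

```python
import requests

def searxng_search(query: str, limit: int = 10) -> list[dict]:
    """Query a self-hosted SearXNG instance via its JSON API."""
    resp = requests.get(
        "http://localhost:8080/search",
        params={"q": query, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return [
        {"title": r.get("title"), "url": r.get("url"), "snippet": r.get("content")}
        for r in results[:limit]
    ]
```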
ilovefood
Great write-up! Gives me a few ideas for a governance bot that I'm working on. Thanks for sharing :)
asail77
A good model for the planner seems pretty important; which models are best?
saqadri
OP here -- I think the general principle I would recommend is using a big reasoning model for the planning phase. I think Claude Code and other agents do the same. The reason this is important is because the quality of the plan really affects the final result, and error rates will compound if the plan isn't good.
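To make the principle concrete, a rough sketch of the split: a large reasoning model writes the plan once, and cheaper models execute the individual steps. Model names and the helper functions here are illustrative assumptions, not mcp-agent's actual API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PLANNER_MODEL = "o3"          # big reasoning model: plan quality compounds downstream
WORKER_MODEL = "gpt-4o-mini"  # cheaper model for individual subtasks

def make_plan(objective: str) -> str:
    """Ask the large model to break the objective into research steps."""
    resp = client.chat.completions.create(
        model=PLANNER_MODEL,
        messages=[
            {"role": "system", "content": "Break the objective into a numbered list of research steps."},
            {"role": "user", "content": objective},
        ],
    )
    return resp.choices[0].message.content

def run_step(step: str) -> str:
    """Execute a single step with the cheaper worker model."""
    resp = client.chat.completions.create(
        model=WORKER_MODEL,
        messages=[{"role": "user", "content": step}],
    )
    return resp.choices[0].message.content
```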
haniehz
Based on the article, it seems like a good reasoning model such as GPT-5 or Opus 4.1 would be a solid choice for the planner. I wonder if the gpt-oss reasoning models would do well.
diggan
Personally I've been using GPT-OSS-120B locally with `reasoning_effort` set to `high`, and it blows pretty much every other local model out of the water, though it takes a long time before it eventually produces a proper final reply. But for fire-and-forget jobs like "Create a well-researched report on X from perspective Y" it works really well.
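For anyone curious, a sketch of what that looks like against an OpenAI-compatible local server (e.g. llama.cpp's llama-server or vLLM serving gpt-oss-120b). Whether `reasoning_effort` is honored depends on the server and chat template, so treat that parameter as an assumption; some setups want "Reasoning: high" in the system prompt instead.

```python
from openai import OpenAI

# Local OpenAI-compatible endpoint; adjust base_url/port and model name to your server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="gpt-oss-120b",
    reasoning_effort="high",  # assumption: the server passes this through to the chat template
    messages=[
        {"role": "user", "content": "Create a well-researched report on X from perspective Y."},
    ],
)
print(resp.choices[0].message.content)
```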
cyberninja15
What machine are you running GPT-OSS-120B on? I'm currently only able to get GPT-OSS-20B working on my MacBook using Ollama.
koakuma-chan
Gemini 2.5 Pro is also a great reasoning model; I still prefer it over GPT-5.
luckydata
Gemini is great; it's just incredibly clumsy at tool use, and that's why it fails so often in practice. I'm looking forward to the next version, which will surely address this; it's a big issue internally too (I'm a recent xoogler).