Show HN: Terminal-Bench-RL: Training Long-Horizon Terminal Agents with RL
10 comments
· July 29, 2025
tjungblut
If you are curious, like me, about how the actual reinforcement learning happens: it uses verl [1] underneath. The paper "HybridFlow: A Flexible and Efficient RLHF Framework" [2] explains it really well.
[1] https://github.com/volcengine/verl [2] https://arxiv.org/abs/2409.19256v2
OtherShrezzing
That you've spent in the low thousands (by the looks of it) and managed to beat GPT-4.1 is an amazing insight into the moat of the big AI labs.
rboyd
Great work! There should be a way for entities to crowdfund model training. Can a model like this be partially evaluated during training time and save cost through early stopping?
What are the best papers/resources on sota long-horizon RL?
Thanks.
anorwell
Some of the comments so far seem to be misunderstanding this submission. As I understand it:
1. Custom scaffolding (system prompt and tools) using Qwen3-32B achieved 13.75% on Terminal-Bench. No training was involved.
2. The author has built an RL system, but it has not been used for anything due to cost limitations.
So there's actually no result related to training here. It is well known that the scaffolding used can have a large impact on benchmark outcomes (the Terminal-Bench leaderboard also demonstrates this [1]).
esafak
It looks like the submission has two aspects that are being conflated.
1. Tooling for training a terminal agent.
2. An agent that was _not_ trained with this tooling but prompt-engineered. I could not find the author's discussion on this point.
enigma101
Did you consider a Kickstarter to overcome the GPU poorness? £30-50k should be doable.
bravesoul2
Wow, amazing! Amazing that a "one person band" can do this much. It crosses many skillsets.
thomasfromcdnjs
How much did you spend?
erdaltoprak
This is incredible work
After training a calculator agent via RL, I really wanted to go bigger! So I built RL infrastructure for training long-horizon terminal/coding agents that scales from 2x A100s to 32x H100s (~$1M worth of compute!). Without any training, my 32B agent hit #19 on the Terminal-Bench leaderboard, beating Stanford's Terminus-Qwen3-235B-A22B! With training... well, too expensive, but I bet the results would be good!
*What I did*:
- Created a Claude Code-inspired agent (system msg + tools)
- Built Docker-isolated GRPO training where each rollout gets its own container (sketched after this list)
- Developed a multi-agent synthetic data pipeline to generate & validate training data with Opus-4
- Implemented a hybrid reward signal of unit test verifiers & a behavioural LLM judge (also sketched below).
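For the curious, here is roughly what "each rollout gets its own container" looks like in practice. This is a minimal sketch using the `docker` Python SDK; the image tag and helper name are placeholders I've made up here, not the repo's actual code:

```python
# Minimal sketch of per-rollout container isolation (docker Python SDK).
# Image name and helper are illustrative, not the repo's exact code.
import docker

client = docker.from_env()

def run_isolated_rollout(task_cmd: str, image: str = "terminal-agent-env:latest") -> str:
    """Spin up a fresh container for one rollout, run a command inside it, tear it down."""
    container = client.containers.run(
        image,
        command="sleep infinity",   # keep the container alive for exec calls
        detach=True,
        network_disabled=True,      # isolate the rollout from the network
        mem_limit="4g",
    )
    try:
        # Every tool call from the agent is exec'd inside this rollout's own container,
        # so parallel rollouts can't clobber each other's filesystem or processes.
        exit_code, output = container.exec_run(["bash", "-lc", task_cmd])
        return output.decode()
    finally:
        container.remove(force=True)
```

This is also what makes GRPO groups cheap to parallelise: 16 rollouts of the same task can run side by side without sharing state.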
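And the hybrid reward is conceptually just a weighted blend of a hard verifier signal with a soft judge score. The 0.7/0.3 weights and function name below are illustrative assumptions, not the values from the repo:

```python
# Sketch of the hybrid reward idea: unit-test verifier + behavioural LLM judge.
# Weights are illustrative assumptions, not the repo's tuned values.
def hybrid_reward(tests_passed: int, tests_total: int, judge_score: float,
                  w_tests: float = 0.7, w_judge: float = 0.3) -> float:
    """Blend a hard verifier signal with a judge score, both in [0, 1]."""
    verifier = tests_passed / tests_total if tests_total else 0.0
    return w_tests * verifier + w_judge * judge_score
```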
*Key results*:
- My untrained Qwen3-32B agent achieved 13.75% on Terminal-Bench (#19, beats Stanford's Qwen3-235B MoE)
- I verified that training runs stably on 32x H100s distributed across 4 bare-metal nodes
- I created a mini-eval framework for LLM-judge performance. Sonnet-4 won.
- ~£30-50k needed for a full training run of 1000 epochs (I could only afford testing)
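For context, a rough back-of-envelope behind a figure like that, assuming ~$2.5/hour per on-demand H100 and ~$1.27 per GBP (both assumptions, not numbers from my runs):

```python
# Back-of-envelope cluster-time estimate. GPU and FX rates are assumptions.
gpus = 32
usd_per_gpu_hour = 2.5                            # assumed on-demand H100 rate
cluster_usd_per_hour = gpus * usd_per_gpu_hour    # ~ $80/hour for the full cluster
for gbp in (30_000, 50_000):
    usd = gbp * 1.27                              # assumed GBP -> USD rate
    days = usd / cluster_usd_per_hour / 24
    print(f"£{gbp:,} ~ {days:.0f} days of 32x H100 time")   # ~20-33 days
```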
*Technical details*:
- The synthetic dataset ranges from easy to extremely hard tasks. An example hard task's prompt:
"I found this mystery program at `/app/program` and I'm completely stumped. It's a stripped binary, so I have no idea what it does or how to run it properly. The program seems to expect some specific input and then produces an output, but I can't figure out what kind of input it needs. Could you help me figure out what this program requires?"
- Simple config presets allow training to run on multiple hardware setups with minimal effort.
- GRPO used with 16 rollouts per task, up to 32k tokens per rollout (see the group-advantage sketch after this list).
- Agent uses an XML/YAML format to structure tool calls (example below)
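For anyone unfamiliar with GRPO, the core trick is group-relative advantages: the 16 rollouts for a task are scored against each other rather than against a learned value function. A simplified sketch (the real training loop lives in the rLLM/verl stack, not in this snippet):

```python
# Simplified GRPO-style group-relative advantage: each rollout's reward is
# normalised against the other rollouts sampled for the same task.
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """rewards: shape (group_size,) — e.g. the 16 rollouts for one task."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Rollouts that beat their own group's average get positive advantage.
adv = group_relative_advantages(np.random.rand(16))
```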
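The tool-call format is roughly an XML wrapper around a YAML body. The tag, field, and tool names below are illustrative, not the repo's exact schema:

```python
# Illustrative shape of an XML-wrapped, YAML-bodied tool call and a simple parser.
# Tag/field names are made up for illustration; the real schema is in the repo.
import re
import yaml

completion = """
I'll inspect the binary first.
<tool_call>
tool: bash
args:
  command: "file /app/program && strings /app/program | head"
</tool_call>
"""

def parse_tool_calls(text: str) -> list[dict]:
    """Pull YAML bodies out of <tool_call> blocks emitted by the agent."""
    blocks = re.findall(r"<tool_call>(.*?)</tool_call>", text, flags=re.DOTALL)
    return [yaml.safe_load(b) for b in blocks]

print(parse_tool_calls(completion))
# [{'tool': 'bash', 'args': {'command': 'file /app/program && strings /app/program | head'}}]
```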
*More details*:
My GitHub repos open-source it all (agent, data, code) and have way more technical details if you're interested:
- Terminal Agent RL repo
- Multi-agent synthetic data pipeline repo
I thought I would share this because I believe long-horizon RL is going to change everybody's lives, so I feel it is important (and super fun!) for us all to share knowledge around this area, and to enjoy exploring what is possible.
Thanks for reading!
Dan
(Built using the rLLM RL framework, which was brilliant to work with, and evaluated on and inspired by the great Terminal-Bench benchmark)