
Crush: Glamourous AI coding agent for your favourite terminal

jsnell

I find it strange how most of these terminal-based AI coding agents have ended up with these attempts at making text UIs flashy. Tons of whitespace, line art, widgets, ascii art and gradients, and now apparently animations. And then what you don't get is the full suite of expected keybindings, tab completion, consistent scrollback, or even flicker-free text rendering. (At least this one seems to not be written with node.js, so I guess there's some chance that the terminal output is optimized to minimize large redraws?).

So they just don't tend to work at all like you'd expect a REPL or a CLI to work, despite having exactly the same interaction model of executing command prompts. But they also don't feel at all like fullscreen Unix TUIs normally would, whether we're talking editors or reader programs (mail readers, news readers, browsers).

Is this just all the new entrants copying Claude Code, or did this trend get started even earlier than that? (This is one of the reasons Aider is still my go-to; it looks and feels the way a REPL is supposed to.)

citizenpaul

Well, this specific tool is by a company called Charm, whose mission statement is making the command line glamorous. They have been around for several years prior to the LLM craze.

They make a CLI framework for golang along with tools for it.

reactordev

That’s right. Charm has been making pretty tuis since the beginning of the group. BubbleTea and VHS are amazing. Everyone should try them.

citizenpaul

Also their website features a public ssh CLI interface that is a cool demo of what the framework can do. Go hug it to death HN!

stavros

Oooh yes, VHS is amazing.

thinkxl

I came here to say this. They have been building very cool CLI projects, and those projects end up composing new, bigger projects. This is their latest one (that I know of), which uses most of the other projects they created before.

They didn't make it flashy for this project specifically (like Claude Code, which I don't think is flashy at all), but every single one of their other projects is like this.

breuleux

What bothers me is that what I like about terminals is the scrolling workflow of writing commands and seeing my actions and outputs from various sources and programs sequentially in a log. So what I want is a rich full-HTML multi-program scrolling workflow. Instead, people are combining the worst of both worlds. What are they doing? Give me superior UI in a superior rendering system, not inferior UI in an inferior rendering system, god damn it.

dedpool

You can run it inside the terminal while still using your code editor with full support for diffs and undo. It works seamlessly with IDEs like Cursor or VS Code, allowing multiple agents to work on different tasks at the same time, such as backend and frontend. The agents can also read each other's rules, including Cursor rules and Crush Markdown files.

mccoyb

Say more about what you mean by "multi-program scrolling workflow", if you don't mind

teraflop

I think what they mean by "multi-program scrolling workflow" is just what you ordinarily get in a terminal window. You run command A, and below it you see its output. You run command B, and below it you see its output. And you can easily use the scroll bar to look at earlier commands and their output.

The parent commenter seems to be asking for the same thing, but with rich text/media.

breuleux

I mean a session that isn't limited to interaction with a single program. For example, if I have an agent in this paradigm, I want to easily interleave prompting with simple commands like `ls`, all in the same history. That's not what I'm getting with apps like claude code or crush. They just take over the entire terminal, and crush even quits without leaving a trace.

__jonas

Nah, this type of text UI has been charmbracelet's whole thing since before AI agents appeared.

I quite like them; unlike traditional TUIs, the keybindings are actually intuitively discoverable, which is nice.

Arubis

I suspect some of it is that these interfaces are rapidly gaining adherents (and developers!) whose preference and accustomed usage is more graphically IDE-ish editors. Not everyone lives their life in a terminal window, even amongst devs. (Or so I’m told; I still have days where I don’t bother starting X/Wayland)

kgwgk

At least one can use Claude Code within emacs: https://github.com/stevemolitor/claude-code.el

umanwizard

You can also just run it in vterm

codemonkey-zeta

That package _does_ just run it in vterm, and it just adds automatic code links (the @path/to/file syntax), and a few more conveniences.

segmondy

you are showing how young you are. ;-) As someone who grew up in the BBS era, I'm glad this is back; colorful text-based stuff brings back joyful memories. I'm building my own terminal CLI coding agent. My plan is to make it this colorful, with ASCII art, when I'm done; I'm focused on features now.

smokel

They are easier to make than full-fledged user interfaces, so you get to see more of them.

drdaeman

Well, they all seem to have issues with multi-line selection, as those get all messed up with decorations, panes, and whatever noise is there. To the best of my awareness, the best a TUI can do is implement its own selection (so, alt-screen, mouse tracking, etc.; plenty of stuff to handle, including all the compatibility quirks) and use OSC 52 for clipboard operations, but that loses the native look-and-feel and terminal configuration.

(Technically, WezTerm's semantic zones should be the way to solve this for good - but that's WezTerm-only, I don't think any other terminal supports those.)

On the other hand, with GUIs this is not an issue at all. And YMMV, but for me copying snippets, bits of responses and commands is a very frequent operation for any coding agent, TUI, GUI or CLI.
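For reference, the OSC 52 write mentioned above is just an escape sequence the TUI prints to stdout. A minimal Go sketch (the helper name is my own; "c" selects the clipboard target):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// osc52Sequence builds the OSC 52 escape sequence that asks the
// terminal emulator to place text on the system clipboard.
// The payload is base64-encoded; BEL (\a) terminates the sequence.
func osc52Sequence(text string) string {
	encoded := base64.StdEncoding.EncodeToString([]byte(text))
	return "\x1b]52;c;" + encoded + "\a"
}

func main() {
	// Printing this to a terminal that supports OSC 52 copies the text.
	fmt.Print(osc52Sequence("hello from the TUI"))
}
```

Whether this works at all depends on the terminal emulator; many disable OSC 52 clipboard writes by default for security reasons.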

knoopx

this is debatable; a proper TUI has the same complexities as conventional UIs, plus legacy rendering.

wonger_

Flashy TUIs have been around for a few years. Check out the galleries for TUI frameworks:

https://ratatui.rs/showcase/apps/

https://github.com/charmbracelet/bubbletea/tree/main/example...

https://textual.textualize.io/

I've been drafting a blog post about their pros and cons. You're right, text input doesn't feel like a true REPL, probably because they're not using readline. And we see more borders and whitespace because people can afford the screen space.

But there are perks like mouse support, discoverable commands, and color cues. Also, would you prefer someone make a mediocre TUI or a mediocre GUI for your workflows?

tptacek

One nice thing about this is that it's early days for this, and the code is really clear and schematic, so if you ever wanted a blueprint for how to lay out an agent with tool calls and sessions and automatic summarization and persistence, save this commit link.

chrisweekly

Thanks for the tip! I trust your judgement so this repo just got more interesting for me.

tekacs

For anyone else who wants to actually be able to _read_ what's happening in the demo GIF, I slowed it down in ffmpeg and converted it to video form:

https://share.cleanshot.com/XBXQbSPP

Aperocky

The big question: which one of these new agents can consume local models to a reasonable degree? I would like to ditch the dependency on external APIs, and I'm willing to trade some performance for it.

jasonm23

Crush has an open issue (2 weeks old) to add Ollama support; it's in progress.

ggerganov

They should add "custom endpoint" support instead [0].

[0] https://github.com/microsoft/vscode/issues/249605

amdivia

FYI it works already even without this feature branch (you'll just have to add your provider and models manually)

``` { "providers": { "ollama": { "type": "openai", "base_url": "http://localhost:11434/v1", "api_key": "ollama", "models": [ { "id": "llama3.2:3b", "model": "Llama 3.2 3B", "context_window": 131072, "default_max_tokens": 4096, "cost_per_1m_in": 0, "cost_per_1m_out": 0 } ] } } } ```

segmondy

why?

it's basic; edit the config file. I just downloaded it; in ~/.cache/share/crush/providers.json, add your own provider or edit an existing one.

Edit api_endpoint, done.

Aperocky

nice, that would be my reason to use Crush.

tempodox

Me too.

0x457

Most of these agents work with any OpenAI compatible endpoints.

oceanplexian

Actually not really.

I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.

Claude code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy and also isn't supported natively and the tool calling is somewhat messed up.

Warp seemed promising, until I found out the founders would rather alienate their core demographic despite ~900 votes on the GH issue to allow local models https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app, even Cursor provides some basic support for an OpenAI endpoint.

spmurrayzzz

> I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support

Almost from day one of the project, I've been able to use local models. Llama.cpp worked out of the box with zero issues, same with vllm and sglang. The only tweak I had to make initially was manually changing the system prompt in my fork, but now you can do that via their custom modes features.

The ollama support issues are specific to that implementation.

simonw

I still haven't seen any local models served by Ollama handle tool calls well via that OpenAI endpoint. Have you had any success there?

bachittle

LM Studio is probably better in this regard. I was able to get LM Studio to work with Cursor, a product known for specifically avoiding support for local models. The only requirement: if it uses servers as a middle-man, which is what Cursor does, you need to port forward.

steveharman

Just use Claude Router? Supports Ollama and most others.

https://github.com/musistudio/claude-code-router

jmj

OpenHands lets you set any LLM you want. https://github.com/All-Hands-AI/OpenHands

elliotec

Aider says they do, but I haven’t tried it.

https://aider.chat/docs/llms.html

phaedrix

Aider has built in support for lm studio endpoints.

https://aider.chat/docs/llms/lm-studio.html

null

[deleted]

sharperguy

What happens if you just point it at its own source and ask it to add the feature?

segmondy

it will add the feature. I saw OpenAI claim that developers are adding their own features, saw Anthropic make the same claim, and Aider's Paul often says Aider wrote most of its own code. I started building my own coding CLI for the fun of it, and then I thought, why not have it start developing features, and it does too. It's as good as the model. For ish and giggles, I just downloaded crush, pointed it to a local qwen3-30b-a3b, which is a very small model, and had it load the code, refactor itself, and point out bugs. I have never used LSP, and just wanted to see how it performs compared to treesitter.

segmondy

all of them, you can even use claude-code with a local model

navanchauhan

sst/opencode

metadat

But only a few models can actually execute commands effectively.. what is it, Claude and Gemini? Did I miss any?

rekram1-node

Kimi K2, Qwen3-Coder

beanjuiceII

has a ton of bugs

tough

works quite reliably with claude-code-router and any agentic SOTA model (not necessarily anthropic)

cristea

I would love a comparison between all these new tools, like this with Claude Code, opencode, aider and cortex.

I just can’t get an easy overview of how each tool works and is different

riotnrrd

One of the difficulties -- and one that is currently a big problem in LLM research -- is that comparisons with or evaluations of commercial models are very expensive. I co-wrote a paper recently and we spent more than $10,000 on various SOTA commercial models in order to evaluate our research. We could easily (and cheaply) show that we were much better than open-weight models, but we knew that reviewers would ding us if we didn't compare to "the best."

Even aside from the expense (which penalizes universities and smaller labs), I feel it's a bad idea to require academic research to compare itself to opaque commercial offerings. We have very little detail on what's really happening when, for example, OpenAI does inference. And their technology stack and model can change at any time, and users won't know unless they carefully re-benchmark ($$$) every time they use the model. I feel that academic journals should discourage comparisons to commercial models, unless we have very precise information about the architecture, engineering stack, and training data they use.

tough

you have to separate the model from the interface, imho.

you can totally evaluate these as GUIs, CLIs, and TUIs with more or fewer features and connectors.

Model quality is about benchmarks.

aider is great at showing benchmarks for their users

gemini-cli now tells you the % of correct tool calls at the end of a session

imjonse

This used to be opencode but was renamed after some fallout between the devs I think.

moozilla

If anyone is curious on the context:

https://x.com/thdxr/status/1933561254481666466

https://x.com/meowgorithm/status/1933593074820891062

https://www.youtube.com/watch?v=qCJBbVJ_wP0

Gemini summary of the above:

- Kujtim Hoxha creates a project named TermAI using open-source libraries from the company Charm.

- Two other developers, Dax (a well-known internet personality and developer) and Adam (a developer and co-founder of Chef, known for his work on open-source and developer tools), join the project.

- They rebrand it to OpenCode, with Dax buying the domain and both heavily promoting it and improving the UI/UX.

- The project rapidly gains popularity and GitHub stars, largely due to Dax and Adam's influence and contributions.

- Charm, the company behind the original libraries, offers Kujtim a full-time role to continue working on the project, effectively acqui-hiring him.

- Kujtim accepts the offer. As the original owner of the GitHub repository, he moves the project and its stars to Charm's organization. Dax and Adam object, not wanting the community project to be owned by a VC-backed company.

- Allegations surface that Charm rewrote git history to remove Dax's commits, banned Adam from the repo, and deleted comments that were critical of the move.

- Dax and Adam, who own the opencode.ai domain and claim ownership of the brand they created, fork the original repo and launch their own version under the OpenCode name.

- For a time, two competing projects named OpenCode exist, causing significant community confusion.

- Following the public backlash, Charm eventually renames its version to Crush, ceding the OpenCode name to the project now maintained by Dax and Adam.

canadaduane

This is like game of thrones, dev edition. Thanks for the background.

/me up and continues search for good people and good projects.

beanjuiceII

yea two of the devs did a crazy rug pull

tough

lmao weirdest stuff ever on X and it seems like nobody cares anymore?

paradite

The performance not only depends on the tool, it also depends on the model, and the codebase you are working on (context), and the task given (prompt).

And all these factors are not independent. Some combinations work better than others. For example:

- Claude Sonnet 4 might work well for feature implementation on backend Python code using Claude Code.

- Gemini 2.5 Pro works better for bug fixes on frontend React codebases.

...

So you can't just test the tools alone and keep everything else constant. Instead you get a combinatorial explosion of tool * model * context * prompt to test.

16x Eval can tackle parts of the problem, but it doesn't cover factors like tools yet.

https://eval.16x.engineer/

alixanderwang

Played around with it for a serious task for 15 mins. Compared to Claude Code:

Pros:

- Beautiful UI

- Useful sidebar, keep track of changed files, cost

- Better UX for accepting changes (has hotkeys, shows nicer diff)

Cons:

- Can't combine models. Claude Code using a combination of Haiku for menial search stuff and Sonnet for thinking is nice.

- Adds a lot of unexplained junk binary files in your directory. It's probably in the docs somewhere I guess.

- The initial init makes some CHARM.md that tries to be helpful, but nothing it had seemed like something I'd want the model to know. Just simple stuff, like: my Go tests use PascalCasing, e.g. TestCompile.

- Ctrl+C to exit crashed my terminal.

jimmcslim

> The initial init makes some CHARM.md

Oh god please no... can we please just agree on a standard for a well-known single agent instructions file, like AGENT.md [1] perhaps (and yes, this is the standard being shilled by Amp for their CLI tool, I appreciate the irony there). Otherwise we rely on hacks like this [2]

[1] https://ampcode.com/AGENT.md

[2] https://kau.sh/blog/agents-md/

r0dms

I’ve been playing with Crush over the past few weeks and I’m genuinely bullish on its potential.

I've been following Charm for some time, and they're one of the few groups that get DX and consistently ship tools that developers love. Love seeing them join the AI coding race. Still early days, but this is clearly a tool made by people who actually use it.

bazhand

Very interesting to see so many new TUI tools for llm.

Opencode allows auth via Claude Max, which is a huge plus over requiring an API key (ANTHROPIC_API_KEY).

anonzzzies

Another one, but indeed very nice looking. Will definitely be testing it.

What I miss from all of these (EDIT: I see opencode has this for GitHub) is being able to authenticate with the monthly paid services: GitHub Copilot, Claude Code, OpenAI Codex, Cursor, etc.

That would be the best addition; I have these subscriptions and might not like their interfaces, so it would be nice to be able to switch.

MrGreenTea

I don't think most of these allow other tools to "use" the monthly subscription. Because of that you need an API key and have to pay per token. Even Claude Code for a while did not use your Claude subscription.

anonzzzies

But now they have a subscription for Claude Code, Copilot has a sub, and some others do too. They might not allow it, but whatever; we are paying, so what's the big deal?

NitpickLawyer

> LSP-Enhanced: Crush uses LSPs for additional context, just like you do

This is the most interesting feature IMO, interested to see how this pans out. The multiple sessions / project also seems interesting.

esafak

There are LSP MCPs so you can use them with other agents too.

NitpickLawyer

I'm not really into golang, but if I read this [1] correctly, they seem to append the LSP stuff to every prompt, automatically after each tool call that supports it? It seems a bit more "integrated" than just an MCP.

[1] - https://github.com/charmbracelet/crush/blob/317c5dbfafc0ebda...

tough

remarkably both Cursor and Zed do this (GUI, not CLI)

mbladra

Woah I love the UI. Compared to the other coding agents I've used (eg. Claude Code, aider, opencode) this feels like the most enjoyable to use so far.. Anyone try switching LLM providers with it yet? That's something I've noticed to be a bit buggy with other coding agents

bachittle

Bubble Tea has always been an amazing TUI. I find React TUI (which is what Claude Code uses) to be buggy and always have to work against it.

tbeseda

Agreed. Charm has a solid track record of great TUIs. While I appreciate a good DSL, I don't think React for a TUI (via ink) is working out well.

petesergeant

Yes, me too. The inline syntax highlighting is very nice. I hope CC steals liberally.