
Show HN: qqqa – a fast, stateless LLM-powered assistant for your shell

31 comments

· November 6, 2025

I built qqqa as an open-source project because I was tired of bouncing between the shell and ChatGPT in the browser for rather simple commands. It comes with two binaries: qq and qa.

qq means "quick question" - it is read-only, perfect for the commands I always forget.

qa means "quick agent" - it is qq's sibling that can run things, but only after showing its plan and getting approval from the user.

It is built entirely around the Unix philosophy of focused tools, stateless by default - pretty much the opposite of what most coding agents focus on.

Personally I've had the best experience using Groq + gpt-oss-20b, as it feels almost instant (up to 1k tokens/s according to Groq) - but any OpenAI-compatible API will do.

Curious if the HN crowd finds it useful - and of course, AMA.

d4rkp4ttern

I built a similar tool called “lmsh” (LM shell) that uses Claude Code's non-interactive mode (hence no API keys needed, since it uses your CC subscription): it presents the shell command on a REPL-like line that you can edit first and hit enter to run. I used Rust to make it a bit snappier:

https://github.com/pchalasani/claude-code-tools?tab=readme-o...

It’s pretty basic and could be improved a lot. E.g. make it use Haiku or codex-CLI with low thinking, etc. Another thing would be having it bypass reading CLAUDE.md or AGENTS.md. (PRs, anyone? ;)

baalimago

For inspiration (and, ofc, PRs, since I'm salty that this gets attention while my pet project doesn't), you can check out clai[0], which works very similarly but has a year or so's worth of development behind it.

So feature suggestions:

* Pipe data into qq ("cat /tmp/stacktrace | qq What is wrong with this: ")

* Profiles (qq -profile legal-analysis Please checkout document X and give feedback)

* Conversations (this is simply appending a new message to a previous query)

[0]: https://github.com/baalimago/clai/blob/main/EXAMPLES.md
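The first suggestion (piping data into qq) is straightforward to sketch in Rust, which the thread mentions as the implementation language of tools like this. This is only an illustration, not qqqa's actual code; the function names are hypothetical, and the idea is: if stdin is not a terminal, treat whatever was piped in as context and prepend it to the question.

```rust
use std::io::{self, IsTerminal, Read};

// Hypothetical helper: combine optional piped input with the question.
fn build_prompt(piped: Option<&str>, question: &str) -> String {
    match piped {
        Some(data) => format!("{data}\n\n{question}"),
        None => question.to_string(),
    }
}

// Hypothetical helper: read stdin only when it is a pipe, not a terminal,
// so interactive `qq question` never blocks waiting for input.
fn read_piped_stdin() -> Option<String> {
    let stdin = io::stdin();
    if stdin.is_terminal() {
        None
    } else {
        let mut buf = String::new();
        stdin.lock().read_to_string(&mut buf).ok()?;
        Some(buf)
    }
}
```

With this shape, `cat /tmp/stacktrace | qq "What is wrong with this:"` would produce a single prompt containing the stack trace followed by the question.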

krzkaczor

This is nice. Reminds me how in warp terminal you can (could?) just type `# question` and it would call some LLM under the hood. Good UX.

iagooar

Thank you - appreciate it. I really tried to create something simple that solves one problem really well.

RamtinJ95

This looks really cool and I love the idea, but I will stick with `opencode run "query"`. For specific agents that use specific models, I can just configure that in an agent.md and then run `opencode run "query" -agent quick`.

iagooar

I think it is more about what it doesn’t do. It is not a coding agent. It is a lightweight assistant, Unix style “Do One Thing and Do It Well”.

https://en.wikipedia.org/wiki/Unix_philosophy

armcat

One mistake in your README - groq throughput is actually 1000 tokens per "second" (not "minute"), for gpt-oss-20b.

iagooar

Nice catch - fixed!

CGamesPlay

Looks interesting! Does it support multiple tool calls in a chain, or only terminating with a single tool use?

Why is there a flag to not upload my terminal history and why is that the default?

iagooar

Thanks!

It does not support chaining multiple tool calls - if it did, it would not be a lightweight assistant anymore, I guess.

The history is there to allow referencing previous commands - but now that I think about it, it should clearly not be on by default.

Going to roll out a new version soon. Thanks for the feedback!

CGamesPlay

Given that it doesn't support multiple tool calls, one thing I noticed that is not ideal is that it seems to buffer stdout and stderr. This means that I don't see any output if the command takes 10 minutes, and I also can't see stdout mixed with stderr. It would be ideal to actually "exec" the target process instead, honestly. https://doc.rust-lang.org/std/os/unix/process/trait.CommandE...
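The linked Rust docs describe `CommandExt::exec`, which on Unix replaces the current process image with the target command instead of spawning a child and capturing its output. A minimal sketch of that suggestion (the function name here is illustrative, not from qqqa):

```rust
use std::io;
use std::os::unix::process::CommandExt;
use std::process::Command;

// On success, exec() never returns: the target command takes over the
// process and inherits the terminal's stdin/stdout/stderr directly, so
// output streams live and stdout/stderr stay interleaved, with no
// buffering in between. It only returns when the exec itself fails
// (e.g. the binary is not found).
fn exec_replace(cmd: &str, args: &[&str]) -> io::Error {
    Command::new(cmd).args(args).exec()
}
```

The trade-off is that after `exec` the tool can no longer inspect the command's output or exit status itself, which matters if it ever wants to post-process results.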

NSPG911

Very cool, can be useful for simple commands, but I find GitHub CLI's Copilot extension useful for this. I just do `ghcs <question>` and it gives me a command; I can ask it how it works, make it better, copy it, or run it.

silentsanctuary

I like using ghcs for this as well! Or at least, I liked to - it's deprecated now in favor of the new CLI, which doesn't provide the same functionality.

https://github.com/github/gh-copilot/commit/c69ed6bf954986a0...

https://github.com/github/copilot-cli/issues/53

jcmontx

why use this and not claude code?

iagooar

"Do One Thing and Do It Well" - https://en.wikipedia.org/wiki/Unix_philosophy

Also, groq + gpt-oss is so much faster than Claude.

flashu

Good one, but I do not see a release for macOS :(

iagooar

Darwin is the macOS release - I should make that clear and will update the readme. Thanks.

shellfishgene

I don't see any binaries on github?

flashu

That was my point - nothing in the releases on GH.

iagooar

And of course, if you find any bugs or have feature requests, report them via issues on GitHub.

kissgyorgy

There is also the llm tool written by simonwillison: https://github.com/simonw/llm

I personally use "claude -p" for this

iagooar

Compared to the llm tool, qqqa is as lightweight as it gets. In the Ruby world it would be Sinatra, not Rails.

I have no interest in adding too many complex features. It is supposed to be fast and get out of your way.

Different philosophies.
