LLM Chat via SSH
19 comments · June 14, 2025
gbacon
Wow, that produced a flashback to using TinyFugue in the 90s.
amelius
I'd rather apt-get install something.
But that no longer seems possible with modern software distribution, especially for GPU-dependent stuff like LLMs.
So yeah, I get why this exists.
ryancnelson
This is neat... whose Anthropic credits am I using, though? Sonnet 4 isn't cheap! Would I hit a rate limit if I used this for daily work?
gclawes
Is this doing local inference? If so, what inference engine is it using?
demosthanos
No, it's a thin wrapper around an API, probably OpenRouter or similar:
https://github.com/ccbikai/ssh-ai-chat/blob/master/src/ai/in...
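For context, a thin wrapper like this mostly just forwards the prompt to an OpenAI-compatible chat endpoint. A minimal sketch in TypeScript, assuming an OpenRouter-style API (the endpoint, model id, and env var name are illustrative assumptions, not lifted from the project's source):

    // Hedged sketch: forward one prompt to an OpenAI-compatible
    // chat completions endpoint and return the reply text.
    // Endpoint, model id, and env var are assumptions for illustration.
    async function chat(prompt: string): Promise<string> {
      const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
        },
        body: JSON.stringify({
          model: "anthropic/claude-sonnet-4", // assumed model id
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }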
gsibble
We made this a while ago on the web:
ccbikai
I am the author. Thank you for your support.
You're welcome to help maintain it with me.
t0ny1
Does this project send requests to LLM providers?
cap11235
Are you serious? Yeah, it's using Gemini 2.5 Pro without a server, sure, yeah.
eisbaw
Why not telnet?
kimjune01
Hey, I just tried it. It's cool! I wish it was more self-aware.
ccbikai
Thank you for your feedback; I will optimize the prompt.
dncornholio
Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling issues with rendering.
demosthanos
I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you're referring to by "handling issues with rendering", though; it looks like a pretty bog-standard React app. Am I missing something?
Skimming the source code, I was really confused to see TSX files. I'd never seen Ink (React for CLIs) before, and I like it! A minimal hello-world is below.
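For anyone else meeting Ink for the first time: it renders React components to the terminal instead of the DOM. A minimal sketch using Ink's documented render and Text exports:

    import React from 'react';
    import { render, Text } from 'ink';

    // Ink diffs the React component tree and writes the result to
    // stdout, so <Text> produces styled terminal output, not HTML.
    const App = () => <Text color="green">Hello from the terminal</Text>;

    render(<App />);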
Previous discussions of Ink:
July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961
May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837
Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639