
LLM Chat via SSH

19 comments · June 14, 2025

demosthanos

Skimming the source code I got really confused to see TSX files. I'd never seen Ink (React for CLIs) before, and I like it!

Previous discussions of Ink:

July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961

May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837

Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639
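For anyone else who hadn't seen it: Ink renders React components to the terminal instead of the DOM. A minimal sketch of what that looks like (not taken from ssh-ai-chat; the component and its behavior are illustrative only, using Ink's documented render/Box/Text/useInput APIs):

    import React, {useState} from 'react';
    import {render, Box, Text, useInput} from 'ink';

    // Tiny chat-style prompt: collects keystrokes and echoes each line on Enter.
    const Echo = () => {
      const [input, setInput] = useState('');
      const [lines, setLines] = useState<string[]>([]);

      useInput((char, key) => {
        if (key.return) {
          setLines(prev => [...prev, input]);
          setInput('');
        } else {
          setInput(prev => prev + char);
        }
      });

      return (
        <Box flexDirection="column">
          {lines.map((line, i) => (
            <Text key={i} color="green">you: {line}</Text>
          ))}
          <Text>{'> '}{input}</Text>
        </Box>
      );
    };

    render(<Echo />);

Run it with something like npx tsx echo.tsx in a real TTY and it behaves like any other interactive CLI, just declared as components.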

gbacon

Wow, that produced a flashback to using TinyFugue in the 90s.

https://tinyfugue.sourceforge.net/

https://en.wikipedia.org/wiki/List_of_MUD_clients

amelius

I'd rather apt-get install something.

But that doesn't seem to be an option with modern software distribution, especially for GPU-dependent stuff like LLMs.

So yeah, I get why this exists.

ryancnelson

This is neat... whose Anthropic credits am I using, though? Sonnet 4 isn't cheap! Would I hit a rate limit if I used this for daily work?

gclawes

Is this doing local inference? If so, what inference engine is it using?

demosthanos

No, it's a thin wrapper around an API, probably OpenRouter or similar:

https://github.com/ccbikai/ssh-ai-chat/blob/master/src/ai/in...
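For anyone wondering what "thin wrapper" means in practice, it's roughly the sketch below, assuming an OpenAI-compatible /chat/completions endpoint; the base URL, env var names, and model here are placeholders for illustration, not what ssh-ai-chat actually configures:

    // Minimal chat-completion call against an OpenAI-compatible API.
    // BASE_URL, API_KEY, and the model name are assumptions for illustration.
    const BASE_URL = process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1';
    const API_KEY = process.env.OPENAI_API_KEY ?? '';

    interface ChatMessage {
      role: 'system' | 'user' | 'assistant';
      content: string;
    }

    async function chat(messages: ChatMessage[], model = 'gpt-4o-mini'): Promise<string> {
      const res = await fetch(`${BASE_URL}/chat/completions`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${API_KEY}`,
        },
        body: JSON.stringify({model, messages}),
      });
      if (!res.ok) {
        throw new Error(`API error: ${res.status} ${await res.text()}`);
      }
      const data = await res.json();
      return data.choices[0].message.content;
    }

    // The SSH side just feeds user input in and prints whatever comes back.
    chat([{role: 'user', content: 'Hello over SSH!'}]).then(console.log);

So the model, rate limits, and who pays for tokens are all decided by whoever runs the server and its API keys.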

gsibble

We made this a while ago on the web:

https://terminal.odai.chat

ccbikai

I am the author. Thank you for your support.

You're welcome to help me maintain it.

t0ny1

Does this project send requests to LLM providers?

cap11235

Are you serious? Yeah, it's using Gemini 2.5 Pro without a server, sure yeah.

eisbaw

Why not telnet?

accrual

I'd love to see an LLM outputting over a Teletype. Just tschtschtschtsch as it hammers away the paper feed.

cap11235

Last week or so, there was an LLM finetune posted that speaks like a 19th-century Irish author. I'm somewhat looking forward to an LLModem model.

RALaBarge

No HTTPS support

benterix

I bet someone can write an API Gateway for this...

kimjune01

Hey, I just tried it. It's cool! I wish it were more self-aware.

ccbikai

Thank you for your feedback; I will optimize the prompt.

dncornholio

Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling rendering issues.

demosthanos

I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you're referring to by "handling issues with rendering", though—it looks like a pretty bog standard React app. Am I missing something?