
An open-source, extensible AI agent that goes beyond code suggestions

juunpp

It advertises that it runs locally and that it is "extensible" but then requires you to set up a remote/external provider as the first step of installation? That's a rather weird use of "local" and "extensible". Do words mean anything anymore?

tonygiorgio

Can’t you just run Ollama and give it a localhost endpoint? I don't think it's within scope to reproduce the whole local LLM stack when anyone wanting to do this today can easily use existing, better tools to solve that part of it.
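For what it's worth, Ollama serves an OpenAI-compatible HTTP API on localhost (port 11434 by default), so wiring an agent to a local model is mostly a matter of pointing it at that URL. A rough sketch in TypeScript, with the model name as a placeholder for whatever you've pulled locally:

    // Sketch: query a locally running Ollama server via its
    // OpenAI-compatible chat endpoint (default port 11434).
    // "llama3.1" is a placeholder model name.
    async function askLocalModel(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3.1",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    askLocalModel("List the open TODOs in this file.").then(console.log);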

demarq

Did you not see Ollama?

kylecazar

Yeah, they seem to be referring to the Goose agent/CLI that are local. Not models themselves.

alexkehayias

So I gave goose a whirl and I actually really like the approach they are taking, especially because I use emacs and not vscode. I would recommend people try it out on an existing project—the results are quite good for small, additive features and even ones that are full stack.

Here's a short writeup of my notes from trying to use it https://notes.alexkehayias.com/goose-coding-ai-agent/

XorNot

I don't know how useful this is, but my immediate reaction to the animation on the front page was "that's literally worse than the alternative".

Because the example given was "change the color of a component".

Now, it's obviously fairly impressive that a machine can go from plain text to identifying a react component and editing it...but the process to do so literally doesn't save me any time.

"Can you change the current colour of headercomponent.tsx to <some color> and increase the size vertical to 15% of vh" is a longer to type sentence then the time it would take to just open the file and do that.

Moreover, the example is in a very "standard" format. What happens if I'm not using styled components? What happens if that color is set from a function? In fact, none of the examples shown seems game-changing in any way (e.g. the Confluence example is also something a basic script, a workflow, or anything else could do, and is still essentially "two mouse clicks" rather than writing out a longer English sentence and then, I would guess, waiting substantially more time for inference to run).
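To make that concrete, here are the two cases side by side (hypothetical names, TypeScript/JSX): a color hard-coded in a styled component, where the edit is a trivial text substitution, versus a color computed by a function, where "change the color" has no single obvious line to rewrite.

    // Case 1: the "standard" styled-components form - the color is a literal.
    import styled from "styled-components";

    export const Header = styled.header`
      color: #1a1a1a;
      height: 15vh;
    `;

    // Case 2: the color comes from a function, so the change is no longer
    // a one-line substitution.
    import * as React from "react";

    function themeColor(variant: "default" | "warning"): string {
      return variant === "warning" ? "#b00020" : "#1a1a1a";
    }

    export function HeaderComponent(props: { variant: "default" | "warning" }) {
      return (
        <header style={{ color: themeColor(props.variant), height: "15vh" }}>
          Header
        </header>
      );
    }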

taneq

On the one hand, this isn’t a great example for you because you already knew how to do that. There’s probably no good way to automate trivial changes that you can make off the top of your head, and have it be faster than just doing it yourself.

I’ve found LLMs most useful for doing things with unfamiliar tooling, where you know what you want to achieve but not exactly how to do it.

On the other hand, it’s an okay test case because you can easily verify the results.

aantix

How does this compare to Cline or Cursor's Composer agent?