Launch HN: Continue (YC S23) – Create custom AI code assistants

112 comments · March 27, 2025

Hi HN. We are Nate and Ty, co-founders of Continue (https://www.continue.dev), which enables developers to create, share, and use custom AI code assistants. Today, we are launching Continue Hub and sharing what we’ve learned since our Show HN that introduced our open-source VS Code extension in July 2023 (https://news.ycombinator.com/item?id=36882146).

At Continue, we've always believed that developers should be amplified, not automated. A key aspect of this philosophy is providing choices that let you customize your AI code assistant to fit your specific needs, workflows, and preferences.

The AI-native development landscape constantly evolves with new models, MCP servers, assistant rules, etc. emerging daily. Continue's open architecture connects this ecosystem, ensuring your custom code assistants always leverage the best available resources rather than locking you into yesterday's technology.

The Continue Hub makes it even easier to customize with a registry for defining, managing, and sharing building blocks (e.g. models, rules, MCP servers, etc). These building blocks can be combined into custom AI code assistants, which you can use with our open-source VS Code and JetBrains extensions (https://github.com/continuedev/continue).

Here are a few examples of different custom AI code assistants that we’ve built to show how it works:

A custom assistant that specializes in helping with data load tool (dlt) using their MCP: https://www.loom.com/share/baf843d860f44a91b8c580063fcfbf4a?...

A custom assistant that specializes in helping with Dioxus using only models from Mistral: https://www.loom.com/share/87583774753045b1b3c12327e662ea38?...

A custom assistant that specializes in helping with LanceDB using the best LLMs from any vendor via their public APIs (Anthropic, Voyage AI, etc): https://www.loom.com/share/3059a35f8b6f436699ab9c1d1421fc8d?...

Over the last 18+ months since our Show HN, our community has rapidly grown to 25k+ GitHub stars, 12.5k+ Discord members, and hundreds of thousands of users. This happened because developers want to understand how their tools work, figure out how to better use them, and shape them to fit their development practices / environments. Continue does not constrain their creativity like the vertically integrated, proprietary black box AI code assistants that lack transparency and offer limited customizability.

Before Continue Hub, developers faced specific technical challenges when building custom AI assistants. They manually maintained separate configuration files for different models, wrestled with breaking API changes from providers, and built redundant context retrieval systems from scratch. We've seen teams spend weeks setting up systems that should take hours. Many developers abandoned the effort entirely, finding it impossible to keep up with the rapidly evolving ecosystem of models and tools.

Our open-source IDE extensions now read a standardized configuration format that fully specifies an AI code assistant's capabilities—from models and context providers to prompts and rules. Continue Hub hosts these configurations, syncs them with your IDE, and adds versioning, permissions, and sharing. Assistants are composed of atomic "blocks" that use a common YAML format, all managed through our registry with both free solo and paid team plans.
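
For illustration, here is a minimal sketch of what such an assistant definition might look like. The field names and values are illustrative only; see https://docs.continue.dev/reference for the actual schema:

```yaml
# Illustrative assistant definition; field names follow the general shape of
# the format described above but may not match the exact schema.
name: my-elixir-assistant          # hypothetical assistant name
version: 0.0.1
models:
  - name: Claude 3.7 Sonnet
    provider: anthropic
    model: claude-3-7-sonnet-latest
    roles: [chat, edit]
rules:
  - Prefer pattern matching over nested conditionals in Elixir code.
context:
  - provider: code
  - provider: docs
mcpServers:
  - name: example-mcp              # hypothetical MCP server
    command: npx
    args: ["-y", "@example/mcp-server"]
```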

We're releasing Continue 1.0 today, which includes both Continue Hub and the first major release of our Apache 2.0 licensed VS Code and JetBrains extensions. While the Hub currently only supports our IDE extensions, we've designed the underlying architecture to support other tools in the future (https://blog.continue.dev/continue-1-0). The config format is intentionally tool-agnostic—if you're interested in integrating with it or have ideas for improvement, we'd love to hear your thoughts!

bhouston

As someone who has done a lot of work with agentic coding, I am not sure specialized agents are the best solution. I think standardized knowledge packs that any agent can read to understand a domain or library would be more useful. In particular, this allows an agent to know multiple domains at the same time.

Basically knowledge packs could be specified in each npm package.json or similar.
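
A rough sketch of that idea, rendered here as YAML for consistency with the other examples in this thread (in package.json it would be the equivalent JSON; the aiKnowledge field and its keys are invented purely for illustration):

```yaml
# Hypothetical package metadata pointing agents at a "knowledge pack".
# None of these fields exist today; they only illustrate the suggestion above.
name: my-library
version: 1.2.3
aiKnowledge:
  readme: ./README.md                # overview written with agents in mind
  usageGuide: ./docs/agent-guide.md  # condensed API usage notes
  rules: ./ai/rules.md               # conventions and gotchas for this library
```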

And in a way we should view a knowledge pack as just a cache: agents these days are capable of discovering that knowledge themselves via web browsing and running tests; it is just costly to do so on every agent run, or for every library they don't know.

I sort of view specialized agents as akin to microservices: great if you have perfect domain decomposition, but likely to introduce artificial barriers and become inconvenient as the problem domain shifts away from the original decomposition design.

I guess I should write this up as a blog post or something similar.

EDIT: Newly written blog post here: https://benhouston3d.com/blog/crafting-readmes-for-ai

r_singh

> As someone who has done a lot of work with agentic coding

Can you please share what your favourite tools are, and for what exactly? It would be helpful.

I've been using Cline a lot with the PLAN + ACT modes, and Cursor for the Inline Edits, but I've noticed that for anything much larger than Claude 3.7's context window, things get less reliable and it's not worth it anymore.

Have you found a way to share knowledge packs? Any conventions? How do you manage chat histories / old tasks and do you create documentation from it for future work?

bhouston

> Can you please share what are your favourite tools and for what exactly? Would be helpful

I wrote my own open-source one here: https://github.com/drivecore/mycoder. It was covered on Hacker News here: https://news.ycombinator.com/item?id=43177117

I've also studied coding with it and written a lot about my findings here:

- https://benhouston3d.com/blog/lean-into-agentic-coding-mista...

- https://benhouston3d.com/blog/building-an-agentic-code-from-...

- https://benhouston3d.com/blog/agentic-coder-automation

- https://news.ycombinator.com/item?id=43177117

- https://benhouston3d.com/blog/the-rise-of-test-theater

My findings are generally that agentic coders are relatively interchangeable; they work primarily because of the LLM's intelligence, which is a result of the training the models are undergoing on agentic coding tasks. I think that both LLMs and agentic coding tools are converging quite quickly in terms of capabilities.

> Have you found a way to share knowledge packs? Any conventions? How do you manage chat histories / old tasks and do you create documentation from it for future work?

I've run into this wall as well. I am working on it right now. :) Here is a hint of the direction I am exploring:

https://benhouston3d.com/blog/ephemeral-software-in-the-era-...

But using GitHub as external memory is a near-term solution:

https://benhouston3d.com/blog/github-mode-for-agentic-coding

ukuina

> The recent productivity has been so high, it has the effect of making my previous efforts seem sort of pedestrian

Fantastic!

Agentic task-offloading is already here; it is just unevenly distributed.

yoz

Thanks so much for sharing your work. Your blog posts are _far_ more interesting and helpful than most of what I'm seeing about agentic coding.

I'm particularly fascinated by those last two links, along with your latest post about READMEs. It makes me wonder about a visual specification editor that provides GitHub-like task chronology around the spec, with the code as a secondary artefact (in contrast to GitHub, where code is primary).

r0b05

Very interesting. I'd like to give "GitHub" mode a try. Are you able to use some local instance instead?

CMCDragonkai

Are you using aider in your mycoder?

sestinj

On another note, this rang true to me:

> Basically knowledge packs should be specified in each npm package.json or similar.

Our YAML-based file format for assistants (https://docs.continue.dev/reference) is hoping to do just this by allowing you to "import knowledge packs".
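
A rough sketch of how composing shared blocks might look (the uses: slugs below are blocks mentioned elsewhere in this thread, but the exact import syntax shown here is illustrative):

```yaml
# Hypothetical assistant that pulls in shared "knowledge pack"-style blocks
# from hub.continue.dev; exact syntax may differ from the real schema.
name: my-assistant
version: 0.0.1
rules:
  - uses: peter-mueller/phoenix-rules   # rules block linked later in this thread
docs:
  - uses: vercel/shadcn-ui-docs         # docs block linked later in this thread
```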

Does it need to be decoupled from package.json, etc.? One of the most interesting reasons we decided not to go that route was that it can be cumbersome for all of your many dependencies to be taken into account at once. Another is the question of how the ecosystem will evolve. I definitely think that each package author should take the time to encode the rules and best practices of using their library; however, it might be difficult for the community to help out if this is gated behind getting a pull request accepted.

At the same time, one of the soon-to-be-released features we are working on is the ability to auto-generate or suggest rules (based on package.json, etc.).

bhouston

> Does it need to be decoupled from package.json, etc.?

Knowledge packs should be decoupled from package.json just as eslint rules or commit-lint rules are. You can include them in package.json or in separate files. But including pointers to the main files in a package.json helps with discovery.

All packages across all languages should support AI-friendly knowledge packs, so a level of decoupling is required.

EDIT: After thinking about it, I think README.md should just be written with agentic coders in mind. I wrote up my thoughts on that here: https://benhouston3d.com/blog/crafting-readmes-for-ai

mentalgear

how are "knowledge packs" different than just the package's README ? (if present and well written, it should be as usable to devs as to an LLM, if not, maybe consider letting the LLM write it's own "Readme" for a package on the hub be scanning source/types of the package)

sestinj

I'd say rules are quite similar to a README, just tailored to LLMs, which often benefit from slightly different information than a human would. One way to think about the difference is that we as developers have the chance to build up memory/context over time, whereas LLMs are "memoryless" so you want to efficiently load all of the necessary high-level understanding.

> consider letting the LLM write its own "README" for a package on the hub by scanning the source/types of the package

This is something we're looking to ship soon

bhouston

I agree with you on the READMEs. In response to his suggestion that I write a blog post on the idea of knowledge packages, I just spent the last 30 minutes writing it up, and by coincidence it aligns with your suggestion. Written up here:

https://benhouston3d.com/blog/crafting-readmes-for-ai

sestinj

We think about this a lot, and I think there are merits to the viewpoint. If I were to write a rule that said "make sure all code you write uses best practices", it should already be obvious enough to a good language model that this is always the case. It's "common knowledge". In some cases today there might be "common knowledge" that is a bit more rare, and the language model doesn't quite know this. I might agree that this could be obviated as well.

A situation to think about: if I were to write a rule that said "I am using Tailwind CSS for styling", then this is actually information that can't just be known. It's not "common knowledge", but instead "preference" or "personal knowledge". I do think it's a fair response to say "can't it just read my package.json?" Probably this works in a handful of cases, but I've come to find that even so there are a few benefits to custom rules that I expect to hold true regardless of LLM progress:

- It's more efficient to read a rule than to call a tool to read package.json on every request

- Especially in large enterprise codebases, the majority of knowledge is highly implicit (oftentimes in detrimental ways, but so the world works)
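
For concreteness, that Tailwind "preference" might be captured as a one-line rule in the assistant's config, roughly like this (keys illustrative):

```yaml
# A "personal knowledge" rule the model can't infer on its own.
rules:
  - We use Tailwind CSS for styling; prefer utility classes over custom stylesheets.
```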

But yeah this is a majorly important and interesting question in my mind. What types of customization will last, and which won't? A blog post would be amazing

collingreen

I was expecting more than "have a readme" tbh

serjester

I think I have trouble understanding what this is doing other than maybe some fine-tuned prompts tailored to a specific stack? I'm looking at the data science kit and I don't see why anyone would use this, much less pay for it?

I guess you guys have some MCP connections too, but this seems like such a marginal value add (how often am I really pinging these services and do I really want an agent doing that).

Regardless congrats on the launch.

sestinj

This is fair feedback—we're so early in building an ecosystem here that the prompts we've shared as starting points are relatively general. But we've already seen people start to build much more carefully crafted and specific assistants. These are some examples that we've used internally and found to be super useful:

- https://hub.continue.dev/continuedev/playwright-e2e-test

- https://hub.continue.dev/continuedev/vscode

- https://hub.continue.dev/continuedev/service-test-prompt

Importantly, you don't have to pay to use custom assistants! You can think of hub.continue.dev like the NPM registry: it's just a medium for people to create, share, and pull the assistants they want to use. It's always possible to bring your own API key; we provide a Models Add-On mostly for the convenience of not needing to keep track of API keys.
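
For example, bringing your own key might look roughly like this in an assistant's model block (field names are illustrative; see https://docs.continue.dev/reference for the actual schema):

```yaml
# Illustrative bring-your-own-key model block.
models:
  - name: Claude 3.7 Sonnet
    provider: anthropic
    model: claude-3-7-sonnet-latest
    apiKey: YOUR_ANTHROPIC_API_KEY   # your own key, instead of the Models Add-On
```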

The value prop of MCP is definitely early, but I would recommend giving Agent mode (https://docs.continue.dev/agent/how-to-use-it) a try if you haven't had the chance—there's something compelling about having the model take the obvious actions for you (though you can of course require approval first!)

changhis

Congrats on the launch! I think this is totally the right next level abstraction for AI-assisted coding. Don’t generate everything from scratch but make it easy to plug in the tools you care about and make the generation way more accurate. Way to go!

johnisgood

What does this mean exactly? I checked the website, and it seems to have very specific assistants. I do not know if I personally could take advantage of this, unless there is going to be a "C assistant" or "OCaml assistant" (or just a "coder" one) or something.

sestinj

The goal of hub.continue.dev isn't to pre-build exactly what people will need (this might not be possible). We've started with a few examples for inspiration, but the hope is that hub.continue.dev makes it easier for developers to build assistants for themselves that match their personal needs

Even within the subset of developers that use C or OCaml, there is likely to be a large variety of best practices, codebase layouts, and internal knowledge—these are the things we want to empower folks to codify.

johnisgood

Okay, that sounds cool. I hope I will be able to take advantage of this. Right now I am using LLMs (sadly not local; I can't afford GPUs, and my PC is pretty obsolete).

sestinj

Yeah, we think that MCP was a really solid building block at a lower level, but ultimately a higher-level abstraction is what will make customization really accessible. Being able to define rules, models, MCP servers, docs, prompts, and data flow all in one place seems to be important.

SlackingOff123

I use the Continue extension in both IntelliJ and VSCode and it's great. Although, I'm just connecting it to my own providers and not using your hub. So I'm more of a free-loader of the extension than a Continue customer. Anyway, thank you!

sestinj

I wouldn't say that's free-loader behavior :) It's exactly what we want to make possible—if you have strong reason to use your own models (price, convenience, security, remaining local, or other) then Continue is built for that

dcreater

Yes, exactly. And that's precisely why I use Continue. The day you start forcing your own products/models on us is the day I leave.

talos_

Congrats on the launch HN!

I've been following the IDE + LLM space. What's Continue's differentiator vs GitHub Copilot, Cursor, Cline, Claude Desktop, etc.?

What are you looking to build over the next year?

sestinj

Thank you! The biggest difference in our approach is the goal of allowing for custom assistants. What we've found is that most developers work in entirely different environments, whether that be IDE, tech stack, best practices, enterprise requirements, etc. The baseline features that we've come to expect from a coding assistant are amazing, but to truly meet people where they are it takes something different for everyone. Over the last two years we've seen tons of people customizing, and with hub.continue.dev we just want to make that accessible to all, and hope that people will share what they learn.

We're going to keep building at the edge of what's possible in the IDE, including focusing on Agent mode (https://docs.continue.dev/agent/how-to-use-it), next edit prediction, and much more. And we're going to keep making it easier to build custom assistants as the ecosystem grows!

prophesi

Very excited for this. The two pain points I've had with LLMs lately:

- Mediocre knowledge of Erlang/Elixir

- Reluctance to use the new rune feature in Svelte 5

It sounds like I'd be able to point my local LLM to the official docs for Erlang, Elixir, Phoenix, and all the dependencies in my project. And same for Svelte 5.
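
If that's right, a rough sketch of what it might look like (a local model via Ollama is assumed here, and the field names are illustrative rather than the exact schema):

```yaml
# Rough sketch: a local model plus official docs as context.
models:
  - name: Local Qwen Coder
    provider: ollama
    model: qwen2.5-coder:7b
    roles: [chat, edit]
docs:
  - name: Elixir
    startUrl: https://hexdocs.pm/elixir/
  - name: Phoenix
    startUrl: https://hexdocs.pm/phoenix/
  - name: Svelte 5
    startUrl: https://svelte.dev/docs/svelte
```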

TIPSIO

FYI I'm sure you're aware, Svelte has one of the best migration guides ever [1].

It's too large to be a Cursor rule, though. But if you dump it into Google Gemini (which is phenomenal at large context windows), it will write you a solid condensed version.

[1] https://svelte.dev/docs/svelte/v5-migration-guide

sestinj

I was actually talking to someone the other day who was building an MCP for Continue that could call the Elixir type checker / compiler (which I've heard is quite powerful). I'll need to find this and share—they were saying it made for a really powerful edit -> check -> rewrite loop in Agent mode

Also might be interesting to take a look at these Phoenix rules that someone built: https://hub.continue.dev/peter-mueller/phoenix-rules

prophesi

I didn't think of that! I'd definitely be interested in an Elixir MCP. And thank you for pointing out the Phoenix/Elixir block. You'll find me lurking in the Discord.

atonse

I found that Copilot sucked for Elixir.

But Cursor has been much better with suggestions for Elixir codebases.

Probably still not as good as JS/Python but way way way better than Copilot.

jareds

What is the accessibility status of the Continue platform? I am a totally blind developer and have found Cursor to be an absolute mess when it comes to accessibility. There are enough tools available that I'd like to know about their accessibility ahead of time, if possible, instead of spending a bunch of time trying all of them out only to find they are not accessible.

sestinj

We have support for text-to-speech in the chat window and have also worked with developers who code entirely through voice and have been quite successful with Continue.

I don't claim that we're perfect and would love to hear how we can improve if you have the chance to give it a try

jareds

What would be the best way to provide feedback? I'm not sure when I will get a chance to look at Continue, but suspect it may be after comments are closed on this thread.

sestinj

If you want to keep in touch going forward, you're welcome to join our Discord or open a GitHub issue; we'll try to be quite responsive.

dimal

I've enjoyed using Continue and really appreciated the focus on customizability.

But my problem with Continue has been the lack of stability. I'll often bounce from one tool to another, so I might not use it for a couple weeks. Almost every time I come back to it, the extension is broken in some way. I've reinstalled it many many times. I kinda gave up on it and stuck with Cody, even though I like the feature set of Continue better. (Cody eventually broke on me, too, but that's another can of worms.)

Is the Continue team aware of this stability issue and are you planning on focusing on that more now that you've launched? It seems like you've been moving fast and breaking things, which makes sense for a while, but I can't hitch my wagon to something that's going to break on me.

sestinj

I'm sorry you ran into problems repeatedly, and I couldn't agree more. As much as we've aimed to innovate on customization, it does not have to be an either/or. We recognized this, heard a lot from the community, and the goal of our 1.0 was to focus ourselves very seriously on stability. Going forward, we are going to continue to treat Continue as the foundational tool that we hope it can be. This has meant investment in testing, clearer contributing guidelines, and a general mindset shift to understand that AI code assistants have become fundamental to developer workflows and have to be rock solid (when something is doing work on every keystroke, it better not break!)

If you get the chance to try the 1.0, I'd love to hear whether you find it better, or if you think we can do even better—we think it's in a solid place and it'll always be improving from here.

dimal

That’s great to hear! I’ll give it another shot!

sqs

What broke on you when using Cody? Sorry to hear about that and want to fix it for you.

dimal

Thanks for asking. This may be more information than you were expecting, but here goes!

I tried to use it one day and was confronted by a cryptic auth error. It looked like a raw HTML error page, but it was rendering in the side panel as plain text. So I tried logging out and logging back in again. That got me a different cryptic auth error. Then I noticed I had accidentally left my VPN on, so I turned that off, but the extension seemed to have gotten stuck in some state that it couldn't recover from. I'd either get an auth error, or it simply wouldn't ever complete auth. I even reinstalled, but couldn't get it to log me in.

So I contacted support. The experience didn't exactly spark joy. Once I got a response, the support person suggested I send more details, including logs. But they didn't say where I could find those logs. I'm a customer - how would I know where the logs are? Anyway, I uploaded a video of the bug on the web tracker, but later the support person said they never got it. The upload had apparently failed, but I didn't get an error when I uploaded it, so I didn't know that.

After I asked for the location of the logs, they sent me instructions for where to find them. But I was busy and couldn't respond for a few days, so then the system sent me an automatic message saying that since they hadn't heard from me, my issue would be closed. Ugh. I sent another email saying to keep the issue open. Then I sent the logs, and the support person told me to try logging in with a token and gave me instructions. That worked! It took about a week and a half to sort it out, so I asked for a refund for my trouble. I was told that no refund or credit would be given.

This left a sour taste in my mouth. It's not about the money. Credit for lost time using the service would have been around $2. It's more about what it means about whether the company values my time and trouble, and this issue cost me a lot of both.

I hope that I'm not ruining this support person's day. My sense is that these kinds of things are usually due to training and policy, and they were probably just following their training.

It's a shame because Cody definitely has a much better UX than Continue. It does a lot of smart things by default that are really helpful. So I was ready to stick with it, but this experience definitely made me ready to try Continue again.

Hope this helps!

sqs

Thank you, and I’m sorry about that. Will look into this and fix on our side.

thelastbender12

Congrats on the release! I've been using Cursor but am somewhat annoyed with the regular IDE affordances not working quite right (absence of Pylance), and would love to go back to VSCode.

I'd love it if you leaned into pooled model usage, rather than it being an add-on. IMO it is the biggest win for Cursor usage - a reasonable number of LLM calls per month, so I never have to do token math or fiddle with API keys. Of course, it is available as a feature already (I'm gonna try Continue), but the difference in response time b/w Cursor and GitHub Copilot (who don't seem to care) is drastic.

ctxc

Quick question: what do you mean by no Pylance in Cursor?

It's basically VSCode - I just ported all extensions in Cursor settings; it installed my existing VSCode extensions, including Pylance, and it just works...

thelastbender12

I meant Pylance isn't legally available in Cursor (a VS Code license restriction, which is justified). It broke very frequently, so I switched to basedpyright, which works, but just not as well.

sestinj

Excited to hear how it goes for you!

Our Models Add-On is intended to offer the same flat monthly fee that you're accustomed to with other products. What did you mean by leaning into pooled usage? Just making it more front-and-center?

thelastbender12

Yep, exactly that. IMO agent workflows, MCP and tool usage bits are all promising, but the more common usage of LLMs in coding is still chat. AI extensions in editors just make it simple to supply context, and apply diffs.

An add-on makes it seem like an afterthought, which I'm certain you are not going for! But still, making it as seamless as possible would be great. For example, the response time for Claude in Cursor is much better than even the Claude web app for me.

sestinj

This is a good callout, we'll definitely work to improve our messaging

FloorEgg

Can someone make an assistant for Firestore security rules, and another for shadcn with the latest Tailwind CSS version? Like, yesterday?

These are the two cases where Claude 3.7/windsurf shits in my bed. :(

sestinj

Ooh +1 to both of these. We use shadcn as well :) and have been leveraging these docs: https://hub.continue.dev/vercel/shadcn-ui-docs, but there should totally be more in-depth rules for it and Firestore

outside1234

Having used the agentic GitHub Copilot in VS Code Insiders, I find it hard to understand why this is necessary, given how well that functions.

sestinj

I think Copilot plays an important role in the world of code assistants, and it's great that they've implemented Agent mode as well.

We'd actually love for them to take part in the standard we're building here—the more people build custom assistants together, the stronger the ecosystem can grow!

If I were to share any one reason why we continue to build Continue when there are so many other coding assistants, it really comes down to one word: choice. We hope to live in a world where developers can build custom prompts, rules, tools, etc. just like we currently build our own bash profiles, Vim shortcuts, and other personal or internal company tools. Lots of room and lots of space for many products, but we want to lead the way on allowing developer choice over models, prompts, and much, much more

addandsubtract

This looks great, but there's a bug on the "Remix" page that prevents me from actually customizing my own bot. Whenever there's a new API request to "/remix" (GET or POST), the form elements reset to their original values, making the changes impossible to save. At least in Firefox.

sestinj

Fix is released! Thanks for catching this so early

Should now be able to remix PyTorch rules for example: https://hub.continue.dev/starter/pytorch-rules/remix

sestinj

Thanks for the report, we should be able to fix this in the next hour or so. A workaround would be to copy the YAML definition displayed on the page of the assistant / block you want to remix. Will keep you updated!