Launch HN: Onyx (YC W24) – The open-source chat UI
30 comments · November 25, 2025

dannylmathews
The license on this project is pretty confusing. The license at the root of the project links to backend/cc/LICENSE.md which says you need a subscription license to use the code.
Can you call it open source if you need a subscription license to run / edit the code?
asdev
Aren't most of the large frontier model providers SOC 2 compliant? I think AWS Bedrock is also SOC 2 compliant. Not sure why you would need to self-host anything then, as you'll get turnkey secure solutions from the bigger players.
hobofan
Congrats on the launch!
We are building a competing open source tool[0] with a very similar focus (strongly relying on interoperable standards like MCP; built for enterprise needs, etc.), though bootstrapping with customers rather than being VC funded. It's nice to see a competitor in the field following similar "OSS Friends" principles, while many of the other ones seem to have strong proprietary tendencies.
(Small heads up: The "view all integrations" button goes to a 404)
_pdp_
> We’re building an open-source chat that works
As long as you have Pricing on your website, your product is not open source in the true spirit of open source. It is open code, for sure, but it is a business, and the incentive is to run it like a business, which will conflict with how the project is used by the community.
Btw, there is nothing wrong with that, but let's be honest here: if you get this funded (perhaps it already is), who are you going to align your mission with - the open source community or shareholders? I don't think you can do both, especially if a strong competitor comes along that simply deploys the same version of the product. We have seen this story many times before.
Now, this is completely different from, say, Onyx being an enterprise search product with a community-driven version alongside it. You might say that fundamentally it is the same code, but the way it is presented is different. Nobody will think of that as open source, but rather as "the source is available" if you want to check.
I thought it might help to share this perspective here.
Btw, I hear good things about Onyx and I have heard that some enterprises are already using it - the open-source version.
wg0
Actually - if you have a bunch of VCs on your back, you can't even align with your very own user base, let alone any wider community.
tomasphan
This is great, the value is there. I work for a F100 company that is trying (and failing) to build this in house because every product manager fundamentally misunderstands that users just want a chat window for AI, not to make their own complicated agents. Your biggest competition in the enterprise space, Copilot, has terrible UI and we only put up with it because it has access to email, SharePoint and Teams.
Weves
Haha, yeah, we've seen that exact story many times! Teams dissatisfied with Copilot build a (not great) internal solution that's missing polish plus most of the "advanced" feature set.
katzskat
I immediately thought of Google's Agentspace when I saw this product. The value for me sits in its ability to do RAG via connectors.
Weves
RAG + connectors is a huge reason why people deploy Onyx (enterprise search roots means we do a pretty good job there).
Also, open-source works really well here, since connectors are a long-tail game. We've tried to make it easy to add connectors (a single python interface), and as a result over half of our connectors are contributed by the community. We expect that percentage to grow over time. This means that compared to something like Agentspace, we'll very likely be connected to all of the key tools at your company (and if we aren't, you can easily add an integration).
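For a sense of what "a single python interface" for connectors might look like, here's a minimal sketch. All names here (`Connector`, `Document`, `load_documents`) are illustrative assumptions, not Onyx's actual API:

```python
# Hypothetical sketch of a single-interface connector design; the class and
# method names are illustrative, not Onyx's real API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Document:
    """A unit of indexable content pulled from some source system."""
    id: str
    text: str
    source: str


class Connector(ABC):
    """A connector only has to know how to yield documents from its source."""

    @abstractmethod
    def load_documents(self) -> Iterator[Document]:
        ...


class InMemoryConnector(Connector):
    """Trivial example connector that serves documents from a local list."""

    def __init__(self, docs: list[Document]) -> None:
        self.docs = docs

    def load_documents(self) -> Iterator[Document]:
        yield from self.docs
```

The appeal of this shape is that a community contributor only implements one method; indexing, permissions, and search stay in the core.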
rao-v
I was pretty excited for Onyx as a way to stand up a useful open source RAG + LLM at small scale but as of two weeks ago it was clearly full of features ticked off a list that nobody has actually tried to use. For example, you can scrape sites and upload docs but you can’t really keep track of what’s been processed within the UI or map back to the documents cleanly.
It’s nice to see an attempt at an end-to-end stack (for all that it seems "obvious"… there are not that many functional options), but wow, we’ve forgotten the basics of making useful products. I’m hoping it gets enough time to bake.
Weves
Really appreciate the feedback (and glad to hear the core concept resonated with you).
The admin side of the house has been missing a bit of love, and we have a large overhaul coming soon that I'm hoping addresses some (most?) of your concerns. For now, if you'd like to view documents that have been processed, you can check out the `Explorer` panel on the left.
In general, I'd love to hear more about what gives it that "unbaked" feel for you if you're up for a quick chat.
panki27
What does this do that OpenWebUI (or one of the many of other solutions) does not?
pablo24602
Congrats on the launch! Every enterprise deserves to use a beautiful AI chat UI (and Onyx is a fantastic and easy to try option).
dberg
how is this different from Librechat?
Weves
Some of the key differences:
1/ Large connector suite + good RAG. Answering questions at scale is hard, and from our enterprise search roots, we've spent a lot of time on it. It's something that many teams expect from their chat UI.
2/ deep research + open-source code interpreter.
3/ simpler UX. LibreChat has a lot of customizability exposed front and center to the user, which is great for the power user but can be overwhelming for someone new to using AI systems.
hobofan
LibreChat has recently been acquired by ClickHouse, so who knows what their future holds.
mentalgear
A bit like mastra.ai - my go-to SOTA solution for these kinds of LLM flow coordinations (though that one is more dev-focused). (Yes, I realize this is more user-facing.)
KaoruAoiShiho
I've been using Cherry Studio, works great.
Hey HN, Chris and Yuhong here from Onyx (https://github.com/onyx-dot-app/onyx). We’re building an open-source chat that works with any LLM (proprietary + open weight) and gives these LLMs the tools they need to be useful (RAG, web search, MCP, deep research, memory, etc.).
Demo: https://youtu.be/2g4BxTZ9ztg
Two years ago, Yuhong and I had the same recurring problem. We were on growing teams and it was ridiculously difficult to find the right information across our docs, Slack, meeting notes, etc. Existing solutions required sending out our company's data, lacked customization, and frankly didn't work well. So, we started Danswer, an open-source enterprise search project built to be self-hosted and easily customized.
As the project grew, we started seeing an interesting trend—even though we were explicitly a search app, people wanted to use Danswer just to chat with LLMs. We’d hear, “the connectors, indexing, and search are great, but I’m going to start by connecting GPT-4o, Claude Sonnet 4, and Qwen to provide my team with a secure way to use them”.
Many users would add RAG, agents, and custom tools later, but much of the usage stayed ‘basic chat’. We thought: “why would people co-opt an enterprise search app when other AI chat solutions exist?”
As we continued talking to users, we realized two key points:
(1) just giving a company secure access to an LLM with a great UI and simple tools is a huge part of the value add of AI
(2) providing this well is much harder than you might think and the bar is incredibly high
Consumer products like ChatGPT and Claude already provide a great experience—and chat with AI for work is something (ideally) everyone at the company uses 10+ times per day. People expect the same snappy, simple, and intuitive UX with a full feature set. Getting hundreds of small details right to take the experience from “this works” to “this feels magical” is not easy, and nothing else in the space has managed to do it.
So ~3 months ago we pivoted to Onyx, the open-source chat UI with:
- (truly) world-class chat UX. Usable both by a fresh college grad who grew up with AI and an industry veteran who’s using AI tools for the first time.
- Support for all the common add-ons: RAG, connectors, web search, custom tools, MCP, assistants, deep research.
- RBAC, SSO, permission syncing, easy on-prem hosting to make it work for larger enterprises.
Through building features like deep research and code interpreter that work across model providers, we've learned a ton of non-obvious things about engineering LLMs that have been key to making Onyx work. I'd like to share two that were particularly interesting (happy to discuss more in the comments).
First, context management is one of the most difficult and important things to get right. We’ve found that LLMs really struggle to remember both system prompts and previous user messages in long conversations. Even simple instructions like “ignore sources of type X” in the system prompt are very often ignored. This is exacerbated by multiple tool calls, which can often feed in huge amounts of context. We solved this problem with a “Reminder” prompt—a short 1-3 sentence blurb injected at the end of the user message that describes the non-negotiables that the LLM must abide by. Empirically, LLMs attend most to the very end of the context window, so this placement gives the highest likelihood of adherence.
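The reminder technique above can be sketched in a few lines. This is a minimal illustration, assuming the common OpenAI-style message schema; the exact wording and plumbing in Onyx may differ:

```python
# Minimal sketch of the "Reminder" prompt technique: append the non-negotiable
# instructions to the end of the latest user message, where models empirically
# attend most. Illustrative only; not Onyx's actual implementation.
def inject_reminder(messages: list[dict], reminder: str) -> list[dict]:
    """Return a copy of `messages` with `reminder` appended to the last user turn."""
    out = [dict(m) for m in messages]
    for m in reversed(out):
        if m["role"] == "user":
            m["content"] = f'{m["content"]}\n\nREMINDER: {reminder}'
            break
    return out


history = [
    {"role": "system", "content": "Ignore sources of type X."},
    {"role": "user", "content": "Summarize the Q3 planning docs."},
]
prepped = inject_reminder(history, "Ignore sources of type X.")
```

Because the reminder is injected at send time rather than stored, the conversation history stays clean and the blurb can be regenerated per turn as tool results pile up.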
Second, we’ve needed to build an understanding of the “natural tendencies” of certain models when using tools, and build around them. For example, the GPT family of models are fine-tuned to use a python code interpreter that operates in a Jupyter notebook. Even if told explicitly, it refuses to add `print()` around the last line, since, in Jupyter, this last line is automatically written to stdout. Other models don’t have this strong preference, so we’ve had to design our model-agnostic code interpreter to also automatically `print()` the last bare line.
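One way to emulate Jupyter's echo of the last bare expression is to parse the snippet and wrap a trailing expression statement in `print()`. This is a sketch of the general idea, not Onyx's actual interpreter code:

```python
# Sketch: make a plain Python runner behave like Jupyter for the final line.
# If the last statement is a bare expression, wrap it in print() so its value
# reaches stdout. Illustrative only.
import ast


def autoprint_last_expression(code: str) -> str:
    tree = ast.parse(code)
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        last = tree.body[-1]
        # Replace the trailing expression statement with print(<expression>).
        wrapped = ast.Expr(
            value=ast.Call(
                func=ast.Name(id="print", ctx=ast.Load()),
                args=[last.value],
                keywords=[],
            )
        )
        tree.body[-1] = ast.fix_missing_locations(wrapped)
    return ast.unparse(tree)
```

With this transform, `"x = 2\nx + 1"` becomes `"x = 2\nprint(x + 1)"`, so GPT-family models that refuse to add the `print()` themselves still produce visible output, while code that already ends in an assignment or a `print()` call is left unchanged in effect.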
So far, we’ve had a Fortune 100 team fork Onyx and provide 10k+ employees access to every model within a single interface, and create thousands of use-case specific Assistants for every department, each using the best model for the job. We’ve seen teams operating in sensitive industries completely airgap Onyx w/ locally hosted LLMs to provide a copilot that wouldn’t have been possible otherwise.
If you’d like to try Onyx out, follow https://docs.onyx.app/deployment/getting_started/quickstart to get set up locally w/ Docker in <15 minutes. For our Cloud: https://www.onyx.app/. If there’s anything you'd like to see to make it a no-brainer to replace your ChatGPT Enterprise/Claude Enterprise subscription, we’d love to hear it!