
Show HN: BrowserAI – Run LLMs directly in browser using WebGPU (open source)

21 comments · January 22, 2025

Check out this impressive project that enables running LLMs entirely in the browser using WebGPU.

Key features:

- Zero token costs, no cloud infrastructure required
- Complete data privacy through local processing
- Simple 3-line code integration (see the sketch below)
- Built on MLC and Transformers.js
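For a concrete picture, here is a hedged sketch of that three-line integration. The package name and the `loadModel`/`generateText` methods are assumptions based on the project's description, not a verified API; check the repo for the real interface.

```ts
// Hedged sketch of the advertised integration. loadModel/generateText and the
// package name are assumptions, not a verified API.
import { BrowserAI } from '@browserai/browserai';

const ai = new BrowserAI();
await ai.loadModel('llama-3.2-1b-instruct'); // fetches weights once, then runs on WebGPU
const reply = await ai.generateText('Summarize WebGPU in one sentence.');
console.log(reply);
```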

The benchmarks show smaller models can effectively handle many common tasks.

Currently the project roadmap includes:

- No-code AI pipeline builder
- Browser-based RAG for document chat
- Analytics/logging
- Model fine-tuning interface

Philpax

This is a wrapper around WebLLM [0] and transformers.js [1]. What exactly are you offering on top of those two libraries?

[0]: https://github.com/mlc-ai/web-llm
[1]: https://huggingface.co/docs/transformers.js/en/index
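For context, calling WebLLM directly already looks roughly like this (an OpenAI-style chat API; the model id and option names may differ across web-llm versions), which frames the question of what the wrapper adds:

```ts
// Direct WebLLM usage, OpenAI-style chat API. Model id illustrative.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
  initProgressCallback: (report) => console.log(report.text), // download/compile progress
});

const completion = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```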

sauravpanda

This is the start, so yes, at the current state we aren't offering much. If you check the GitHub repo, we don't use Transformers.js directly; we forked their code to TypeScript and removed the things (Node modules) that caused build issues in some frameworks like Next.js.

We are adding features like RAG and observability integrations so people can use these LLMs to perform more complicated tasks!

Matthyze

When I read the title, I thought the project would be an LLM browser plugin (or something of the sort) that would automatically use the current page as context. However, after viewing the GitHub project, it seems like a browser interface for local LLMs. Is my understanding correct? This is not my domain of expertise.

shreyash_gupta

Yes, it's currently a framework for running LLMs locally in the browser. A browser extension for page context is on our roadmap, but right now we're focused on optimizing multimodal LLMs to work efficiently in the browser environment, so that we can use them for a variety of use cases.

bazmattaz

This is great. If I were a developer, I would have two projects in mind for this:

1. Decline cookie notices automatically with a browser extension

2. Build a powerful autocorrect/complete browser extension to fix my poor typing skills
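As a hedged sketch, idea 2 might look something like the following extension content script; the BrowserAI import and methods are the same unverified assumptions as in the earlier sketch:

```ts
// content-script.ts -- hedged sketch for the autocorrect idea. The BrowserAI
// API below (loadModel/generateText) is an assumption, not a verified interface.
import { BrowserAI } from '@browserai/browserai';

const ai = new BrowserAI();
const ready = ai.loadModel('llama-3.2-1b-instruct'); // start loading once, up front

document.addEventListener('focusout', async (event) => {
  const el = event.target;
  if (!(el instanceof HTMLTextAreaElement) || !el.value.trim()) return;
  await ready;
  el.value = await ai.generateText(
    `Fix spelling and grammar. Return only the corrected text:\n\n${el.value}`
  );
});
```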

cloudking

sauravpanda

This seems like the perfect use case for BrowserAI! Would love to help. Want to give it a try in the browser? I can help fix any issues you run into, or just jump on a call!

sauravpanda

Haha, we are thinking about #2, and it makes sense. I would love for you to check it out!

hazelnut

How does it compare to WebLLM (https://github.com/mlc-ai/web-llm)?

sauravpanda

We use WebLLM under the hood for text-to-text generation; the model compression is awesome and RAM usage is lower. But we are conducting more experiments. One thing we noticed is that some MLC-quantized models sometimes start throwing gibberish, so we'll get back to you on which is better after more experiments.
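For reference, MLC's prebuilt model ids encode the quantization scheme in their suffix, so swapping out a misbehaving variant is a one-string change. A sketch, with illustrative ids that should be checked against the current prebuilt model list:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// The q4f16_1 / q4f32_1 suffixes denote 4-bit weights with f16 vs f32
// compute; if an f16 build produces gibberish, the f32 variant is a common
// fallback. Ids are illustrative -- verify against the prebuilt model list.
const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f32_1-MLC");
```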

janalsncm

I don’t see any encoders (BERT family) available yet. How will you do RAG, BM25/tf-idf?

sauravpanda

Oh yes. Because the library was so large, we decided to start by removing some things while porting. To be honest, trying to port the JS to TS was one of the bad decisions of my life, but luckily it only took 3 days and a few headaches!

Will add the encoders as needed; that should be easy now. But a great point.
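Until encoders land, a purely lexical fallback like the BM25 that janalsncm mentions needs no model at all. A minimal self-contained sketch:

```ts
// Minimal BM25 ranking in plain TypeScript -- the lexical fallback mentioned
// above, requiring no encoder model.
type Doc = { id: string; tokens: string[] };

const tokenize = (s: string): string[] => s.toLowerCase().match(/[a-z0-9]+/g) ?? [];

function bm25Scores(query: string, docs: Doc[], k1 = 1.5, b = 0.75) {
  const N = docs.length;
  const avgLen = docs.reduce((sum, d) => sum + d.tokens.length, 0) / N;

  // document frequency: how many docs contain each term
  const df = new Map<string, number>();
  for (const d of docs)
    for (const t of new Set(d.tokens)) df.set(t, (df.get(t) ?? 0) + 1);

  return docs.map((d) => {
    let score = 0;
    for (const term of new Set(tokenize(query))) {
      const n = df.get(term) ?? 0;
      if (n === 0) continue;
      const idf = Math.log(1 + (N - n + 0.5) / (n + 0.5));
      const tf = d.tokens.filter((t) => t === term).length;
      score += (idf * tf * (k1 + 1)) / (tf + k1 * (1 - b + (b * d.tokens.length) / avgLen));
    }
    return { id: d.id, score };
  });
}

// usage: rank two tiny "documents" against a query
const docs: Doc[] = [
  "WebGPU runs compute shaders in the browser",
  "BM25 is a lexical ranking function for retrieval",
].map((text, i) => ({ id: String(i), tokens: tokenize(text) }));

console.log(bm25Scores("lexical ranking", docs).sort((a, b) => b.score - a.score));
```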

3abiton

How do the performance and features compare to Pinokio?

oxyboy

Would it be good for language translation?

shreyash_gupta

Yes, you can perform language translation using the supported large language models.
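Concretely, translation is just a prompt against whatever local model is loaded. A minimal sketch using a WebLLM-style chat API (model id illustrative):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC");
const completion = await engine.chat.completions.create({
  messages: [
    { role: "system", content: "Translate the user's text into English. Output only the translation." },
    { role: "user", content: "Die Katze sitzt auf der Matte." },
  ],
  temperature: 0, // deterministic output suits translation
});
console.log(completion.choices[0].message.content); // e.g. "The cat is sitting on the mat."
```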

astlouis44

DeepSeek R1 just got ported to WebGPU as well! Exciting future for local web AI:

Thread - https://news.ycombinator.com/item?id=42795782

sauravpanda

Yes, we do plan to add it soon; we are focusing on something cool right now! Stay tuned!


slalani304

[flagged]

shreyash_gupta

Thank you! Would love to hear your feedback!