Show HN: Lightpanda, an open-source headless browser in Zig
137 comments · January 24, 2025
fbouvier
Author here. The browser is made from scratch (not based on Chromium/Webkit), in Zig, using v8 as a JS engine.
Our idea is to build a lightweight browser optimized for AI use cases like LLM training and agent workflows. And more generally any type of web automation.
It's a work in progress, there are hundreds of Web APIs, and for now we just support some of them (DOM, XHR, Fetch). So expect most websites to fail or crash. The plan is to increase coverage over time.
Happy to answer any questions.
bityard
Please put a priority on making it hard to abuse the web with your tool.
At a _bare_ minimum, that means obeying robots.txt and NOT crawling a site that doesn't want to be crawled. And there should not be an option to override that. It goes without saying that you should not allow users to make hundreds or thousands of "blind" parallel requests, as these tend to DoS sites hosted on modest hardware. You should also measure response times and throttle your requests accordingly. If a website issues a response code or other signal that you are hitting it too fast or too often, slow down.
I say this because since around the start of the new year, AI bots have been ravaging what's left of the open web and causing REAL stress and problems for admins of small and mid-sized websites and their human visitors: https://www.heise.de/en/news/AI-bots-paralyze-Linux-news-sit...
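None of this is exotic to implement, either. A rough sketch of the kind of defaults I mean, using the robots-parser npm package (the helper itself is hypothetical, not something Lightpanda ships):
```
import robotsParser from 'robots-parser'; // npm: robots-parser

// Hypothetical polite-crawling defaults: obey robots.txt, throttle,
// and back off when the server signals overload (429/503).
async function politeFetch(url, userAgent = 'example-bot', minDelayMs = 1000) {
  const robotsUrl = new URL('/robots.txt', url).href;
  const robots = robotsParser(robotsUrl, await (await fetch(robotsUrl)).text());
  if (!robots.isAllowed(url, userAgent)) return null; // site opted out: don't crawl

  const res = await fetch(url, { headers: { 'User-Agent': userAgent } });
  if (res.status === 429 || res.status === 503) {
    // Server says slow down: honor Retry-After (seconds), or wait a minute.
    const waitMs = Number(res.headers.get('retry-after') ?? 60) * 1000;
    await new Promise((r) => setTimeout(r, waitMs));
  }
  await new Promise((r) => setTimeout(r, minDelayMs)); // floor between requests
  return res;
}
```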
hombre_fatal
This is HN virtue signaling. Some fringe tool that ~nobody uses is held to a different, weird standard and must be the one to kneecap itself with a pointless gesture and a fake ethical burden.
The comparison to DRM makes sense. Gimping software to disempower the end user based on the desires of content publishers. There's even probably a valid syllogism that could make you bite the bullet on browsers forcing you to render ads.
gkbrk
Please don't.
Software I install on my computer needs to do what I want as the user. I don't want every random thing I install to come with DRM.
The project looks useful, and if it ends up getting popular I imagine someone would make a DRM-free version anyway.
tossandthrow
Where do you read DRM?
Parent commenter merely and humbly asks the author of the library to make sure that it has sane defaults and support for ethical crawling.
I find it disturbing that you would recommend against that.
GuB-42
Who told you about DRM? It is an open source tool.
Simply requiring a code change and a rebuild is enough of a barrier to prevent rude behavior from most people. You won't stop competent malicious actors, but you can at least encourage good behavior. If popular, someone will make a fork, but having the original refuse to do things that are deemed abusive sends a message.
It is like for the Flipper Zero. The original version does not let you access frequency bands that are illegal in some countries, and anything involving jamming is highly frowned upon. Of course, there are forks that let you do these things, but the simple fact that you need to go out of your way to find these should tell you it is not a good idea.
bityard
I feel like you may have a misunderstanding of what DRM is. Talking about DRM outside the context of media distribution doesn't really make any sense.
Yes, someone can fork this and modify it however they want. They can already do the same with curl, Firefox, Chromium, etc. The point is that this project is deliberately advertising itself as an AI-friendly web scraper. If successful, lots of people who don't know any better are going to download it and deploy it without fully understanding (and possibly without caring about) the consequences for the open web. And as I already pointed out, this is not hypothetical; it is already happening. Right now. As we speak.
Do you want cloudflare everywhere? This is how you get cloudflare everywhere.
My plea for the dev is that they choose to take the high road and put web-server-friendly SANE DEFAULTS in place to curb the bulk of abusive web scraping behavior to lessen the number of gray hairs it causes web admins like myself. That is all.
calvinmorrison
I still won't forgive libtorrent for not implementing sequential access.
And also xpdf, for implementing the "you can't select text" feature.
mpalmer
If it's already a problem, nothing this developer does will improve it, including crippling their software and removing arguably legitimate use cases.
MichaelMoser123
That would make it impossible to use this as a testing tool. How should automatic testing of web applications work if you obey all of these rules? There is also the problem of load testing. This kind of stuff is by its nature dual use; a load test is also a kind of DDoS attack.
benatkin
Make it faster and furiouser.
There are so many variables involved that it’s hard to predict what it will mean for the open web to have a faster alternative to headless Chrome. At least it isn’t controlled by Google directly or indirectly (Mozilla’s funding source) or Apple.
internet_points
Yes! Having done some minor web scraping a long time ago, I did not put any work at all into following robots.txt, simply because it seemed like a hassle and I thought "meh, it's not that much traffic, is it, and boss wants this done yesterday". But if the tool had defaulted to following robots.txt I certainly wouldn't have minded; it would have given me less noise and made my tool behave better.
Also, throttling requests and following robots.txt actually makes it less likely that your scraper will be blocked, so even for those who don't care about the ethics, it's a good thing to have ethical defaults.
andrethegiant
This is why I’m making crawlspace.dev, a crawling PaaS that respects robots.txt, implements proper caching, etc., by default.
cchance
It's literally open source, any effort put into hamstringing it would just be forked and removed lol
xena
Any barrier to abuse makes abuse harder.
bsnnkv
Looking at the responses here, I'm glad I just chose to paywall to protect against LLM training data collection crawling abuse.[1]
[1]: https://lgug2z.com/articles/in-the-age-of-ai-crawlers-i-have...
JoelEinbinder
When I've talked to people running this kind of AI scraping/agent workflow, the costs of the AI parts dwarf those of the web browser parts. This makes the computational cost of the browser irrelevant. I'm curious what situation you got yourself into where optimizing the browser results in meaningful savings. I'd also like to be in that place!
I think your RAM usage benchmark is deceptive. I'd expect a minimal browser to have much lower peak memory usage than Chrome on a minimal website. But it should even out or get worse as the websites get richer. The nature of web scraping is that the worst sites take up the vast majority of your CPU cycles. I don't think lowering the RAM usage of the browser process will have much real-world impact.
fbouvier
The cost of the browser part is still a problem. In our previous startup, we were scraping >20 million webpages per day, with thousands of instances of headless Chrome in parallel.
Regarding the RAM usage, it's still ~10x better than Chrome :) It seems to come mostly from v8; I guess we could do better with a lightweight JS engine alternative.
radium3d
As a web developer and server manager, AI trainers scraping websites with no throttle are the problem. lol
cush
> there are hundreds of Web APIs, and for now we just support some of them (DOM, XHR, Fetch)
> it's still ~10x better than Chrome
Do you expect it to stay that way once you've reached parity?
nwienert
Playwright can run webkit very easily and it's dramatically less resource-intensive than Chrome.
Tostino
You may reduce RAM, but also performance. A good JIT costs RAM.
szundi
Then came deepseek
refulgentis
Generally, for consumer use cases, it's best to A) do it locally, preserving some of the original web contract B) run JS to get actual content C) post-process to reduce inference cost D) get latency as low as possible
Then, as the article points out, the Big Guns making the LLMs are a big use case for this because they get a 10x speedup and can begin contemplating running JS.
It sounds like the people you've talked to are in a messy middle: no incentive to improve efficiency of loading pages, simply because there's something else in the system that has a fixed cost to it.
I'm not sure why that would rule out improving anything else, it doesn't seem they should be stuck doing nothing other than flailing around for cheaper LLM inference.
> I think your RAM usage benchmark is deceptive. I'd expect a minimal browser to have much lower peak memory usage than Chrome on a minimal website.
I'm a bit lost; the RAM usage benchmark says it's ~10x less, and you feel it's deceptive because you'd expect RAM usage to be less? Steelmanning: 10% of Chrome's usage is still too high?
JoelEinbinder
The benchmark shows lower RAM usage on a very simple demo website. I expect that if the benchmark ran on a random set of real websites, RAM usage would not be meaningfully lower than Chrome's. Happy to be impressed and wrong if it remains lower.
danielsht
Very impressive! At Airtop.ai we looked into lightweight browsers like this one, since we run a huge fleet of cloud browsers, but found that anything other than a non-headless Chromium-based browser would trigger bot detection pretty quickly. Even spoofing user agents triggers bot detection, because fingerprinting tools like FingerprintJS use things like JS features, canvas fingerprinting, WebGL fingerprinting, font enumeration, etc.
Can you share if you've looked into how your browser fares against bot detection tools like these?
fbouvier
Thanks! No, we haven't worked on bot detection.
sesm
Great job! And good luck on your journey!
One question: which JS engines did you consider and why you chose V8 in the end?
fbouvier
We also considered JavaScriptCore (used by Bun) and QuickJS. We chose v8 because it's state of the art, quite well documented, and easy to embed.
The code is made to support other JS engines in the future. We do want to add a lightweight alternative like QuickJS or Kiesel https://kiesel.dev/
ksec
Thank you. I was thinking of JSC and Bun as well. I was half expecting JSC, since that combination seems to work well.
keepamovin
If you support Page.startScreencast, or even just screenshot capture, we could experiment with using this as a backend for BrowserBox when Lightpanda matures. Cool stuff!
returnofzyx
Hi. Can I embed this as a library? Is there a C API exposed? I can't seem to find any documentation. I'd prefer this to a CDP server.
fbouvier
Not now but we might do it in the future. It's easy to export a Zig project as a C ABI library.
returnofzyx
Oh please do. I'm sure there are many people like me who want this.
afk1914
I am curious how Lightpanda compares to chrome-headless-shell ({headless: 'shell'} in Puppeteer) in benchmarks.
fbouvier
We did not run benchmarks with chrome-headless-shell (aka the old headless mode), but I'd guess that performance-wise it's on the same scale as the new headless mode.
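If you want to run the comparison yourself, here's a sketch (assuming Puppeteer v22+, where `headless: 'shell'` selects chrome-headless-shell, and a local Lightpanda listening on port 9222):
```
import puppeteer from 'puppeteer';

// chrome-headless-shell (the old headless mode) in Puppeteer v22+
const shell = await puppeteer.launch({ headless: 'shell' });

// Lightpanda speaks CDP, so the same script can connect to it instead
const panda = await puppeteer.connect({
  browserWSEndpoint: 'ws://127.0.0.1:9222',
});

for (const browser of [shell, panda]) {
  console.time('load');
  const page = await browser.newPage();
  await page.goto('https://wikipedia.com/');
  console.timeEnd('load');
}
```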
toobulkeh
I’d love to see better-optimized WebSocket support and “save” features that cache LLM queries to optimize fallback.
psanchez
I think this is a really cool project. Scraping aside, I would definitely use this with Playwright for end2end tests if it had 100% compatibility with Chrome and ran in a fraction of the time/memory.
At my company we have a small project where we are running the equivalent of 6.5 hours of end2end tests daily using Playwright. Running the tests in parallel takes around half an hour. Your project is still in very early stages, but assuming 10x speed, that would mean we could pass all our tests in roughly 3 min (best case scenario).
That being said, I would make use of your browser, but would likely not make use of your business offering (our tests require internal VPN, have some custom solution for reporting, would be a lot of work to change for little savings; we run all tests currently in spot/preemptible instances which are already 80% cheaper).
Business-wise I found very little info on your website. "4x the efficiency at half the cost" is a good catchphrase, but compared to what? I mean, you can have servers in Hetzner or in AWS and one is already a fraction of the cost of the other. How convenient is it to launch things on your remote platform vs launching them locally or setting it up yourself? Does it provide any advantages for web scraping compared to other solutions? How parallelizable is it? Do you have any paying customers already?
Supercool tech project. Best of luck!
fbouvier
Thank you! Happy if you use it for your e2e tests on your servers, it's an open-source project!
Of course it's quite easy to spin up a local instance of a headless browser for occasional use. But running a production platform is another story (monitoring, maintenance, security and isolation, scalability), so there are business use cases for a managed version.
frankgrecojr
The hello world example does not work. In fact, no website I've tried works. It usually panics. For the example in the readme, the errors are:
```
./lightpanda-aarch64-macos --host 127.0.0.1 --port 9222
info(websocket): starting blocking worker to listen on 127.0.0.1:9222
info(server): accepting new conn...
info(server): client connected
info(browser): GET https://wikipedia.com/ 200
info(browser): fetch https://wikipedia.com/portal/wikipedia.org/assets/js/index-2...: http.Status.ok
info(browser): eval script portal/wikipedia.org/assets/js/index-24c3e2ca18.js: ReferenceError: location is not defined
info(browser): fetch https://wikipedia.com/portal/wikipedia.org/assets/js/gt-ie9-...: http.Status.ok
error(events): event handler error: error.JSExecCallback
info(events): event handler error try catch: TypeError: Cannot read properties of undefined (reading 'length')
info(server): close cmd, closing conn...
info(server): accepting new conn...
thread 5274880 panic: attempt to use null value
zsh: abort ./lightpanda-aarch64-macos --host 127.0.0.1 --port 9222
```
krichprollsch
Lightpanda co-author here.
Thanks for opening the issue in the repo. To be clear, the crash seems related to a socket disconnection issue in our CDP server.
> info(events): event handler error try catch: TypeError: Cannot read properties of undefined (reading 'length')
This message relates to the execution of gt-ie9-ce3fe8e88d.js. It's not the origin of the crash.
I have to dig in, but it could be due to a missing web API.
lbotos
Not OP -- do you have some kind of proxy or firewall?
Looks like you couldn't download https://wikipedia.com/portal/wikipedia.org/assets/js/gt-ie9-... for some reason.
In my contributions to Joplin's S3 backend, "Cannot read properties of undefined (reading 'length')" usually meant trying to access an object that wasn't instantiated. (Can't figure out the length of <undefined>.)
So for some reason it seems you can't execute JS?
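For reference, the failure mode in plain JS:
```
const assets = undefined; // e.g. a fetch/parse step that silently failed
assets.length;            // TypeError: Cannot read properties of undefined (reading 'length')
```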
zelcon
That's Zig for you. A "modern" systems programming language with no borrow checker or even RAII.
hansvm
Those statements are mostly true and also worth talking about, but they're not pertinent to that error (remotely provided JS not behaving correctly), or the eventual crash (which you'd cause exactly the same way for the same reason in Rust with a .unwrap() call).
IshKebab
Not exactly the same. `.unwrap()` will never lead to UB, but this can in Zig in release mode.
Also `unwrap()`s are a lot more obvious than just a ?. Dangerous operations should require more ceremony than safe ones. Surprising to see Zig make such a mistake.
jbggs
You shouldn't be unwrapping; error cases should be properly handled. Users shouldn't see null dereference errors without any context, even in CLI tools...
igorguerrero
You could build the same thing in Rust and have the same exact issue.
audunw
If that kind of stuff were always preferable, then nobody would use C over C++, yet to this day many projects still do. Borrow checking isn’t free. It’s a trade-off.
I mean, you could say Rust isn’t a modern language because it doesn’t use garbage collection. But that’s a nonsensical statement. Different languages serve different purposes.
Besides, Zig is focusing a lot more on heavily integrating testing, debug modes, fuzzing, etc. in the compiler itself, which when put together will catch almost all of the bugs a borrow checker catches, but also a whole ton of other classes of bugs that Rust doesn’t have compile time checks for.
I would probably still pick Rust in cases where it’s absolutely critical to avoid bugs that compromise security.
But this project isn’t that kind of project. I’d imagine that the super fast compile times and rapid iteration that Zig provides are much more useful here.
steeve
That has absolutely nothing to do with RAII or safety…
dang
(This was on the frontpage as https://news.ycombinator.com/item?id=42812859 but someone pointed out to me that it had been a Show HN a few weeks ago: https://news.ycombinator.com/item?id=42430629, so I've made a fresh copy of that submission and moved the comments hither. I hope that's ok with everyone!)
weinzierl
If I don't need JavaScript or any interactivity, just modern HTML + modern CSS, is there any modern lightweight renderer to png or svg?
Something in the spirit of wkhtmltoimage or WeasyPrint that does not require a full blown browser but more modern with support of recent HTML and CSS?
In a sense this is Lightpanda's complement to a "full panda". Just the fully rendered DOM to pixels.
nicoburns
We're working on this here: https://github.com/DioxusLabs/blitz See the "screenshot" example for rendering to png. There's no SVG backend currently, but one could be added.
(proper announcement of project coming soon)
cropcirclbureau
Pretty cool. Do you have a list of features you plan to support and plan to cut? Also, how much does this differ from the DOM impls that test frameworks use? I recall Jest or someone sporting such a feature.
fbouvier
The most important "feature" is to increase our Web API coverage :)
But of course we plan to add other features, including
- tight integration with LLM
- embed mode (as a C library and as a WASM module) so you can add a real browser to your project the same way you add libcurl
andrethegiant
Could it potentially fit in a Cloudflare worker? Workers are also V8 and can run wasm, but are constrained to 128MB RAM and 10MB zipped bundle size
fbouvier
WASM support is not there yet, but it's on the roadmap; we've had it in mind since the beginning of the project and have made our dev choices accordingly.
So yes, it could be used on a serverless platform like Cloudflare Workers. Our startup time is a huge advantage here (20ms vs 600ms for Chrome headless in our local tests).
Regarding v8 in Cloudflare Workers, I think it can't be used directly, i.e. we would still need to embed a JS engine in the WASM module.
gwittel
Interesting. Looks really neat! How do you deal with anti-bot stuff like FingerprintJS, Cloudflare Turnstile, etc.? Maybe you’re new enough to not get flagged, but I find this (and CDP) a challenge at times with these anti-bot systems.
zlagen
what do you think would be the use cases for this project? being lightweight is awesome but usually you need a real browser for most use cases. Testing sites and scraping for example. It may work for some scraping use cases but I think that if the site uses any kind of bot blocking this is not going to cut it.
fbouvier
There are a lot of use cases:
- LLM training (RAG, fine tuning)
- AI agents
- scraping
- SERP
- testing
- any kind of web automation basically
Bot protection might of course be a problem, but it also depends on the volume of requests, IPs, and other parameters.
AI agents will do more and more actions on behalf of humans in the future, and I believe bot protection mechanisms will evolve to include them as legit.
zlagen
thanks, it doesn't seem like that's the direction things are going at the moment. If you look at the robots.txt of many websites, they are actually banning AI bots from crawling the site. To me it seems more likely that each site will have its own AI agent to perform operations, but controlled by the site.
evanjrowley
I'm interested to see if this could be made to work as a drop-in replacement for the headless Chromium that Hoarder uses to archive web content. I don't have a problem with the current Hoarder solution, but it would be nice to use something that requires less RAM.
Kathc
An open-source browser built from scratch is bold. What inspired the development of Lightpanda?
katiehallett
Thanks! The three of us worked together at our former company - an ecomm SaaS startup where we spent a ton of $ on scraping infrastructure spinning up headless Chrome instances.
It started out as more of an R&D thesis: is it possible to strip the graphical rendering out of Chrome headless? Turns out no - so we tried to build it from scratch. And the beta results validated the thesis.
I wrote a whole thing about it here if you're interested in delving deeper https://substack.thewebscraping.club/p/rethinking-the-web-br...
corford
Not sure what category of ecomm sites you were scraping, but I scrape >10 million ecomm URLs daily and, honestly, in my experience the compute is not a major issue (8 times out of 10 you can either use API endpoints and/or session stuffing to avoid needing a browser for every request; and in the 2 out of 10 sites where you really need a browser for all requests, it's usually to circumvent aggressive anti-bot, which means you're very likely going to need full Chrome or FF anyway - and you can parallelise quite effectively across tabs).
One niche where I could definitely see a use for this, though, is scraping terribly coded sites that need some JS execution to safely get the data you want (e.g. they do some bonkers client-side calculations that you don't want to reverse engineer). It would be nice to not pay the perf tax of Chrome in these cases.
Having said all of that, I have to say from a geek perspective it's super neat what you guys are hacking on! Zig+V8+CDP bindings is very cool.
hansvm
> not pay the perf tax
I've typically used pyminiracer in such cases and provided some dummy window objects and whatnot as necessary for the script to succeed.
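The same idea with Node's built-in vm module instead of pyminiracer (a toy sketch; in practice the dummy window is whatever the page script happens to touch):
```
import vm from 'node:vm';

// Stand-in for a page script doing some bonkers client-side calculation
const siteScript = 'result = window.basePrice * window.taxRate;';

// Provide just enough of a dummy window for the script to succeed
const context = vm.createContext({
  window: { basePrice: 100, taxRate: 1.5 },
  result: null,
});
vm.runInContext(siteScript, context);
console.log(context.result); // 150
```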
zlagen
fully agree here, using a browser for everything is the dumb way. You just usually use it to circumvent the blocking and then reuse the cookies to call the endpoints directly.
dolmen
Scraping modern web pages is hard without full support for JS frameworks and dynamic loading. But a full browser, even headless, has huge resource consumption, which carries a huge cost when scraping at scale.
zelcon
Why didn't you just fork Chromium and strip out the renderer? This is guaranteed to bitrot when the web standards change unless you keep up with it forever and have perpetual funding. Yes, modifying Chromium is hard, but this seems harder.
fbouvier
It was my first idea. Forking Chromium has obvious advantages (compatibility). But it's not architected for that. The renderer is everywhere. I'm not saying it's impossible, just that it looked more difficult to me than starting over.
And starting from scratch has other benefits. We own the codebase, and thus it's easier for us to add new features like LLM integrations. Plus reducing binary size and startup time, which is mandatory for embedding it (as a WASM module or as a C lib).
oever
The Chromium/Webkit renderer used to have multiple rendering backends. You might use or add a no-op backend.
cxr
> modifying Chromium is hard, but this seems harder
Prove it.
We’re Francis and Pierre, and we're excited to share Lightpanda (https://lightpanda.io), an open-source headless browser we’ve been building for the past 2 years from scratch in Zig (not dependent on Chromium or Firefox). It’s a faster and lighter alternative for headless operations without any graphical rendering.
Why start over? We’ve worked a lot with Chrome headless at our previous company, scraping millions of web pages per day. While it’s powerful, it’s also heavy on CPU and memory usage. For scraping at scale, building AI agents, or automating websites, the overheads are high. So we asked ourselves: what if we built a browser that only did what’s absolutely necessary for headless automation?
Our browser is made of the following main components:
- an HTTP loader
- an HTML parser and DOM tree (based on Netsurf libs)
- a Javascript runtime (v8)
- partial web APIs support (currently DOM and XHR/Fetch)
- and a CDP (Chrome DevTools Protocol) server to allow plug & play connection with existing scripts (Puppeteer, Playwright, etc.) - see the connection sketch below.
The main idea is to avoid any graphical rendering and just work with data manipulation, which in our experience covers a wide range of headless use cases (excluding some, like screenshot generation).
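For example, pointing an existing Puppeteer script at Lightpanda looks roughly like this (a sketch, assuming a local instance started as in the thread above and that the CDP endpoint accepts a plain WebSocket URL):
```
import puppeteer from 'puppeteer-core';

// Connect to a Lightpanda instance started locally with:
//   ./lightpanda-aarch64-macos --host 127.0.0.1 --port 9222
const browser = await puppeteer.connect({
  browserWSEndpoint: 'ws://127.0.0.1:9222',
});

const page = await browser.newPage();
await page.goto('https://wikipedia.com/');
console.log(await page.evaluate(() => document.title));

await browser.disconnect();
```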
In our current test case Lightpanda is roughly 10x faster than Chrome headless while using 10x less memory.
It's a work in progress, there are hundreds of Web APIs, and for now we just support some of them. It's a beta version, so expect most websites to fail or crash. The plan is to increase coverage over time.
We chose Zig for its seamless integration with C libs and its comptime feature that allows us to generate bi-directional Native-to-JS APIs (see our zig-js-runtime lib https://github.com/lightpanda-io/zig-js-runtime). And of course for its performance :)
As a company, our business model is based on a Managed Cloud, browser as a service. Currently, this is primarily powered by Chrome, but as we integrate more web APIs it will gradually transition to Lightpanda.
We would love to hear your thoughts and feedback. Where should we focus our efforts next to support your use cases?