
Launch HN: Tweeks (YC W25) – Browser extension to de-enshittify the web


62 comments · November 13, 2025

Hey HN! We’re Jason & Matt and we’re building Tweeks (https://tweeks.io), a browser extension that lets you modify any website in your browser to add functionality, filter/highlight, re-theme, reorganize, de-clutter, etc. If you’ve used Violentmonkey/Tampermonkey, Tweeks is like a next‑generation userscript manager. Instead of digging through selectors and hand‑writing custom JS/CSS, describe what you want in natural language and Tweeks plans + generates your edits and applies them.

The modern web is so full of clutter and junk (banners, modals, feeds, and recommendations you didn’t ask for). Even a simple Google search is guarded by multiple ads, an AI overview, a trending searches module, etc. before you even see the first real blue link.

Every day there's a new Lovable-like product (make it simple to build your own website/app) or a new agentic browser (AI agents click around and browse the web for you), but we built Tweeks to serve the middle ground: most of our time spent on the web is on someone else's site (not our own), and we don't want to offload everything to an agentic browser. We want to be able to shape the entire web to our own preferences as we browse.

I spent years working on recommendation systems and relevance at Pinterest, and understand how well-meaning recommendations and A/B tests can lead to website enshittification. No one sets out to make UX worse, but optimizing for an “average” user is not the same as optimizing for each individual user.

I’ve also been hacking “page fixers” for as long as I can remember: remove a login wall here, collapse cookie banners there, add missing filters/highlights (first with F12/inspect element, eventually graduating to advanced Greasemonkey userscripts). Tweeks started as a weekend prototype that turned simple requests into page edits but unexpectedly grew into something people kept asking to share. We hope you’ll like it too!

How it works: Open the Tweeks extension, type your request (e.g. “hide cookie banners and add a price/quality score”), and submit. Upon submission, the page structure is captured, an AI agent reviews the structure, plans changes, and returns deterministic transformations (selectors, layout tweaks, styles, and small scripts) that run locally. Your modifications persist across page loads and can be enabled/disabled, modified, and shared.
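To make “deterministic transformations” concrete, here’s a hand-written sketch of the kind of script a request like “hide cookie banners” compiles down to (the selectors are hypothetical, and this isn’t our literal output format):

    // Hand-written sketch only -- selectors are hypothetical.
    (function () {
      const SELECTORS = [
        '[id*="cookie-banner"]',
        '[class*="consent"] [role="dialog"]',
      ];

      function hideMatches() {
        for (const sel of SELECTORS) {
          document.querySelectorAll(sel).forEach((el) => {
            el.style.display = 'none'; // deterministic local change, no network calls
          });
        }
      }

      hideMatches();
      // Re-apply when the page re-renders (SPAs, lazy-loaded banners).
      new MutationObserver(hideMatches).observe(document.documentElement, {
        childList: true,
        subtree: true,
      });
    })();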

Here are a bunch of one‑shot examples from early users:

YouTube: Remove YouTube Shorts. Demo: http://youtube.com/watch?v=aL7i89BdO9o. Try it yourself: http://tweeks.io/share/script/bcd8bc32b8034b79a78a8564

Hacker News: Filter posts by title/url or points/comments, modify header and text size. Demo: http://youtube.com/watch?v=cD5Ei8bMmUk. Try it yourself: http://tweeks.io/share/script/97e72c6de5c14906a1351abd (filter), http://tweeks.io/share/script/6f51f96c877a4998bda8e781 (header + text).

LinkedIn: Keep track of cool people (extracts author data and sends a POST request to a server). Demo: http://youtube.com/watch?v=WDO4DRXQoTU

Reddit: Remove sidebar and add a countdown timer that shows a blocking modal when time is up. Demo: http://youtube.com/watch?v=kBIkQ9j_u94. Try it yourself: http://tweeks.io/share/script/e1daa0c5edd441dca5a150c8 (sidebar), http://tweeks.io/share/script/c321c9b6018a4221bd06fdab (timer).

New York Times Games: Add a Strands helper that finds all possible words. Demo: http://youtube.com/watch?v=hJ75jSATg3Q. Try it yourself: http://tweeks.io/share/script/7a955c910812467eaa36f569

Theming: Retheme Google as a 1970s CLI terminal. Demo: http://youtube.com/shorts/V-CG5CbYJb4 (oops, sorry, a YouTube Short snuck back in there). Try it yourself: http://tweeks.io/share/script/8c8c0953f6984163922c4da7.

We just opened access at https://tweeks.io. It’s currently free, but each use costs tokens so we'll likely need to cap usage to prevent abuse. We're more interested in early feedback than your money, so if you manage to hit the cap, message us at contact@trynextbyte.com or https://discord.gg/WucN6wpJw2, tell us how you're using it/what features you want next, and we'll happily reset it for you.

Btw if you do anything interesting with it, feel free to make a shareable link (go to ‘Library’ and press ‘share’ after generating) and include it in the comments below. It’s fun to see the different things people are coming up with!

We're rapidly shipping improvements and would love your feedback and comments. Thanks for reading!

gnarlouse

I don’t understand why this needs to be a y combinator project. Does the LLM prompt funnel my data out of the browser to Tweeks affiliates? Shouldn’t this just be an open source project?

rohansood15

I agree that it should be open-source, but I think it can still be a YC company. Improving the user experience on the web is definitely a billion-dollar market.

freshtake

This looks cool and could be a much-needed step towards fixing the web.

Some questions:

[Tech]

1. How deep does the modification go? If I request a tweek to the YouTube homepage, do I need to re-specify or reload the tweek to have it persist across the entire site (deeply nested pages, iframes, etc.)

2. What is your test and eval setup? How confident are you that the model is performing the requested change without being overly aggressive and eliminating important content?

3. What is your upkeep strategy? How will you ensure that your system continues to WAI after site owners update their content in potentially adversarial ways? In my experience LLMs do a fairly poor job at website understanding when the original author is intentionally trying to mess with the model, or has overly complex CSS and JS.

4. Can I prompt changes that I want to see globally applied across all sites (or a category of sites)? For example, I may want a persistent toolbar for quick actions across all pages -- essentially becoming a generic extension builder.

[Privacy]

5. Where and how are results being cached? For example, if I apply tweeks to a banking website, what content is being scraped and sent to an LLM? When I reload a site, is content being pulled purely from a local cache on my machine?

[Business]

6. Is this (or will it be) open source? IMO a large component of empowering the user against enshittification is open source. As compute commoditizes it will likely be open source that is the best hope for protection against the overlords.

7. What is your revenue model? If your product essentially wrestles control from site owners and reduces their optionality for revenue, your arbitrage is likely to be equal or less than the sum of site owners' loss (a potentially massive amount to be sure). It's unclear to me how you'd capture this value though, if open source.

8. Interested in the cost and latency. If this essentially requires an LLM call for every website I visit, this will start to add up. Also curious if this means that my cost will scale with the efficiency of the sites I visit (i.e. do my costs scale with the size of the site's content).

Very cool.

Cheers

jmadeano

> 1. How deep does the modification go? If I request a tweek to the YouTube homepage, do I need to re-specify or reload the tweek to have it persist across the entire site (deeply nested pages, iframes, etc.)

If you're familiar with Greasemonkey, we work similarly to the @match metadata. A given script can target a specific page (https://www.youtube.com/watch?v=cD5Ei8bMmUk), all videos (https://www.youtube.com/watch*), all of YouTube (https://www.youtube.com/*), or all domains (https:///). During generation, we try to infer your intent from your request (and you can also manually override with a dropdown).
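In classic userscript terms, the scoping is the same idea as a Tampermonkey-style @match header; a sketch (not our literal internal format):

    // ==UserScript==
    // @name   Scope example: all YouTube watch pages
    // @match  https://www.youtube.com/watch*
    // ==/UserScript==
    //
    // Other scope levels, narrowest to widest:
    //   https://www.youtube.com/watch?v=cD5Ei8bMmUk   (one exact page)
    //   https://www.youtube.com/*                     (all of YouTube)
    //   https://*/*                                   (every site)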

> 2. What is your test and eval setup? How confident are you that the model is performing the requested change without being overly aggressive and eliminating important content?

Oh boy, don't get me started. We have not found a way to automate eval yet. We can automate "is there an error?", "does it target the right selectors?", etc. But the requests are open-ended, so there are 1M "correct" answers. We have a growing set of "tough" requests, and when we are shipping a major change, we sit down, generate them all, and click through and manually check pass/fail. We built tooling around this so it is actually pretty quick, but we're definitely thinking about better automation.

This is also where more users come in. Hopefully you complain to us if it doesn't work and we get a better sense of what to improve!

> 3. What is your upkeep strategy? How will you ensure that your system continues to WAI after site owners update their content in potentially adversarial ways? In my experience LLMs do a fairly poor job at website understanding when the original author is intentionally trying to mess with the model, or has overly complex CSS and JS.

Great question. The good news is that there are things like aria labels that are pretty consistent. If the model picks the right selectors, it can be pretty robust to change. Beyond that, hopefully it is as easy as one update request ("this script doesn't work anymore, please update the selectors"). Though we can't really expect each user to do that, so we are thinking of an update system where e.g. if you install/copy script A, and then the original script A is updated, you can pull that new update. The final stage of this is an intelligent system where the script heals itself (every so often, it assesses the site, sees if selectors have changed, and fixes itself), but that is more long-term.
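As a sketch of why aria labels help (both selectors below are hypothetical), a script that prefers stable accessibility attributes over auto-generated class names survives redesigns much better:

    // Prefer stable accessibility attributes; fall back to the
    // brittle generated class name only if needed. Hypothetical selectors.
    function findDismissButton(root = document) {
      return (
        root.querySelector('button[aria-label="Dismiss"]') || // stable across redesigns
        root.querySelector('button.xq3k9')                    // generated class, breaks often
      );
    }

    const btn = findDismissButton();
    if (btn) btn.click();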

> 4. Can I prompt changes that I want to see globally applied across all sites (or a category of sites)? For example, I may want a persistent toolbar for quick actions across all pages -- essentially becoming a generic extension builder.

Yes, if the domain is https:/// it applies to all sites, so you can think of this as a meta-extension builder. E.g. I have a timer script that applies across reddit, linkedin, twitter, etc. and keeps me focused.

> 5. Where and how are results being cached? For example, if I apply tweeks to a banking website, what content is being scraped and sent to an LLM? When I reload a site, is content being pulled purely from a local cache on my machine?

There is a distinction. When you generate a tweek, the page is captured and sent to an LLM. There is no way around this. You can't generate a modification for a site you cannot see.

The result of a generation is a static script that applies to the page across reloads (unless you disable it). When you apply a tweek, everything is local; there is no dynamic server communication.

Hopefully that is all helpful! I need to get to the other replies, but I will try to return and finish up your business questions (those are the most boring anyway).

glenstein

Looks great, and a brilliant idea to bring back the Greasemonkey way of doing things. Also, perhaps the first practical use case for LLM-in-the-browser I've seen in the wild (sidebars or AI startpages are very half-posterior'd ideas for what AI in the browser should mean, imo).

Like some others here, Firefox is my daily driver, and I'd look forward to anything you could bring our way.

jmadeano

Thanks! I've tried my share of agentic browsers, sidebars, etc. Most of them don't work that well, and even as they get better, I am just generally not sold on the vision. Sure, there is some amount of "chores" I need to do on the web that I wouldn't mind automating/offloading, but I also genuinely enjoy browsing the web. I don't want a future where AI agents do all the browsing for us.

So we built this to hopefully make browsing the web more enjoyable for us humans that remain :)

And I'm with you on Firefox. I'd love to be able to go back to Firefox as my daily driver. Will try to prioritize it!

charlesabarnes

It's a great idea, but I'm cautious to install this because I don't see how you'd monetize it for the long haul. I'd love to hear your thoughts on local models vs something hosted for this.

jmadeano

I'm a big fan of local myself, but unfortunately the local models aren't there yet. Even among the closed-source models, many surprisingly struggle with relatively simple requests in this domain.

Don't get me wrong, there are a lot more iterations of tool + prompt + agent-flow updates we can and will do to make things even better, and the models will keep getting better themselves, but the task is non-trivial. If you download the raw HTML of a webpage, it's a messy jungle, and it's frankly impressive that the models are capable of doing anything useful with it.

gnarlouse

Don’t let your board sell a free version where the reclaimed screen real estate is converted into ads.

smashah

Awesome! I love any project that re-empowers users, ToS be damned. Regreatify the Web & Godspeed!

nidegen

Gotta call it deshittify

bradly

Where is your privacy policy and terms of service? I do not see either on your site.

jmadeano

Oh great point! We do have the privacy policy included directly on the site but I cut out a lot of the onboarding content if you don't have the extension installed. Working on it now!

Edit: The site is an entangled mess of state machine and I don't want to break anything right now (+ I'm trying to keep up with all the comments + traffic) so I can just put it here for now: https://www.tweeks.io/privacy

We care a lot about privacy and tried to keep everything as minimal as possible. Definitely open to feedback here!

bradly

> Definitely open to feedback here!

Sure.

Know your audience. HN users are going to be focused on two things: how your browser data is used and how you stop an agent from capturing account numbers, inputted passwords, etc.

From the linked privacy policy:

   > Share data with third parties except our API service
It would be helpful for you to share the privacy policy of the API service as well.

   > When you use our script generation functionality:
   >    Generated Code: We retain rights to use, modify, distribute, and commercialize any scripts generated by our service
   >    Sharing Rights: Generated scripts may be used to improve our services, shared as examples, or incorporated into our script library
Anything you make is or can become public. I would revisit this decision and prioritize keeping users' data private.

Also, I would encourage you to understand your technology, even your marketing site, well enough to add a link to the Privacy Policy and ToS in the footer without the burden of "an entangled mess of state machine" and the risk of breaking anything. If the marketing site technology is outside the scope of your expertise, consider how much worse a static page would be.

jmadeano

> It would be helpful for you to share the privacy policy of the API service as well.

We have standard data processing agreements with any and all LLM providers that we use. These include do not train/retain provisions (whether you trust them is another question entirely).

> Anything you make is or can become public. I would revisit this decision and prioritize keeping users' data private.

Totally valid. We haven't acted on this clause (scripts are not shared unless you yourself enable sharing), so it's probably best to remove it. To be clear though, your page data is your own. That will never be shared (not even you yourself can opt to share it, because the privacy concerns are too great). The generated scripts are much safer (they generally boil down to a bunch of static CSS selectors, styles, etc.). Nonetheless, a valid point.

> Also, I would encourage you to understand your technology, even your marketing site, to be able to add a link to Privacy Policy and ToS in the footer without the burden of "an entangled mess of state machine" and the risk of breaking anything. If the marketing site technology is outside the scope of your expertise, consider how much worse would a static page would be?

Fair comment; fwiw we did ship it in the footer already :) For the standard site, when the extension is installed, there are 6 steps. Each step dynamically progresses based on your install state (installed, pinned, permissions granted, first generation, etc.). We put a lot into the onboarding experience and it is pretty complicated (happy to geek out over the details!), but we hide all of this if the extension isn't actively installed. Unfortunately, my blunder was that one of those hidden steps includes the privacy policy.

Thanks for all the feedback!

noir_lord

Indeed.

> Instead of digging through selectors and hand‑writing custom JS/CSS

Some of us like that, or at least the exact control it gives us, vs installing an extension that has access to my entire browser and accepting those terms.

I suspect many HN readers aren't the target market for this.


wouldbecouldbe

I let GPT build a quick extension just a few weeks ago. It destroys Instagram and LinkedIn and removes Shorts from YouTube. It's super easy; it mostly just injects CSS into certain sites, as sketched below. Works great! I prefer it over trusting a third party with everything I do; those extensions have a scary amount of access and I never know who runs them.
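The whole thing boils down to a content script that injects a stylesheet, roughly like this (the selector is approximate and will drift as YouTube's markup changes):

    // content.js -- sketch of a CSS-injecting content script.
    // Selector is approximate; site markup changes over time.
    const style = document.createElement('style');
    style.textContent = `
      ytd-rich-shelf-renderer[is-shorts] { display: none !important; }
    `;
    document.documentElement.appendChild(style);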

jmadeano

I run this one, but valid that you don't know or trust me ;)

Totally hear you on the permissions/access, and there isn't really a workaround:

In order for us to be able to execute your scripts that do powerful things (send notifications, save to local storage, download things, etc.), our extension needs to have those permissions itself.

I started off doing the same as you, having GPT write scripts for me, and you can go a long way with that. I personally ran into the ceiling and felt I could build out a more robust solution, but if it serves your needs well, by all means.

andy_ppp

I would love it if I could process the actual contents of the feed with some rules... for example "Hide tweets about politics or woke/anti-woke culture wars or generally things designed to wind me up including replies to my tweets".

jmadeano

We'd love to do something like that! We can currently do things like "Hide content that mentions the word {X}" or "Hide content from {author}". Basically, behind the scenes it implements a set of keywords to filter.
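For reference, that static version boils down to something like this sketch (the selector and keywords are hypothetical):

    // Static keyword filter: hide feed items whose text contains a
    // blocked keyword. Selector and keywords are hypothetical.
    const KEYWORDS = ['giveaway', 'sponsored'];

    function filterFeed() {
      document.querySelectorAll('[data-testid="post"]').forEach((item) => {
        const text = item.textContent.toLowerCase();
        if (KEYWORDS.some((kw) => text.includes(kw))) {
          item.style.display = 'none';
        }
      });
    }

    filterFeed();
    // Feeds load lazily, so re-run as the page mutates.
    new MutationObserver(filterFeed).observe(document.body, {
      childList: true,
      subtree: true,
    });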

The limitation here is that the AI agent sees your page once and has to write a static script that applies generically.

What you're requesting would require a dynamic LLM call on every page load (rather than a static generated script) to categorize the content. It is possible and something we want to achieve, but we're not quite there yet.

mibressler

This seems awesome

aspect0545

Chrome only, that’s too bad

jmadeano

I agree. I'm a Firefox guy myself and it's been painful shifting my workload to Chrome for testing + developing this. The extension has a lot of browser-engine complexity (and unfortunately us non-Chromium folks seem to be a dying breed), so I haven't been able to justify implementing cross-browser support yet. Hopefully soon!

codeptualize

You might be able to port it fairly easily, depending on the browser extension APIs you are using.

The WebExtensions API is emerging and a lot of it is already somewhat standardized: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

There are just some different fields in the manifest, plus specifics that work completely differently or are not available (for example favicons).

I have tried Chrome -> Firefox before and it was surprisingly easy. Safari is more difficult in my experience; it's missing entire APIs, like the bookmarks one.

jmadeano

It is definitely possible, but not straightforward. With Manifest V3, the only way you can do this stuff is the browser userScripts API, since it's the sole sanctioned path for executing remote code within the browser (and each generated script is considered "remote code").

These changes are the reason many of the existing userscript managers stopped working/being developed after MV3 went live. It is a real pain in the butt, and unfortunately the functionality is not exactly the same between Chrome and the generic browser API that Firefox uses. There are a lot of edge cases that make everything even more of a pain.

Life would be much better (in many ways) if Chrome hadn't forced MV3 down our throats.
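For context, registration through that API looks roughly like this (a sketch; the id and code are placeholders, and it requires the "userScripts" permission plus the user enabling the extension's user-scripts toggle):

    // Background service worker -- sketch of MV3 userScripts registration.
    chrome.userScripts.register([
      {
        id: 'example-tweak',                             // placeholder id
        matches: ['https://www.youtube.com/*'],
        js: [{ code: 'console.log("tweak applied")' }],  // each script = "remote code"
        runAt: 'document_end',
      },
    ]);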

andy_ppp

Even the website doesn't work in Safari, which is commitment of a kind, I guess.