
No AI* Here – A Response to Mozilla's Next Chapter

inkysigma

> Large language models are something else entirely*. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn't sit well.

Am I being overly critical here, or is this a silly position to take right after saying that neural machine translation is okay? Many of Firefox's LLM features like summarization are, afaik, powered by local models (hell, even Chrome has local model options). It's weird to say neural translation is not a black box but LLMs somehow are black boxes whose handling of your data we cannot hope to understand, especially since, viewed a bit fuzzily, LLMs are scaled-up versions of an architecture originally used for neural translation. Neural translation has unverifiable behavior in exactly the same sense.

I could read some of the data talk as referring to non-local models, but this very much seems like a more general criticism of LLMs as a whole in the context of Firefox features. Moreover, some of the critiques, like verifiability of outputs and unlimited scope, still don't make sense here. Browser LLM features, except in explicitly AI browsers like Comet, have so far been scoped to fairly narrow tasks like translation or summarization. The broadest scope I can think of is the side panel that lets you ask questions about a web page with its content as context. Even then, I do not see what is inherently problematic about that scoping, since the output behavior is confined to the side panel.

jrjeksjd8d

To be more charitable to TFA, machine translation is a field where there aren't great alternatives and the downside is pretty limited: if something is in another language, you otherwise don't read it at all. You can translate a bunch of documents, benchmark the results, and demonstrate that the model doesn't completely change simple sentences. Another related area is OCR - there are sometimes mistakes, but it's tractable to create a model and verify it's mostly correct.
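
A rough sketch of that kind of spot-check, with a hypothetical `translate()` wrapper standing in for whatever local MT model is under test (the overlap score is a stand-in for a real metric like BLEU/chrF):

```typescript
// Toy benchmark: translate simple reference sentences and flag any that drift
// badly from the expected output. `translate()` is a hypothetical wrapper
// around whatever local MT model is being evaluated.
declare function translate(text: string, from: string, to: string): Promise<string>;

type Case = { source: string; expected: string };

function tokenOverlap(a: string, b: string): number {
  // Crude token-overlap score; real benchmarks use BLEU/chrF or human review.
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  return [...ta].filter((t) => tb.has(t)).length / Math.max(ta.size, tb.size);
}

async function benchmark(cases: Case[], threshold = 0.6): Promise<void> {
  for (const c of cases) {
    const got = await translate(c.source, "de", "en");
    const score = tokenOverlap(got, c.expected);
    if (score < threshold) {
      console.warn(`Possible drift: "${c.source}" -> "${got}" (score ${score.toFixed(2)})`);
    }
  }
}
```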

LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.

tdeck

Aside: Does anyone actually use summarization features? I've never once been tempted to "summarize" because when I read something I either want to read the entire thing, or look for something specific. Things I want summarized, like academic papers, already have an abstract or a synopsis.

user3939382

Firefox should look like Librewolf out of the box; Librewolf shouldn't have to exist. Mozilla's privacy stuff is marketing bullshit, just like Apple's. It shouldn't be doing ANYTHING that isn't local-only unless it's explicitly opt-in or driven by a user's UI action. The LLM part is absurd because the entire Overton window is in the wrong place.

Cheer2171

No, it is disqualifyingly clueless. The author defends one neural network, one bag of effectively-opaque floats that get blended together with WASM to produce non-deterministic outputs which are injected into the DOM (translation), then righteously crusades against other bags of floats (LLMs).

From this point of view, uBlock Origin is also effectively un-auditable.

Your point that they may be imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing into thinking that "AI" === ChatGPT/Claude/Gemini-style cloud-hosted proprietary models connected to chat UIs.

koolala

I'm ok with translation because it's best solved with AI. I'm not ok with it when Firefox "uses AI to read your open tabs" to do things that don't even need an AI-based solution.

kbelder

There are levels to this, though, more than two:

- local, open model
- local, proprietary model
- remote, open model (do these exist?)
- remote, proprietary model

There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in with clear disclaimers. It needs to be proportional.
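
As a toy sketch of what "proportional" could look like (the tier names and consent levels here are just my own illustration, not anything Mozilla has specified):

```typescript
// Hypothetical mapping from model deployment tier to the consent a browser
// should require before enabling a feature backed by that model.
type Tier = "local-open" | "local-proprietary" | "remote-open" | "remote-proprietary";
type Consent = "on-by-default" | "opt-out" | "opt-in" | "opt-in-with-disclosure";

const requiredConsent: Record<Tier, Consent> = {
  "local-open": "on-by-default",                  // data never leaves the machine, weights inspectable
  "local-proprietary": "opt-out",                 // local, but behaviour is harder to audit
  "remote-open": "opt-in",                        // open weights, but your data still egresses
  "remote-proprietary": "opt-in-with-disclosure", // clear disclaimers, as argued above
};

function mayEnable(tier: Tier, userOptedIn: boolean): boolean {
  const consent = requiredConsent[tier];
  if (consent === "on-by-default" || consent === "opt-out") return true;
  return userOptedIn; // remote models always need an explicit yes
}
```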

kevmo314

> Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.

This really weakens the point of the post. It strikes me as: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable, and no less of a black box, than an LLM's. If you really want to dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/

The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.

jazzyjackson

It's not necessarily close-minded to choose to abstain from interacting with generative text, and to choose not to use software that integrates it.

I could say it's equally close-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language pulled towards the average redditor, so I choose not to interact with LLMs. (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic. It's like the family that tried raising a chimp alongside their baby: the chimp picked up some human-like behavior, but the baby adapted to chimp-like behavior much faster, so they abandoned the experiment.)

bee_rider

I’m not too worried about starting to write like a bot. But, I do notice that I’m sometimes blunt and demanding when I talk to a bot, and I’m worried that could leak through to my normal talking.

I try to be polite just to not gain bad habits. But, for example, ChatGPT is extremely confident, often wrong, and very weaselly about it, so it can be hard to be "nice" to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.

Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.

kbelder

It's like using your turn signal even when you know there's nobody around you. Politeness is a habit you don't want to break.

kevmo314

Sure, I'm referring more to the post advocating for Bergamot as some kind of more "pure" solution.

I have no opinion on not wanting to converse with a machine; that's a perfectly valid preference. I'm referring more to the blog post's position, which seems to argue against itself.

hatefulheart

Your tone is kind of ridiculous.

It’s insane this has to be pointed out to you but here we go.

Hammers are the best: they can drive nails, break down walls, and serve as a weapon. From now on the military, plumber to paratrooper, will use nothing but hammers, because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.

zdragnar

You can't really dig into a model you don't control. At least by running locally, you could in theory if it is exposed enough.

The focused purpose, I think, gives it more of a "purpose-built tool" feel rather than a generic "chatbot that might be better at some tasks than others" entity. There's no fake persona to interact with, just an algorithm with data in and out.

The latter part is less a technical nuance and more an emotional one, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... if that were the limit of how they added AI to the browser.

kevmo314

Yes I agree with this, but the blog post makes a much more aggressive claim.

> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.

Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.

The part that doesn't sit well with me is that Mozilla wants to egress data. That it's an LLM, I really don't care about.

_heimdall

Running locally does help get less-modified output, but how does it help escape the black-box problem?

A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.

XorNot

Translation AI, though, has provable behavior cases: round-tripping.

An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.

No such test, as far as I know, exists for any of the summary or search AIs, since they expressly lose data in processing (I suppose you could construct multiple texts with the same meaning and see whether they summarize equivalently - but it's certainly far harder to prove anything).
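
A minimal sketch of such a round-trip probe, assuming a hypothetical `translate()` wrapper around whatever MT model is under test:

```typescript
// Round-trip consistency probe: en -> fr -> en should land close to the input.
// `translate()` is a hypothetical wrapper, not a real API.
declare function translate(text: string, from: string, to: string): Promise<string>;

async function roundTripScore(text: string): Promise<number> {
  const forward = await translate(text, "en", "fr");
  const back = await translate(forward, "fr", "en");
  // Crude token-overlap score; a real check would use BLEU/chrF or embeddings.
  const a = new Set(text.toLowerCase().split(/\s+/));
  const b = new Set(back.toLowerCase().split(/\s+/));
  return [...a].filter((t) => b.has(t)).length / Math.max(a.size, b.size); // 1.0 = identical
}
```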

charcircuit

That is not an ideal translation, as it prioritizes round-trippability over natural word choice or ordering.

CivBase

I think the author was close to something here but messed up the landing.

To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.

It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.

mmaunder

"...trust from other large, imporant [sic] third parties which in turn has given Waterfox users access to protected streaming services via Widevine."

The black box objection disqualifies Widevine.

clueless

This whole backlash to Firefox wanting to introduce AI feels a little knee-jerk. We don't know whether Firefox might roll out its own locally hosted LLM and plug into that, and if so, it would cut down on the majority of the knee-jerk complaints. I think people want AI in the browser; they just don't want it to be big-corp-hosted AI...

[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

mindcrash

They are not "wanting" to introduce AI; they already did.

And now we have:

- An extra sidebar toolbar nobody asked for. While it contains some extra features now, I'm pretty sure they added it just to have some prominent space for an "Open AI Chatbot" button in the UI. And it is irritating as fuck because it remembers its state per window: if you have one window with the sidebar open, close it in another, then go back to the first and open a new window, it thinks "hey, I need to show a sidebar my user never asked for!". I also believe it sometimes opens itself when previously closed. I don't like it at all.

- A "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items on the context menu (due to muscle memory), because when it got added the context menu resizes. Which was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.

Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud-based AIs like ChatGPT, Claude, Copilot, Gemini, and Mistral (likely for some $$$ in return, like the search engine deal with Google).

AuthAuth

All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you don't like. And tons of people asked for that sidebar, by the way.

We have to put all this in context. Firefox is trying to diversify its revenue away from Google search. They are trying to provide users with a modern browser. This means adding the features people expect, like AI integration, and it's a nice bonus if the AI companies are willing to pay for that.

move-on-by

For me, the complaint isn't the AI itself, but the updated privacy policy that was rolled out prior to the AI features. Regardless of whether I use the AI features or not, I must agree to their updated privacy policy.

According to the privacy policy changes, they are selling data (per the legal definition of selling data) to data partners. https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...

koolala

Pay for what? It says it's a local AI model, so how would AI companies be giving Firefox revenue from this?

Wowfunhappy

> [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

I don't want any of this built into my web browser. Period.

This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!

dotancohen

Reread your post with your evil PM hat on. You just said "I'm willing to pay for AI". That's all they hear.

infotainment

This 100% -- the AI features already in Firefox, for the most part, rely on local models. (Right now there is translation and tab-grouping, IIRC.)

Local AI features are great, and I wish they were used more often instead of just offloading everything to cloud services with questionable privacy.

_heimdall

Local models are nice for keeping the initial prompt and inference off someone else's machine, but there is still the question of what the AI feature will do with the data it produces.

I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.

BoredPositron

If we look at the last AI features they implemented, it doesn't look like they are betting on local models anymore.

Xelbair

>This whole backlash to Firefox wanting to introduce AI feels a little knee-jerk. We don't know whether Firefox might roll out its own locally hosted LLM and plug into that, and if so, it would cut down on the majority of the knee-jerk complaints. I think people want AI in the browser; they just don't want it to be big-corp-hosted AI...

Because the phrase "AI-first browser" is meaningless corpospeak - it can mean anything or nothing and feels hollow. Reminiscent of all of Firefox's past failures.

I just want a good browser that respects my privacy and lets me run extensions that can hook into any point of page handling, not random experiments and features that usually go against privacy or die within a short time frame.

recursive

I don't feel like I want AI in my browser. I'm not sure what I'd do with it. Maybe translation?

clueless

yeah, translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts

recursive

If I have to fill out a form for anything that matters, I'm doing it by hand. I don't even use the existing historical auto-complete stuff; it can fill things in incorrectly. LLMs regularly get factual stuff wrong in mysterious ways when I engage with them as chatbots. It might be less effort to verify autofilled values than to type in all the fields, but IMO typing by hand carries less risk of missing or forgetting to check one of them.

nijave

Supercharged on-page search would also be nice.

Agents (like a research agent) could also be interesting.

actionfromafar

I like translation, it's come in handy a few times, and it's neat to know it's done locally.

ekr____

FWIW, Firefox already has AI-based translation using local models.

1shooner

I just know I've already had to chase down AI features in Firefox that I definitely did not ask for or activate, and don't recall consenting to.

goalieca

The UX changes and features remind us of Pocket and all the other low-value features that came with disruptive UX changes, as other commenters have noted.

Meanwhile, Mozilla canned the Servo and MDN projects, which really did provide value for their user base.

xg15

I don't think a locally hosted LLM would be powerful enough for the supposed "agentic browsing" scenarios - at least if the browser is still supposed to run on average desktop PCs.

koolala

This is probably their plan to monetize it: they will partner with an AI company to 'enhance' the browser with a paid cloud model, and there's no monetary incentive for the local model not to suck.

lxgr

Not yet, but we’ll hopefully get there within at most a few years.

Dylan16807

Get there by what mechanism? In the near term a good model pretty much requires a GPU, and it needs a lot of VRAM on that GPU. And the current state of the art in quantization has already gotten us most of the RAM savings it possibly can.

And it doesn't look like the average computer with Steam installed is going to get above 8 GB of VRAM for a long time, let alone the average computer in general. Even focusing on new computers, it doesn't look that promising.
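
Rough numbers behind that, as a back-of-the-envelope sketch (weights only; KV cache, activations, and runtime overhead come on top):

```typescript
// Back-of-the-envelope weight footprint for a dense model at a given quantization.
// Ignores KV cache, activations, and runtime overhead, which add more on top.
function weightFootprintGB(paramsBillions: number, bitsPerWeight: number): number {
  return (paramsBillions * 1e9 * bitsPerWeight) / 8 / 1e9;
}

console.log(weightFootprintGB(7, 16)); // ~14 GB  -- fp16, far beyond an 8 GB card
console.log(weightFootprintGB(7, 4));  // ~3.5 GB -- 4-bit quant fits, with little headroom left
console.log(weightFootprintGB(70, 4)); // ~35 GB  -- larger models stay out of consumer range
```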

bigstrat2003

This is like when people defend Windows 11's nonsense by saying "you can disable or remove that stuff". Yes, you can. But you shouldn't have to, and I personally prefer tools that don't shove useless things in just because it's trendy.

derekdahmer

How is this different from Linux? People happily spend hours customizing defaults in their OS. It's usually a point of praise for open-source software.

calvinmorrison

Not to mention that Firefox routinely blows up any policies you set during upgrades, has its own incompatibilities, and offers an endless about:config that is more opaque than a hunk of room-temperature lard.

koolala

Is it really in all 4 of those places? Just need to change it in the first two, right? I hate the new AI tab feature and wish they had a non-AI option.

beached_whale

Easy for whom? 99% of people are not going to, or able to, set up Firefox policies.

htx80nerd

I was an FF driver for ages and am now making the switch to a Chrome-based browser simply because it's faster and websites are all tested against Chrome/Safari. I see both of these issues manifest IRL on a weekly basis. Why would I want to burn up CPU cycles and seconds using FF when Chromium is literally faster?

koolala

How do you disable the telemetry in Waterfox? It looks like they get their funding from a partnership with an ad company. Do I just need to change the default search?

doubtfuly

On Windows, Mozilla can't even handle disabling hardware acceleration, a.k.a. the GPU, from its settings page. Sure, you can toggle the button, but it doesn't work, as verified in Task Manager. What hope is there, then, that they can be trusted to disable AI? It's a feature I'd never want enabled. When that "feature" comes out, users will be forced to find a fork without it.

koolala

Did Firefox already add AI to tabs? Today I got my first 'Tab Grouping' and it says "Nightly uses AI to read your Open Tabs". That's the worst way to do grouping ever... just group hierarchically based on where each tab was opened from...

Groxx

Particularly since they clearly keep this info around - if you install TreeStyleTabs or Sideberry, you'll see it immediately show the historical structure of your current tabs (in-process at least; I'm not 100% sure about after kill->restore). That info has to come from somewhere.
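
For what it's worth, a sketch of that opener-based grouping is possible with the WebExtensions tabs API (`openerTabId` is a real field on tabs; the grouping below is just bookkeeping, not a full tab-groups integration, and the typings assume a WebExtension TypeScript environment):

```typescript
// Group tabs by the tab that opened them, walking openerTabId up to each
// tab's root ancestor. Pure bookkeeping; actually applying groups in the UI
// would need a tab-groups API on top of this.
async function groupByOpener(): Promise<Map<number, browser.tabs.Tab[]>> {
  const tabs = await browser.tabs.query({ currentWindow: true });

  const byId = new Map<number, browser.tabs.Tab>();
  for (const t of tabs) {
    if (t.id !== undefined) byId.set(t.id, t);
  }

  // Follow openerTabId links until we hit a tab with no known opener.
  const rootOf = (tab: browser.tabs.Tab): number => {
    let current = tab;
    while (current.openerTabId !== undefined && byId.has(current.openerTabId)) {
      current = byId.get(current.openerTabId)!;
    }
    return current.id!;
  };

  const groups = new Map<number, browser.tabs.Tab[]>();
  for (const tab of tabs) {
    const root = rootOf(tab);
    if (!groups.has(root)) groups.set(root, []);
    groups.get(root)!.push(tab);
  }
  return groups;
}
```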

koolala

I wish there were a horizontal solution instead of vertical tabs. Maybe someone could mod their AI system with a non-AI backend.

zavec

I guess it's nice for non-technical people who don't know how to use `about:config` but beyond that I don't really see the need. Hopefully adding that extra layer of indirection doesn't mean the users will have to wait too long for security patches.

ekr____

PSA (for the nth time): about:config is not a supported way of configuring Firefox, so if you tweak features with about:config, don't be surprised if those tweaks stop working without warning.

autoexec

Mozilla tells you to use it, so that seems supported enough to me (example: https://support.mozilla.org/en-US/kb/how-stop-firefox-making...)

That said, they're admittedly terrible about keeping their documentation updated and letting users know about added/deprecated settings, and they've even been known to go in and modify settings after you've explicitly changed them from defaults, so the PSA isn't entirely unjustified.

ekr____

Ugh. Because they also say:

"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."

https://support.mozilla.org/en-US/kb/firefox-advanced-custom...

lerp-io

>A browser is meant to be a user agent, more specifically, your agent on the web.

at this point it’s more so a sandbox runtime bordering an OS, but okay

Papazsazsa

It's weird how people are upset about this but will happily gloat about replacing actual humans with AI.

bigstrat2003

It's not really weird that two different people say different things.

627467

I bet there's a big overlap between users of Firefox and those who complain about humans being replaced with AI, so I don't think it's weird.