
The force-feeding of AI features on an unwilling public

dang

I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)

It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)

spacemadness

Having seen the almost rabid and fearful reactions of product owners first hand around forcing AI into every product, it’s because all these companies are in panic mode. Many of these folks are not thinking clearly and have no idea what they’re doing. They don’t think they have time to think it through. Doing something is better than nothing. It’s all theatre for their investors coupled with a fear of being seen as falling behind. Nobody is going to have a measured and well thought through approach when they’re being pressured from above to get in line and add AI in any way. The top execs have no ideas, they just want AI. You’re not even allowed to say it’s a bad idea in a lot of bigger companies. Get in line or get a new job. At some point this period will pass and it will be pretty embarrassing for some folks.

recursive

The product I'm working on is privately owned. Hence no investors. We're still in the process of cramming AI into everything.

AppleBananaPie

Yeah, the internal 'experts' pushing AI having no idea what they're doing but acting like they do is like a weird fever dream lol

Everyone nodding along, yup yup this all makes sense

mouse_

eventually, people (investors) notice when their money is scared...

echelon

Companies that don't invent the car get to go extinct.

This is the next great upset. Everyone's hair is on fire and it's anybody's ball game.

I wouldn't even count the hyperscalers as certain to emerge victorious. The unit economics of everything and how things are bought and sold might change.

We might have agents that scrub ads from everything and keep our inboxes clean. We might find content of all forms valued at zero, and have no need for social networking and search as they exist today.

And for better or worse, there might be zero moat around any of it.

delta_p_delta_x

> agents that scrub ads from everything

This is called an ad blocker.

> keep our inboxes clean

This is called a spam filter.

The entire parent comment is just buzzword salad. In fact I am inclined to think it was written by an LLM itself.

pera

At work we started calling this trend clippification, for obvious reasons. In a way this aligns with your comment: the information provided by Clippy was not necessarily useless, but people disliked it because (i) they didn't ask for help, and (ii) even if they happened to be looking for help, the interaction/navigation was far from ideal.

Having all these popups announcing new integrations with AI chatbots showing up while you are just trying to do your work is pretty annoying. It feels like this time we are fighting an army of Clippies.

CuriouslyC

I am a huge AI supporter, and use it extensively for coding, writing and most of my decision making processes, and I agree with you. The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.

I don't want shitty bolt-ons. I want to be able to give frontier models (ChatGPT/Claude/Gemini) the ability to access my application data and make API calls for me to remotely drive tools.

Avamander

> The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.

The weirdest place I've found a genuinely useful LLM-based feature so far is Edge, with its automatic tab grouping. It doesn't always pick the best groups and probably uses some really small model, but it's significantly faster and easier than anything else I've had so far.

I hope they do bookmarks next and that someone copies the feature and makes it use a local model (like Safari or Firefox, I don't even care).

Xss3

[flagged]

null

[deleted]

mpalmer

Meanwhile you aren't even using AI and you hallucinated the word "outsource" in their comment.

alganet

The major issue with AI technology is the people. The enthusiasts that pretend issues don't exist, the cheap startups trying to sell snake oil.

The AI community treats potential customers as invaders. If you report a problem, the entire thing turns on you trying to convince you that you're wrong, or that you reported a problem because you hate the technology.

It's pathetic. It looks like a viper's nest. Who would want to do business with such people?

LgLasagnaModel

Good point. Also, the fact that I’m adamant that one cannot fly a helicopter to the moon doesn’t mean that I think helicopters are useless. That said, if I’m inundated everyday with people insisting that one CAN fly a helicopter to the moon or that that capability is just around the corner, I might get so fed up that i say F it, I don’t want to hear another F’ing word about helicopters even though I know that helicopters have utility.

alganet

It's an unholy chimera. As militant as GNU, as greedy as Microsoft, as viral as fidget spinners. The worst aspects of each of those communities.

Actually promising AI tech doesn't even get center stage; it never gets the chance.

827a

Couldn’t agree more. There are awesome use-cases for AI, but Microsoft and Google needed to shove AI everywhere they possibly could, so they lost all sense of taste and quality. Google raised the price of Workspace to account for AI features no one wants. Then, they give away access to Gemini CLI for free to personal accounts, but not Workspace accounts. You physically cannot even pay Google to access Veo from a workspace account.

Raise subscription prices, don’t deliver more value, bundle everything together so you can’t say no. I canceled a small Workspace org I use for my consulting business after the price hike last year; also migrating away everything we had on GCP. Google would have to pay me to do business with them again.

ToucanLoucan

> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.

And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being too frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.

And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.

And either way, all the people responsible for making all your technology worse every day will continue to get richer.

Peritract

> if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs

I think this is the key idea. Right now it doesn't work that well, but if it did work as advertised, that would also be bad.

Eisenstein

This is not an AI problem; this is a problem caused by extremely large piles of money. In the past two decades we have been concentrating money in the hands of people who did little more than be in the right place at the right time with a good idea and a set of technical skills, and then told them that they were geniuses who could fix human problems with technological solutions.

At the same time, we made it impossible to invest money safely by holding interest rates near zero, and then continued to pass more and more tax breaks. What did we expect was going to happen? There are only so many problems that can be solved by technology that we actually need solving, or that create real value or bolster human society.

We are spinning wheels just to spin them, and have handed the reins to people with not only the means and the intent to unravel society in all the worst ways, but who are also convinced that they are smarter than everyone else because they figured out how to arbitrage the temporal gap between the emergence of a capability and the realization of the damage it creates.

klabb3

Couldn’t agree more. The problem is when the party is over, and another round of centralizing wealth and power is done, we’ll be no wiser and have learnt nothing. Look at the debate today, it’s (1) people who think AI is useful, (2) people who think it’s hype and (3) people who think AI will go rogue. It’s like the bank robbers put on a TV and everyone watches it while the heist is ongoing.

Only a few bystanders seem to notice the IP theft and laundering, the adversarial content barriers to protect from scraping, the centralization of capital within the owners of frontier models, the dial-up of the already insane race to collect personal data, the flooding of every communication channel with AI slop and spam, and the inevitable impending enshittification of massive proportions.

I’ve seen the sausage get made, enough to know the game. They’re establishing new dominance hierarchies, with each iteration being more cynical and predatory, each cycle refined to optimally speedrun the rent seeking value extraction. Yes, there are still important discussions about the tech itself. But it’s the deployment that concerns everyone, not hypothetically, but right now.

Exhibit A: social media. In hindsight, what was more important: the core technologies or the business model and deployment?

ToucanLoucan

> This is not an AI problem, this is a problem caused by extremely large piles of money.

Those are two problems in this situation that are both bad for different reasons. It's bad to have all the money concentrated in the hands of a tiny number of losers (and my god are they losers), and AI as a technology is slated to, in the hands of said losers, cause mass unemployment, if they can get it working well enough to pass that very low bar.

einrealist

The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.

Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.

This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.

kgeist

We've been running our own LLM server at the office for a month now, as an experiment (for privacy/infosec reasons), and a single RTX 5090 is enough to serve 50 people for occasional use. We run Qwen3 32b, which in some benchmarks is equivalent to GPT-4.1-mini or Gemini 2.5 Flash. The GPU allows 2 concurrent requests at the same time, with 32k context each and 60 tok/s. At first I was skeptical a single GPU would be enough, but it turns out most people don't use LLMs 24/7.
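A back-of-envelope sketch of why one GPU stretches that far. The average response length below is my assumption for illustration, not something the comment states:

```python
# Rough capacity estimate for the setup described above.
TOKENS_PER_SECOND = 60     # per generation slot, as reported
CONCURRENT_SLOTS = 2       # two 32k-context generations at once
AVG_RESPONSE_TOKENS = 500  # assumed average response length

seconds_per_response = AVG_RESPONSE_TOKENS / TOKENS_PER_SECOND
responses_per_hour = CONCURRENT_SLOTS * 3600 / seconds_per_response

print(round(seconds_per_response, 1))  # 8.3 s per response
print(round(responses_per_hour))       # 864 responses/hour
```

Several hundred responses per hour is far more than 50 occasional users generate, which is consistent with the "most people don't use LLMs 24/7" observation.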

einrealist

If those smaller models are sufficient for your use cases, go for it. But for how much longer will companies release smaller models for free? They invested so much, and they have to recoup that money. Much will depend on investor pressure and the financial environment (tax deductions etc).

Open-source endeavors will have a hard time mustering the resources to train competitive models. Maybe we will see larger cooperatives, like an Apache Software Foundation for ML?

DebtDeflation

It's not just about smaller models. I recently bought a Macbook M4 Max with 128GB RAM. You can run surprisingly large models locally with unified memory (albeit somewhat slowly). And now AMD has brought that capability to the X86 world with Strix. But I agree that how long Google, Meta, Alibaba, etc. will continue to release open weight models is a big question. It's obviously just a catch-up strategy aimed at the moats of OpenAI and Anthropic, once they catch up the incentive disappears.

brookst

Pricing for commodities does not allow for “recouping costs”. All it takes is one company seeing models as a complementary good to their core product, worth losing money on, and nobody else can charge more.

I’d support an Apache for ML but I suspect it’s unnecessary. Look at all of the money companies spend developing Linux; it will likely be the same story.

msgodel

Even Google and Facebook are releasing distills of their models (Gemma3 is very good, competitive with qwen3 if not better sometimes.)

There are a number of reasons to do this: You want local inference, you want attention from devs and potential users etc.

Also the smaller self hostable models are where most of the improvement happens these days. Eventually they'll catch up with where the big ones are today. At this point I honestly wouldn't worry too much about "gatekeepers."

tankenmate

"Maybe we will see larger cooperatives, like an Apache Software Foundation for ML?"

I suspect the Linux Foundation might be a more likely source considering its backers and how much those backers have provided LF by way of resources. Whether that's aligned with LF's goals ...

ben_w

> Open Source endeavors will have a hard time to bear the resources to train models that are competitive.

Perhaps, but see also SETI@home and similar @home/BOINC projects.

Gigachad

Seems like you don't have to train from scratch. You can distil a new model off an existing one just by buying API credits to copy the model.

null

[deleted]

pu_pe

That's really great performance! Could you share more details about the implementation (ie which quantized version of the model, how much RAM, etc.)?

kgeist

Model: Qwen3 32b

GPU: RTX 5090 (no ROPs missing), 32 GB VRAM

Quants: Unsloth Dynamic 2.0, it's 4-6 bits depending on the layer.

RAM is 96 GB: more RAM makes a difference even if the model fits entirely in the GPU: filesystem pages containing the model on disk are cached entirely in RAM so when you switch models (we use other models as well) the overhead of unloading/loading is 3-5 seconds.

The key-value cache is also quantized to 8 bit (anything less degrades quality considerably).

This gives you 1 generation with 64k context, or 2 concurrent generations with 32k each. Everything takes 30 GB VRAM, which also leaves some space for a Whisper speech-to-text model (turbo & quantized) running in parallel as well.
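As a sanity check on those numbers, here is a rough KV-cache budget. The architecture constants (layers, KV heads, head dimension) are my assumptions taken from Qwen3-32B's published config, not from the comment:

```python
# Rough KV-cache budget for the setup described above.
LAYERS = 64     # transformer layers (assumed from Qwen3-32B config)
KV_HEADS = 8    # grouped-query attention KV heads (assumed)
HEAD_DIM = 128  # dimension per head (assumed)
KV_BYTES = 1    # 8-bit quantized KV cache, as described

# K and V each store KV_HEADS * HEAD_DIM values per layer per token.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * KV_BYTES
gib_for_64k = bytes_per_token * 65536 / 2**30

print(bytes_per_token)        # 131072 bytes, i.e. 128 KiB per token
print(round(gib_for_64k, 1))  # 8.0 GiB for a single 64k context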

greenavocado

Qwen3 isn't good enough for programming. You need at least Deepseek V3.

PeterStuer

"how much will they charge us for prioritised access to these resources"

For the consumer side, you'll be the product, not the one paying in money, just like before.

For the creator side, it will depend on how competition in the market sustains itself. Expect major regulatory-capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get really expensive.

ben_w

> The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.

The scale issue isn't the LLM providers, it's the power grid. Worldwide, electricity comes to about 250 W per capita. Your body runs on 100 W, and you have a duty cycle of about 25% thanks to the 8-hour work day and weekends, so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy-efficient than the human body.

Even with the extraordinarily rapid roll-out of PV, I don't expect this to be a one-for-one replacement for all human workers before 2032, even if the best SOTA model were good enough to do so (and they're not; they've still got too many weak spots for that).

This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
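Taking the comment's figures as given, the duty-cycle arithmetic can be sketched as:

```python
# Sketch of the power-budget argument, using the figures above as stated.
PER_CAPITA_POWER_W = 250  # worldwide electric power per person (as stated)
BODY_POWER_W = 100        # human metabolic power (as stated)

# 8-hour days, 5-day weeks: fraction of the week actually spent working.
duty_cycle = (8 * 5) / (24 * 7)

# Metabolic power averaged over the week that goes to work hours alone.
work_power_w = BODY_POWER_W * duty_cycle

print(round(duty_cycle, 2))    # 0.24, the ~25% mentioned above
print(round(work_power_w, 1))  # 23.8 W averaged over the week
```

An AI drawing much more than a couple hundred watts per replaced worker would blow through the 250 W/capita budget, which is the crux of the argument above.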

> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.

I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.

But I agree that the aggregation of power and centralisation of data is a pertinent risk.

jfengel

Just moments ago I noticed for the first time that Gmail was giving me a summary of email I had received.

Please don't. I am going to read this email. Adding more text just makes me read more.

I am sure there's a common use case of people who get a ton of faintly important email from colleagues. But this is my personal account and the only people contacting me are friends. (Everyone else should not be summarized; they should be trashed. And to be fair I am very grateful for Gmail's excellent spam filtering.)

shdon

How long before spam filtering is also done by an LLM and spammers or black hat hackers embed instructions into their spam mails to exploit flaws in the AI?

kldg

I am moderately hyped for AI, but I treat these corporate intrusions into my workflows the same as ads or age verification: pointing uBlock at elements that are easy to point-and-click block, and writing quick browser plugins and Tampermonkey scripts for sites like Google to intercept my web searches and redirect them away from the All/AI search page. And if I can, it does amuse me to have Gemini write the plugins that block Google's ads and inconveniences.

bgwalter

This article is spot on. There is a small market for mediocre cheaters, for the rest of us "AI" is spam (glad that the article finally calls it out).

It is like Clippy, which no one wanted. Hopefully, like Clippy, "AI" will be scrapped at some point.

capyba

I agree with the general gist of this piece, but the awkward flow of the writing style makes me wonder if it itself was written by AI…

There are open source or affordable, paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure, lock-in with a service provider (health insurance co, perhaps), and yes unfortunately I see some of these things as soon or now unavoidable.

Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and that users therefore clearly do want this. I can easily think of two points that refute that:

1. The internet has shown us time and time again that popularity doesn't indicate willingness to pay (which paid social networks had strong popularity…?)

2. There are many extremely popular websites that users wouldn't want woven throughout the rest of their personal and professional digital lives.

bsenftner

It's like talking into a void. The issue with AI is that it is too subtle: too easy to get acceptable junk answers, and too subtle for the majority to realize we've made a universal crib sheet. Software developers included, perhaps one of the worst populations due to their extremely weak communication as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, while that is exactly what it is: prompt engineering in a new software language, natural language, which the industry refuses to take seriously. It is in fact an extremely technical programming language, so subtle that few realize it, or the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are fraud.

einrealist

Isn't "engineering" based on predictability, on repeatability?

LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...

If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.

So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?

oceanplexian

> LLMs are not very predictable. And that's not just true for the output.

If you run an open-source model from the same seed on the same hardware, it is completely deterministic: it will spit out the same answer every time. So it's not an issue with the technology, and there's nothing stopping you from writing repeatable prompts and prompting techniques.
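A toy sketch of that claim. The seeded sampler below is a stand-in for an LLM's decoding loop, not a real model:

```python
import random

# Seeded sampling from a fixed distribution reproduces the exact same
# "token" sequence on every run, illustrating the determinism claim.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def sample_tokens(seed, n=10):
    rng = random.Random(seed)  # same seed -> same RNG stream
    return [rng.choice(VOCAB) for _ in range(n)]

run_a = sample_tokens(seed=42)
run_b = sample_tokens(seed=42)
print(run_a == run_b)  # True: identical seed, identical output
```

Real inference stacks add caveats (batching, non-associative floating-point reductions on GPUs), which is what the replies below push back on.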

o11c

By "unpredictability", we mean that AIs will return completely different results if a single word is changed to a close synonym, or an adverb or prepositional phrase is moved to a semantically identical location, etc. Very often this simple change will move you from "get the correct answer 90% of the time" (about the best that AIs can do) to "get the correct answer <10% of the time".

Whenever people talk about "prompt engineering", they're referring to randomly changing these kinds of things, in hopes of getting a query pattern where you get meaningful results 90% of the time.

CoastalCoder

> If you run an open source model from the same seed on the same hardware they are completely deterministic.

Are you sure of that? Parallel scatter/gather operations may still be at the mercy of scheduling variances, due to some forms of computer math not being associative.

dimitri-vs

Realistically, how many people do you think have the time, skills and hardware required to do this?

enragedcacti

Predictable does not necessarily follow from deterministic. Hash algorithms, for instance, are valuable specifically because they are both deterministic and unpredictable.

Relying on model, seed, and hardware to get "repeatable" prompts essentially reduces an LLM to a very lossy natural-language decompression algorithm. What other reason would someone have for asking the same question over and over again with the same input? If that's a problem you need to solve, then you need a database, not a deterministic LLM.
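The hash analogy is easy to make concrete:

```python
import hashlib

# Deterministic but unpredictable: the same input always yields the same
# digest, yet changing one character scrambles the output completely.
a = hashlib.sha256(b"prompt v1").hexdigest()
b = hashlib.sha256(b"prompt v2").hexdigest()
c = hashlib.sha256(b"prompt v1").hexdigest()

print(a == c)  # True: same input, same digest, every time
print(a == b)  # False: one character changed, unrecognizable digest
```

Determinism guarantees repetition of identical inputs; it says nothing about how outputs vary under small input changes, which is the sense of "predictability" at issue here.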

mafuy

Who says the model stays the same and the seed isn't random at most of the companies running AI? There is no drawback to randomness for them.

smohare

[dead]

20k

The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself

If I have to do extensive, subtle prompt engineering and expend a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline: I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software.

milkshakes

> The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself

conceding that this may be the case, there are entire categories of problems that i am now able to approach that i have felt discouraged from in the past. even if the code is wrong (which, for the most part, it isn't), there is a value for me to have a team of over-eager puppies fearlessly leading me into the most uninviting problems, and somehow the mess they may or may not create makes solving the problem more accessible to me. even if i have to clean up almost every aspect of their work (i usually don't), the "get your feet wet" part is often the hardest part for me, even with a design and some prototyping. i don't have this problem at work really, but for personal projects it's been much more fun to work with the robots than always bouncing around my own head.

cube2222

As with many productivity-boosting tools, it’s slower to begin with, but once you get used to it, and become “fluent”, it’s faster.

handfuloflight

This overlooks a new category of developer who operates in natural language, not in syntax.

20k

Natural language is inherently a bad programming language. No developer, even with the absolute best AI tools, can avoid understanding the code that AI generates for very long

The only way to successfully use AI is to have sufficient skill to review the code it generates for correctness, a task that requires at least as much skill as simply writing the code.

goatlover

So they don't understand the syntax being generated for them?

add-sub-mul-div

If this nondeterministic software engineering had been invented first we'd have built statues of whoever gave us C.

mrob

>Everybody wanted the Internet.

I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.

wussboy

I’m not even sure it’s the right question. No one knew what the long-term effects of the internet and mobile devices would be, so I’m not surprised people thought they were great. Coca leaves seemed pretty amazing at the beginning as well. But mobile devices especially have changed society, and while I don’t think we can ever put the genie back in the bottle, I wish that we could. I suspect I’m not alone.

sagacity

People didn't even want mobile phones. In The Netherlands, there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.

bacchusracine

>there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.

So people didn't want to be walking around with a tether that allowed the whole world to call them where ever they were? Le Shock!

Now if they'd asked people whether they'd like a small portable computer they could use to keep in touch with friends, read books, play games, and play music and movies wherever they went, and which also made phone calls, I suspect the answer might have been different.

nottorp

Actually iirc cell phone service was still expensive back in 1997. It was nice but not worth paying that much for the average person on the street.

jen729w

I’m at the point where a significant part of me wishes they hadn’t been invented.

We sat yesterday and watched a table of 4 lads drinking beer, each just watching their phones. At the slightest gap in conversation, out they came.

They’re ruining human interaction. (The phone, not the beer-drinking lad.)

dataflow

Is the problem really the phone, or everything but the actual phoning capability? Mobile phones were a thing twenty years ago and I don't recall them being pulled out at the slightest gap in the conversation. I feel like the notifications and internet access caused the change, not the phone (or SMS for that matter).

hodgesrm

Think like an engineer to solve the problem. You could start by adjusting the beer-to-lad ratio and see where that gets you.

blablabla123

As a kid I had Internet access since the early 90s. Whenever there was some actual technology to see (Internet, mobile gadgets etc.) people stood there with big eyes and forgot for a moment this was the most nerdy stuff ever

null

[deleted]

relaxing

Yes, everyone wanted the internet. It was massively hyped and the uptake was widespread and rapid.

Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.

brookst

I was there. There was massive skepticism, endless jokes about internet-enabled toasters and the uselessness and undesirability of connecting everything to the internet, people bemoaning the loss of critical skills like using library card catalogs, all the same stuff we see today.

In 20 years AI will be pervasive and nobody will remember being one of the luddites.

relaxing

I was there too. You’re forgetting internet addiction, pornography, stranger danger, hacking and cybercrime, etc.

Whether the opposition was massive or not, in proportion to the enthusiasm and optimism about the globally connected information superhighway, isn’t something I can quantify, so I’ll bow out of the conversation.

watwut

Toasters in fact don't need internet, and jokes about them are entirely valid. Quite a lot of devices that don't need internet have useless internet slapped on them.

Internet of things was largely BS.

danaris

I've seen this bad take over and over again in the last few years, as a response to the public reaction to cryptocurrency, NFTs, and now generative AI.

It's bullshit.

I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.

But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)

By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.

og_kalu

>But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was,

>By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public.

It is absolutely wild how people can just ignore something staring right at them, plain as day.

ChatGPT.com is the 5th most visited site on the planet and growing. It's the fastest-growing software product ever, with over 500M weekly active users and over a billion messages per day. Just ChatGPT. This is not information that requires corporate espionage. The barest minimum of effort would have shown you how blatantly false you are.

What exactly is the difference between this and an LLM hallucination?

relaxing

US public opinion is negative on AI. It’s also negative on Google and Meta (the rest of the top 5.)

No condescension necessary.

tim333

It's annoying having AI features force-fed, I imagine, but it's come about because many of the public like some AI - apparently ChatGPT now has 800 million weekly users (https://www.digitalinformationworld.com/2025/05/chatgpt-stat...) - and then competing companies think they should try to keep up.

I say I imagine it's annoying because I've yet to actually be annoyed much, but I get the idea. I actually quite like the Google AI bit - you can always not read it if you don't want to. AI-generated content on YouTube is a bit of a mixed bag - it tends to be kinda bad, but you can click stop and play another video. My Office 2019 is gloriously out of date and does the stuff I want without the recent nonsense.

waswaswas

I hate the Google AI Overview. More of my knowledge-seeking searches than not are things that have a consequential, singular correct answer. It's hard to break the habit of reading the search AI response first, it feeling not quite right, remembering that I can't actually trust it, then skipping down to pull up a page with the actual answer. Involuntary injection of needless confusion and mental effort with every query. If I wanted a vibe-answer, I'd ask ChatGPT with my plus subscription instead of Google, because at least then I get a proper model instead of whatever junk is cheap enough for Google to auto-run on every query without a subscription.

And of course there's no way to disable it without also losing calculator, unit conversions, and other useful functionality.

timewizard

In two months they've doubled MAUs? Without an explanation of that specific outcome I don't believe it.

Also:

> As per SimilarWeb data 61.05% of ChatGPT's traffic comes from YouTube, which means from all the social media platforms YouTube viewers are the largest referral source of its user base,

That's deeply suspect.

seydor

But why are the CEOs insisting so much on AI? Because stock investors prefer to invest on anything with "AI inside". So the "AI business model" would not collapse , because it is what investors want. It is a bubble. It will be bubbly for a while, until it isn't.

PeterStuer

It is not just that. Companies that already have lots of users interacting with their platform (Microsoft, Google, Meta, Apple ...) want to capture your AI interactions to generate more training data, get insights in what you want and how you go about it, and A/B test on you. Last thing they want is someone else (Anthropic, Deepseek ...) capturing all that data on their users and improve the competition.

supersparrow

Because it can, will, and already has increased productivity in a lot of fields.

Of course it’s a bubble! Most new tech like this is until it gets to a point where the market is too saturated or has been monopolised.

IshKebab

Yeah, literally every new tech like this has everyone investing in it and trying lots of silly ideas. The web, mobile apps, cryptocurrencies: that doesn't mean they are fundamentally useless (though cryptocurrencies have yet to produce anything successful beyond Bitcoin).

I bet if you go back to the printing press, telegraph, telephone, etc. you will find people saying "it's only a bubble!".

suddenlybananas

I don't think people had the concept of a bubble at the time of a printing press.

arnaudsm

Remembering the failure of Google+, I wonder if hostilely forcing a product to your users makes it less likely to succeed.

mat_b

Google Buzz is a better example

throwawayoldie

Was it that one or Google Wave that was supposed to become the dominant form of communication within 5 years? I don't remember much about either one.

smileysteve

Google Wave is tangential to Slack, Discord, Facebook groups, and WhatsApp communities, arguably Reddit communities too...

So they may have been on to something

daishi55

ChatGPT is the 5th most-visited website on the planet and growing quickly. That's one of many popular products. I'd hardly call that unwilling. I bet only something like 8% of Instagram users say they would pay for it. Are we to take this to mean that Instagram is an unpopular product that is being forced on an unwilling public?

satyrun

My 75 year old father uses Claude instead of google now for basically any search function.

All the anti-AI people I know are in their 30s. I think there are many in this age group who got used to nothing changing and are wishing it would stay that way.

Paradigma11

A friend of mine is a 65 years old philosopher who uses it to translate ancient greek texts or generate arguments between specific philosophers.

suddenlybananas

I know plenty of anti-AI people who are older and younger than their 30s.

ruszki

Nothing changing? For people who are in their 30s? Do you mean internet, mobile phones, smart phones, Google, Facebook, Instagram, WhatsApp, Reddit were already widespread in mid 90s?

Or are they the only ones who understand that the ratio of real information/(spam+disinformation+misinformation+lies) is worse than ever? And that in the past 2 years this was thanks to AI, and to people who never check what garbage AI spews out? And they are the only ones who care not to consume that garbage? Because above 50, most people have been completely fine with it for decades now. Are you saying that below 30 most people are fine consuming garbage? I mean, seeing how many young people have started to deny the Holocaust, I can imagine it, but I would like some hard data, not just AI-level guesswork.

atemerev

In mid-90s, people who are now in their 30s were about 5 years old. Their formative age was from 2005 to 2015, and yes, things were staying relatively the same during this time.

croes

Isn’t it fascinating how all of a sudden we swap energy saving and data protection for convenience.

We won’t solve climate change, but we will have elaborate essays on why we failed.

kemotep

Would you like your Facebook feed or Twitter or even Hacker News feed inserted in between your work emails or while you are shopping for clothes on a completely different website?

If you answer no, does that make you an unwilling user of social media? It’s the most visited sites in the world after all, how could randomly injecting it into your GPS navigation system be a poor fit?

esperent

I downloaded a Quordle game on Android yesterday. It pushes you to buy a premium subscription, and you know what that gets you? AI chat inside the game.

I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.

archargelod

If I want to use ChatGPT I will go and use ChatGPT myself without a middleman. I don't need every app and website to have its own magical chat interface that is slow, undiscoverable, and makes stuff up half the time.

IshKebab

I actually quite like the AI-for-search use case. I can't load all of a company's support documents and manuals into ChatGPT easily; if they've done that for me, great!

I was searching for something on Omnissa Horizon here: https://docs.omnissa.com/

It has some kind of ChatGPT integration, and I tried it and it found the answer I was looking for straight away, after 10 minutes of googling and manual searching had failed.

Seems to be not working at the moment though :-/

nonplus

I do think Facebook and Instagram are forced on the public if they want to fully interact with their peers.

I just don't participate in discussions about Facebook marketplace links friends share, or Instagram reels my D&D groups post.

So in a sense I agree with you, forcing AI into products is similar to forcing advertising into products.

anon7000

Agreed. My mother and aunts are using ChatGPT all the time. It has really massive market penetration in a way I (a software engineer and AI skeptic/“realist”) didn’t realize. Now, do they care about meta’s AI? Idk, but they’re definitely using AI a lot

croes

It’s popular with scammers too.

I wonder how many uses of ChatGPT and the like are malicious.