
The librarian immediately attempts to sell you a vuvuzela

karel-3d

The money situation is what gives me pause with LLMs.

The amount of money being burned on this is giant; those companies will need to make so much money to have any possibility of a return. The idea is that we will all spend more money on AI than we spend on phones, and we will spend it on those companies only... I don't know, it just doesn't add up.

As a user it's a great free ride though. Maybe there IS such a thing as a free lunch after all!

autoexec

> As a user it's a great free ride though. Maybe there IS such a thing as a free lunch after all!

If you consider the massive environmental harm AI has caused and continues to cause, the people whose work has been stolen to create it, the impacts on workers and salaries, and the abuses AI enables, that free lunch starts looking a lot more expensive.

umvi

> the people whose work has been stolen to create it

"Stolen" is kind of a loaded word. It implies the content was for sale and was taken without payment. I don't think anyone would accuse a person of stealing if they purchased GRRM's books, studied the prose, and then used the knowledge they gained from studying to write a fanfic in the style of GRRM (or better yet, the final 2 books). What was stolen? "The prose style"? Seems too abstract. (Yes, I know the counter argument is "but LLMs can do it more quickly and at a much greater scale", and so forth.)

I generally want less copyright, not more. I'm imagining a dystopian future where every article on the internet has an implicit huge legal contract you enter into like "you are allowed to read this article with your eyeballs only, possibly you are also allowed to copy/paste snippets with attribution, and I suppose you are allowed to parody it, but you aren't allowed to parody it with certain kinds of computer assistance such as feeding text into an LLM and asking it to mimic my style, and..."

autoexec

AI has been trained on pirated material and that would be very different from someone buying books and reading them and learning from them. Right now it's still up to the courts what counts as infringing but at this point even Disney is accusing AI of violating their copyrights https://www.nytimes.com/2025/06/11/business/media/disney-uni...

AI outputs copyrighted material: https://www.nytimes.com/interactive/2024/01/25/business/ai-i... and they can even be ranked by the extent to which they do it: https://aibusiness.com/responsible-ai/openai-s-gpt-4-is-the-...

AI is getting better at data laundering and hiding evidence of infringement, but ultimately it's collecting and regurgitating copyrighted content.

quantified

Stolen doesn't imply anything is for sale, does it? Most things that are stolen are not for sale.

strangattractor

I think there is a case to be made that AI companies are taking the content, providing people with a modified version of it, and not necessarily providing references to the original material.

Much of the content that people create is created to generate revenue. They are denied that revenue when people don't go to their site. One might interpret that as theft. In the case of GRRM's books, I would assume they were purchased and the author received the revenue from the sale.

GuinansEyebrows

> It implies the content was for sale and was taken without payment

that's literally what happened in innumerable individual cases, though.

skywhopper

Yes, there are ethical differences to an individual doing things by hand, and a corporation funded by billions of investor dollars doing an automated version of that thing at many orders of magnitude in scale.

Also, LLMs don’t just imitate style, they can be made to reproduce certain content near-verbatim in a way that would be a copyright violation if done by a human being.

You can excuse it away if you want with reductio ad absurdum arguments, but the impact is distinctly different, and it calls for different parameters.

JimDabell

> Using ChatGPT is not bad for the environment

https://andymasley.substack.com/p/individual-ai-use-is-not-b...

> What’s the carbon footprint of using ChatGPT?

https://www.sustainabilitybynumbers.com/p/carbon-footprint-c...

kulahan

The silver lining on this very dark cloud is that it seems to have renewed interest in nuclear power, though that was inevitable with the coming climate crisis I suppose.

yencabulator

At a time when solar & batteries were just getting great, nah.

jay_kyburz

Haha, I would have thought the reckless cuts of DOGE or willingness of the current US administration to rely on AI for decision making would have driven home exactly why governments can't be trusted to manage nuclear.

It's just too dangerous to leave it in the hands of people who don't believe in science, and value money, power, and ideology more than anything else.

It's happening now, and there is nothing to stop it from happening again in the future.

baggy_trough

What is this massive environmental harm? That sounds like hyperbole.

ksenzee

They’re restarting coal-fired power plants to run AI datacenters. I don’t know what your personal threshold is for “massive” environmental harm, but that meets mine.

gknoy

Training AI models uses a large amount of energy (according to what I've read / headlines I've seen / etc.) and increases water usage. [0] I don't have a lot to offer as proof, merely that this is an idea I have encountered often enough that I was surprised you hadn't heard of it. I did a very cursory bit of googling, so the quality + dodginess distribution is a bit wild, but there appear to be industry reports [2, page 20] that support this:

""" [G]lobal data centre electricity use reached 415 TWh in 2024, or 1.5 per cent of global electricity consumption.... While these figures include all types of data centres, the growing subset of data centres focused on AI are particularly energy intensive. AI-focused data centres can consume as much electricity as aluminium smelters but are more geographically concentrated. The rapid expansion of AI is driving a significant surge in global electricity demand, posing new challenges for sustainability. Data centre electricity consumption has been growing at 12 per cent per year since 2017, outpacing total electricity consumption by a factor of four. """

The numbers are about data center power use in total, but AI seems to be one of the bigger driving forces behind that growth, so it seems plausible that there is some harm.
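For what it's worth, the quoted growth rate compounds quickly. A back-of-the-envelope sketch (the 2030 and 2017 figures below are my own extrapolations from the quoted numbers, not figures from the report):

```python
# Rough projection of global data-centre electricity use from the figures
# quoted above: 415 TWh in 2024, growing ~12% per year since 2017.

base_twh = 415.0      # quoted global data-centre use in 2024 (TWh)
growth = 0.12         # quoted ~12% annual growth

# Extrapolating forward six years at the same rate:
use_2030 = base_twh * (1 + growth) ** 6
print(f"Extrapolated 2030 use: {use_2030:.0f} TWh")   # roughly double 2024

# Back-projecting to 2017 as a sanity check on the quoted growth rate:
use_2017 = base_twh / (1 + growth) ** 7
print(f"Implied 2017 use: {use_2017:.0f} TWh")
```

So even without AI-specific numbers, 12% compounding means data-centre demand roughly doubles every six years, which is why the "outpacing total electricity consumption by a factor of four" line matters.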

0: https://news.mit.edu/2025/explained-generative-ai-environmen...

1: https://www.itu.int/en/mediacentre/Pages/PR-2025-06-05-green...

2: (cf. page 20) https://www.itu.int/en/ITU-D/Environment/Pages/Publications/...

snickell

What scares me is that the obvious pool of money to fund the deficit in the cost of operating LLMs comes from the most subtle native advertising imaginable. Can you resist ads where, say, AirBnB pays OpenAI privately to “dope” the o3 hyperspace such that AirBnB is moved imperceptibly closer to tokens like “value” and “authentic”?

How much would AirBnB pay for the intelligence everyone gets all their info from having a subtle bias like this? Sliiightly more likely to assume folks will stay in airbnbs vs a hotel when they travel, sliiightly more likely to describe the world in these terms.

How much would companies pay to directly, methodically and indetectably bias “everyone’s most frequent conversant” toward them?

john-h-k

> Can you resist ads where, say, AirBnB pays OpenAI privately to “dope” the o3 hyperspace such that AirBnB is moved imperceptibly closer to tokens like “value” and “authentic”?

This would be a very impressive technical feat

JimDabell

Anthropic demoed something similar with Golden Gate Claude a year ago:

https://www.anthropic.com/news/golden-gate-claude
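The underlying trick there is activation steering. A toy sketch of the idea (the vectors here are fabricated; Anthropic extracted theirs from sparse-autoencoder features, which this does not attempt):

```python
import numpy as np

# Toy sketch of activation steering in the spirit of Golden Gate Claude:
# add a fixed "concept direction" to a model's hidden state so that
# generations drift toward that concept.

rng = np.random.default_rng(0)
d_model = 16                               # toy dimension

hidden = rng.normal(size=d_model)          # stand-in for a residual-stream activation
concept = rng.normal(size=d_model)
concept /= np.linalg.norm(concept)         # unit "bridge/brand" direction

def steer(h, direction, strength=3.0):
    """Nudge an activation toward a concept direction."""
    return h + strength * direction

steered = steer(hidden, concept)

# The steered activation is measurably more aligned with the concept:
print(f"alignment before: {hidden @ concept:.2f}")
print(f"alignment after:  {steered @ concept:.2f}")
```

The point of the AirBnB scenario upthread is that exactly this kind of nudge, applied with a small enough `strength`, would be very hard to detect from the outside.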

sheiyei

<AIs are much better at responding to my intent, and they rarely attempt to sell me anything> YET.

Llamamoe

It's very possible that they never will, and that instead the advertising will be so subtle nobody will be able to detect it: including phrases similar to what products, brands, and their actual ads use in positive contexts; sentences that don't mention products but make you think of them; being just slightly more likely to bring a brand up than its competitor, and a tiny bit more critical of the competitor; etc.

The goal isn't to have an ad->purchase, the goal is to make sure the purchase is more likely in the long term.

karaterobot

I agree they'd love to do that in theory, and it seems technically feasible. What gives me hope on that front is that marketers and advertisers (let alone the companies that pay them) have never shown the slightest capacity for that level of subtlety. The most sophisticated adtech today, produced by networks of massive data collection and analysis, ultimately just tries to shove as many loud, disruptive ads in your face as possible.

I think if you had this incredible technology that could manipulate language to nudge readers in the softest possible way toward thinking a little bit more about buying some product, so that in aggregate you'd increase sales in a measurable way that nobody would ever notice, it would quickly devolve into companies demanding the phrase "BUY MORE REYNOLDS GARBAGE BAGS!!!!!!!!" appear at least 7 times.

layer8

I’m pretty sure it would be measurable. How else would advertisers pay for it? And given that advertisers would know about it, it would also be generally known. I wager that enough people and businesses would reject it, if it isn’t outright illegal in the first place.

joegibbs

I think subliminal advertising is banned in quite a few countries - not sure about the US - so it might be a problem internationally. I know that here in Australia there was a big scare about it in the mid 2000s, some station was cutting 100ms ads into shows. Not sure about the efficacy of it though, I’m sure it would be better if you watched a whole ad.

usefulcat

> It's very possible that they never will, that instead the advertising will be so subtle nobody will be able to detect it.

I was going to write a rebuttal to this, about how more subtle forms of advertising are likely not very effective, and then I remembered subliminal advertising.

It's largely been banned (I think), but probably only because it's relatively easy to define and very easy to identify. In the case of LLMs, defining what they shouldn't be allowed to do "subliminally" will be a lot harder, and identifying it could be all but impossible without inside knowledge.

sandy_coyote

This is credible simply because this is how advertising works. Product placement, free products for celebs, modern life awash in images that make us desire things.

Lu2025

This gave me the creeps. Modern tech is good at opening up new dimensions of dystopian hell.

yencabulator

That still sounds like AIs attempting to sell you something, to me.

ToucanLoucan

> It's very possible that they never will

Oh come on.

Genuinely.

Come on.

Look at every single tech innovation of the last 20 years and say that again.

pcthrowaway

Remember, Google also didn't have ads interspersed with their search results for over 1̶2̶ 2 years

Cthulhu_

And they were actually praised when they did start doing ads, because the ads weren't as obtrusive as the existing heavy-duty in-your-face Flash animations and they were relevant to the user.

It quickly turned Google into the biggest / most valuable internet company of all time ever, and it still wasn't enough for them.

I've had adblockers running for as long as I can remember, so I'm blissfully unaware of how bad it is now... mostly. I don't have adblockers on my phone, and some pages are unusable.

jkaptur

Google was founded in 1998 and you could buy ads on the search results page in 2000. https://googlepress.blogspot.com/2000/10/google-launches-sel...

kristianc

That wasn’t out of benevolence, that’s because they hadn’t discovered the ads business model yet. The genie is well and truly out of the bottle now.

d_phase

You obviously haven't been A/B tested yet. I got very obvious advertisements in response to a super simple question I asked ChatGPT last week. The question was "When was the last year it was really smoky in Canada?" It answered in one paragraph, then gave me about six paragraphs of ads for air purifiers, masks, etc.

I'd guess we're only 6-12 months out from a full advertisement takeover.

bandoti

I think it’s time folks dust off their library cards :)

Or support an open source AI model.

I stopped using ChatGPT when it started littering my conversation with emojis. It acts like one of those overzealous kids on Barney.

sheiyei

It was a quote, which I failed to format in this app I use.

Marazan

The simpler explanation is that ChatGPT is trained on webpages that have been SEO'd to death.

So you are just getting SEO'd pages (i.e ads) regurgitated to you.

jameshart

The race right now is to get your product embedded as highly recommendable in the training data sets the AIs are learning from.

jiveturkey

I would wager that the most prevalent use of AI today is to sell you ads. Whether through market analysis, campaign analysis, content optimization, and content generation.

reverendsteveii

Thank you. The idea that this will be the one thing that doesn't get enshittified, when it's being so heavily pushed by the people who enshittified everything else, is frankly absurd.

marcosdumay

Yes, the question everybody awake is asking is how long until all the LLM corporate initiatives die? Because as useful as those things can be, they just can't do enough to justify that cost.

But there are free (to copy) ones, and smaller ones. And while those were built from the large, expensive models, it's not clear if people won't find a way to keep them sustainable. We have at minimum gained a huge body of knowledge on "how to talk like people" that will stay there forever for researchers to use.

troyvit

> We have at minimum gained a huge body of knowledge on "how to talk like people" that will stay there forever for researchers to use.

This is spot on. I think we'll be able to capitalize on other talents of "AI" once we recognize the big shift is done happening. It's like five years after the Louisiana Purchase: we have a bunch of new resources but we've barely catalogued them, let alone begun to exploit them.

> how long until all the LLM corporate initiatives die?

Sooner than I personally thought, and I place a lot of that with Apple. They've led the way in hardware that supports LLMs, and I believe (hope?) they'll eventually wipe out most hosted chat-based products, leaving the corporate players to build APIs and embedded products for search, tech support, images, etc. The massive amounts of capital going into OpenAI, Anthropic, etc., will ebb as consumer demand falls.

I hope for this because the question I keep asking is, how can our energy infrastructure sustain the huge demand AI companies have without pushing us even further into a climate catastrophe?

chasd00

> This is spot on. I think we'll be able to capitalize on other talents of "AI" once we recognize the big shift is done happening. It's like five years after the Louisiana Purchase: we have a bunch of new resources but we've barely catalogued them, let alone begun to exploit them.

One thing about LLMs used as a replacement for search is that they have to be continually retrained or else they become stale. Let's say a hard recession hits and all the AI companies go out of business, but we're left with all these models on Hugging Face that can still be used. Then a new programming language hits the scene and it's a massive hit; how will LLMs be able to autocomplete and add dependencies for a language they've never seen before? Maybe an analogy would be asking an LLM to translate a written language you make up on the spot into English or another language.

crazygringo

> The amount of money it's burned on this is giant

It's big, but it's honestly not that big. Most importantly, costs will quickly come down as we realize the limits of the models, the algorithms are optimized, and even more dedicated hardware is built. There's no reason to think it isn't sustainable; it will add up just fine.

But yes, it will attract a ton of advertising, the same curve every service goes through, like Google Search, YouTube, Amazon, etc. Still, just like Google and Amazon (subtly) label sponsored results, I expect LLMs to do the same. I don't think ads will be built into the main replies, because people would quickly lose trust in the results. Rather, they'll be fed into a separate prompt that runs alongside the main text, or interrupts it, the way ads currently do, with little labels indicating paid content. But the ads will likely be LLM-generated.

exceptione

  > I don't think ads will be built into the main replies, 
  > because people will quickly lose trust in the results. 

The 'best' ads will be those the public doesn't recognize. Surf the internet without an ad blocker and you will die from a heart attack. This is a matter of conditioning users. It will take some time. Case in point: people already give up on privacy because "Google knows about everything already", which reflects a normalization of abuse, as we started from trust and norms ("don't be evil").

So, can they? yes. Will they? yes.

Ferret7446

Actually, I feel like most of the money will come from enterprises. Every company will need an LLM subscription to stay competitive. I think it's possible that consumers will get a free ride with a small amount of quota without ads.

yencabulator

Product placement in movies and such has been a thing for a long time now. At best you can hope that your prompts will be classified as factual-vs-entertainment, and the product placement will only happen in the entertainment ones.

ToucanLoucan

> the same curve every service goes through

This is honestly why I struggle to get excited for anything in our industry anymore. Whatever it is it just becomes yet another fucking vector for ad people to shove yet more disposable shit in front of me and jingle it like car keys to see if I'll pull out a credit card.

The exception being the Steam Deck, though one could argue it's just a massive loss-leader for Steam itself and thus game sales (though I don't think that would hold up to scrutiny, it's pretty costly and it's not like Valve was hurting for business but anyway) but yeah. LLMs will absolutely do the exact same, and Google's now fully given up on making search even decent, replacing it with shit AI nobody asked for that will do product placements any day now, I would bet a LOT of money on it.

jonplackett

They’re all investing with the assumption they can be the ‘winner’ and take all the spoils.

Maybe nvidia can be a winner selling shovels but it seems like everyone else will just be fighting each other in the massive pit they dug.

philipwhiuk

NVIDIA is already a winner selling shovels.

They don't need a winner, they want the race to continue as long as possible.

jraph

> Maybe there IS such a thing as a free lunch after all!

A free lunch that costs our environment though, which is a big caveat :-)

tartoran

The free lunch creates a lot of dependency on AI so when the lunch isn't free anymore it will bite hard.

Ferret7446

Same as all technology. You missed the deadline to file that complaint by more than a couple hundred thousand years.

returningfory2

The idea that LLMs are uniquely bad for the environment has been debunked. https://andymasley.substack.com/p/individual-ai-use-is-not-b...

jraph

I've already seen this.

I'm not convinced. This article focuses on individual use and how inconsequential it is, but it seems to me that it dismisses the training part, which it does mention, a bit too quickly for my taste.

> it’s a one-time cost

No, it's not. AI companies constantly train new models, and that's where the billions of dollars they get go. It's only logical: they try to keep improving. What's more, the day you stop training new models, the existing models will "rot": they will keep working, but on old data; they won't be fresh anymore. The training will continue, constantly.

An awful quantity of hardware and resources are being monopolized where they could be allocated to something worthier, or just not allocated at all.

> Individuals using LLMs like ChatGPT, Claude, and Gemini collectively only account for about 3% of AI’s total energy use after amortizing the cost of training.

Yeah, we agree, running queries is comparatively cheap (still 10 times more than a regular search query though, if I'm to believe this article (and I have no reason not to)) after amortizing the cost of training. But there's no after, as we've seen.

As long as these companies are burning billions of dollars, they are burning some correlated amount of CO2.

As an individual, I don't want to signal to these companies, through my use of their LLMs, that they should keep going like this.

And as AI is more and more pervasive, we are going to start relying on it very hard, and we are also going to train models on everything, everywhere (chat messages, (video) calls, etc). The training is far from being a one shot activity and it's only going to keep increasing as long as there are rich believers willing to throw shit-tons of money into this.

Now, assuming these AIs do a good job of providing accurate answers that you don't have to spend more time proofreading / double-checking (which I'm not sure they always do), we are unfortunately not replacing the time we save with nothing. We are still in a growth economy; the freed-up time will be used to produce even more garbage, at an even faster rate.

(I don't like that last argument very much though; I'm not for keeping people busy at inefficient tasks just because, but this unfortunately needs to be taken into account. And that's, as a software developer, a harsh reality that also applies to my day-to-day job. My job is essentially to automate tasks for people so they can have more free time, because computers can now do a bit more of their work. But as a species, we haven't increased our free time. We've just made it more fast-paced and stressful.)

The article also mentions that there are other things to look into to improve things related to climate change, but the argument goes both ways: fighting against power-hungry LLMs doesn't prevent you from addressing other causes.

scrollaway

Maybe.

But to be honest, optimizing monstrously slow processes that cost weeks of human labour by automating them, that saves a ton of energy as well. It’s not zero sum, as the humans spend that energy elsewhere, but ideally they spend it on more productive things.

This calculus can very quickly offset whatever energy is wasted generating cartoon images of vuvuzelas.

jraph

> optimizing monstrously slow processes that cost weeks of human labour by automating them, that saves a ton of energy as well

Yes, I do agree with this. However, that's only good as long as there wasn't a better way of optimizing them, assuming we wouldn't be better off getting rid of those costly processes altogether.

> ideally they spend it on more productive things

Same gotcha as mentioned in my other comment: "productive" in our growth economy often means "damaging to the environment", because we are collectively spending a lot of our time producing garbage and that's not something we should really optimize. Most of us work a fixed amount of hours so it's not like we are doing ourselves any favor by optimizing time in the end.

In another system, I wouldn't say. I'm generally for freeing up time for us so we can have better lives.

sheiyei

Humans will have the privilege of using that time to take out the trash and be taxi drivers for drug users

Xss3

AI is in its early YouTube phase. Everyone loves that it's free and ad-free and that its algorithm's primary purpose is to serve relevant content, not profitable content; everyone knows it can't stay that way, and we are all waiting for the enshittification to kick in on the march to profitability.

The question is, will AI chat or search ever be profitable? What enshittification will happen on that road? Will AIs be interrupting conversations to espouse their love of nordvpn or raid shadow legends?

yuck39

Many of the traditional SEO players are now figuring out how to game the system to get their customers to show up more frequently in LLM responses.

Once the pressure to turn a profit is high enough the big players surely won't just leave that money on the table.

The scary part is that even if we end up paying for "ad-free" LLM services how do we really know if it is ad-free? Traditional services are (usually) pretty clear on what is an ad and what isn't. I wouldn't necessarily know if raid shadow legends really is the greatest game of all time or if the model had been tuned to say that it is.

tartoran

I'm aware this is going to happen. But don't you think offline solutions will be more prevalent by the time OpenAI jacks up the costs? These companies have no real moats unless they start doing something social so they have a network of captive audience or something like that.

willvarfar

My dream solution:

The EU creates an institution for public knowledge, a kind of library+tech solution. It probably funds classic libraries in member countries, but it also invests in tech. It dovetails nicely into a big push to get science to thrive in the EU etc.

The tech part makes an in-the-public-interest search engine and AI.

The techies are incentivised to try and whack-a-mole the classic SEO. E.g. they might spot pages that regurgitate, they might downscore sites that are ad-driven, they might upscore obvious sources of truth for things like government, they might downscore pages whose content changes too much etc.

And the AI part doesn't sell product placement.

This would bring in a golden age of enlightenment, perhaps for - say - 20 years or so, before the inevitable erosion of base mission.

And all the strong data science types would want to work for it!

arn3n

God, I’d love to work for something like this.

The closest equivalent thing we have today is (in my mind) places like the Apache Foundation or LetsEncrypt, places that run huge chunks of open source software or critical internet structure. An “Apache for search” would be great.

tokai

No, the closest equivalent are the various national libraries.

eps

I have a friend who worked for the Apache Foundation. From what he described, it was a bureaucracy nightmare and advanced office politics in equal measure. He left because of that.

graemep

> The tech part makes a in-the-public-interest search engine and AI.

Which will be provided by a private sector contractor and it goes to the lowest bidder who offsets their costs with advertising.

willvarfar

hey! it's my dream, and in my dream world it would be commissioned from academia :)

beAbU

I hope the academia of your dreams create code that is not like real-world academia :)

ranyume

You make it sound like the techies will be their own boss. Sorry, but politicians are in charge.

svnt

The democratically-elected politicians are in charge of creating an environment where capitalists can capitalist without consuming everything in the process.

This is pretty much the best arrangement we have come up with so far in human civilization. You seem to be suggesting a tragedy of the commons is instead the ideal we should strive for.

SoftTalker

Sounds like Wikipedia, except for the EU ownership.

renewiltord

I have to say the best part about this fanfic is the choice of hero. It's like having Karl Marx be the protagonist of Atlas Shrugged. Highly entertaining.

boothby

> These days, I find that I am using multiple search engines and often resort to using an LLM to help me find content.

For a few months, I've been wondering: how long until advertisers get their grubby meathooks into the training data? It's trivial to add prompts encouraging product placement, but I would be completely shocked if the big players don't sell out within a year or two, and start biasing the models themselves in this way, if they haven't already.

reubenmorais

Google has been working on auctioning token-level influence during LLM generation for years now: https://research.google/blog/mechanism-design-for-large-lang...
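A cartoon of the mechanism described in that post: several advertisers each have a preferred output distribution, and the served distribution blends them in proportion to their bids. Everything below (the advertiser names, numbers, and the simple linear blend) is made up for illustration, not Google's actual aggregation rule:

```python
import numpy as np

# Toy next-token auction: blend advertiser-preferred distributions into the
# model's own distribution, weighted by bid share. All values are fabricated.

vocab = ["hotel", "airbnb", "hostel"]
base = np.array([0.5, 0.3, 0.2])                   # model's own next-token distribution

advertiser_prefs = {
    "AirBnB":  np.array([0.1, 0.8, 0.1]),
    "HotelCo": np.array([0.8, 0.1, 0.1]),
}
bids = {"AirBnB": 3.0, "HotelCo": 1.0}

total = sum(bids.values())
blended = base.copy()
for name, pref in advertiser_prefs.items():
    # Mix in each advertiser's preferred distribution, weighted by bid share.
    blended = blended + (bids[name] / total) * (pref - base)

blended /= blended.sum()                           # renormalize (already sums to 1 here)
print(dict(zip(vocab, blended.round(3))))
```

With a 3:1 bid ratio, "airbnb" goes from a 0.3 probability to the most likely token, without any visible ad unit — which is exactly the detectability worry raised upthread.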

willvarfar

And just over a year ago now the OpenAI "preferred publisher program" pitch deck to investors leaked. https://news.ycombinator.com/item?id=40310228

sph

Google: ruining their core product for that sweet ad money.

junga

Ads are Google's core product, aren't they?

Parae

Their core product is software meant to make sweet ad money.

NoMoreNicksLeft

Google's core product has always been advertisement. They sell advertisements to companies looking to advertise, and they bring in tens of billions in revenue from that business. In effect, their core product is you: they're selling your eyeballs.

If the bait that they used to bring you to them so they could sell your eyeballs has finally started to rot and stink, then why do people continue to be attracted by it? You claim they've ruined their core product, but it still works as intended, never mind that you've confused what their products actually are.

gofreddygo

> how long until advertisers get their grubby meathooks into the training data

You're so right. It's not an if anymore, but a when. And when it happens, you won't know what's an ad and what isn't.

In recent years I started noticing a correlation between alcohol consumption and movies. I couldn't help but notice how many of the movies I've seen in the past few years promote alcohol and try to associate it with good times. How many of these are paid promotions? I don't know.

And now, after noticing this, every movie that involves alcohol has become distasteful to me, mostly because it glosses over the negative side of alcohol consumption.

I can see how ads in an LLM can go the same route, deeply embedded in the content and indistinguishable from everything else.

HSO

Ha, now try cigarettes/smoking! At least low-level alcohol consumption is only detrimental to the drinker. Cigarettes start poisoning the air from the moment they are lit, and like noise pollution there is no boundary. I hate them and their smokers with a vengeance, and the foreign satanic cabal that is "Hollywood" sold everyone out for their golden calf of tobacco money.

WesolyKubeczek

But a drunkard might sit behind the wheel, at which point it becomes detrimental to everyone on the road…

And there are countless books and movies where the hero has drinks, or routinely swigs some whisky-grade stuff from a flask on his belt to calm his nerves, then drives.

Lu2025

Right? A woman comes home and immediately pours herself a large glass of red wine, without even washing hands or changing into home clothes. WHO DOES THAT? Pure product placement.

null

[deleted]

suddenlybananas

I think that your negative view of alcohol is making you a bit conspiratorial. It's an extremely deeply ingrained thing in western culture, you don't need to resort to product placement to explain why filmmakers depict it. People genuinely do have a good time drinking.

aleph_minus_one

> People genuinely do have a good time drinking.

This depends a lot on the person. I, for example, would much more associate "reading scientific textbooks/papers" with having a good time. :-D

immibis

It's that way because of successful marketing - just like smoking, or cars, or fast food.

dhosek

I kind of look forward to freshman composition essays “written” with AI that are rife with appeals to use online casinos.

huskyr

Can't wait for all school essays promoting dubious crypto schemes of some sort.

nperez

I'm not going to disagree because greed knows no bounds, but that could be RIP for the enthusiast crowd's proprietary LLM use. We may not have cheap local open models that beat the SOTA, but is it possible to beat an ad-poisoned SOTA model on a consumer laptop? Maybe.

rolandog

If future LLM business models mimic existing ones, 80% of the prompt will be spent suppressing ad recommendations, and the agent will reluctantly comply while suggesting that it's malicious to ask for that.

I'm really looking forward to something like a GNU GPT that tries to be as factual, unbiased, libre and open-source as possible (possibly built/trained with Guix OS so we can ensure byte-for-byte reproducibility).

rusk

On the flip side, there could be a cottage industry churning out models of various strains and purities.

This will distress the big players, who want an open field to make money from their own adulterated, inferior product, so home-grown LLMs will probably end up being outlawed or something.

otabdeveloper4

Yes, the future is in making a plethora of hyper-specialized LLMs, not a sci-fi assistant monopoly.

E.g., I'm sure people will pay for an LLM that plays Magic the Gathering well. They don't need it to know about German poetry or Pokemon trivia.

This could probably be done as LoRAs on top of existing generalist open-weight models. Envision running this locally and having hundreds of LLM "plugins", a la phone apps.

jedbrooke

not quite ads in LLMS, but I had an interesting experience with google maps the other day. the directions voice said "in 100 feet, turn left at the <Big Fast Food Chain>". Normally it would say "at the traffic light" or similar. And this wasn't some easy to miss hidden street, it was just a normal intersection. I can only hope they aren't changing the routes yet to make you drive by the highest bidder

jerf

I've had this done at a sufficient variety of different places that I don't think it's advertising.

I'm also not particularly convinced any advertisers would pay for "Hey, we're going to direct people to just drive by your establishment, in a context where they have other goals very front-and-center on their mind. We're not going to tell them about the menu or any specials or let you give any custom messages, just tell them to drive by." Advertisers would want more than just an ambient mentioning of their existence for money.

There's at least two major classes of people, which are, people who take and give directions by road names, and people who take and give directions by landmarks. In cities, landmarks are also going to generally be buildings that have businesses in them. Before the GPS era, when I had to give directions to things like my high school grad party to people who may never have been to the location it was being held in, I would always give directions in both styles, because whichever style may be dominant for you, it doesn't hurt to have the other style available to double-check the directions, especially in an era where they are non-interactive.

(Every one of us Ye Olde Fogeys have memories of trying to navigate by directions given by someone too familiar with how to get to the target location, that left out entire turns, or got street names wrong, or told you to "turn right" on to a 5-way intersection that had two rights, or told you to turn on to a road whose sign was completely obscured by trees, and all sorts of other such fun. With GPS-based directions I still occasionally make wrong turns but it's just not the same when the directions immediately update with a new route.)

jedbrooke

Landmark-based directions rather than street names does seem like a plausible explanation. I still have some childhood friends whose street addresses I don't know, but I know how to get to their houses.

I still prefer street names since those tend to be well signed (in my area anyway) and tend not to change, whereas the business on the corner might be different a few years from now.

dizhn

I am still waiting for navigation software to divert your route to make sure you see that establishment. From your experience, it seems like we're close to that reality now.

collingreen

This is devilish. I'm adding your idea to my torment nexus list.

jedbrooke

oof, I’m not sure if I’m proud or ashamed of having an idea in the “torment nexus”. I believe I heard of the idea in some of the discussion surrounding a patent from an automaker to use microphones in the car for a data source for targeted ads. Combine that with self driving cars and you could have a car that takes a sliiight detour to look at “points of interest”

isoprophlex

"Continue driving on Thisandthat Avenue, and admire the happy, handsome people you see on your right, shopping at Vuvuzelas'R'Us, your place for anything airhorn!"

carlosjobim

Most users want the best directions possible from their maps app, and that includes easily recognizable landmarks, such as fast food restaurants.

"Turn left at McDonalds" is what a normal person would say if you asked for directions in a town you don't know. Or they could say "Turn left at McFritzberger street", but what use would that be for you?

Although I've had Google Maps say "Turn right after the pharmacy", and there's three drug stores in the intersection...

shwouchk

This is already happening in full force. SOTA models are already poisoned; leading providers already push their own products inside web-chat system prompts.

J_McQuade

"here is how to to translate this query from T-SQL to PL-SQL... ..."

"... but if you used our VC's latest beau, BozoDB, it could be written like THIS! ... ..."

9 months, max. I give it 9 months.

mike_ivanov

"T-SQL to PL-SQL" -> (implies an > 40 age, most likely being an Ask TOM citizen, a consultant with >> 100K annual income, most likely conservative, maybe family with kids, prone to anxiety/depression, etc) -> This WORRY FREE PEACE OF MIND magic pill takes America by storm, grab yours before it's too late!

Lu2025

> advertisers

This kind of ad is also impossible to filter. Everyone complains about ads on YouTube or Reddit, but I never see any with my ad blockers. Now we won't be able to squash them.

elif

Completely agree with this post; I don't even think he's exaggerating.

I tried to search the full name of a specific roof company in my area in quotes, and they weren't in the first page of results. But I got so many disclosed and not disclosed ads for OTHER contractors.

SEO has turned search engines into a kind of quasi-mafia "protection" racket.. "oh you didn't pay your protection fee, wouldn't it be a shame if something happened to your storefront?"

WiggleGuy

I built a portal that makes it easier to query against multiple different search engines (https://allsear.ch/). It's open source, free, all that. I must say, building it really expanded my view of the internet.

I am also a heavy Kagi and Reddit user for search, and usually that's enough. But when it's not, it's concerning how much better other search engines can be, especially since non-tech-savvy folks will never use them.

ccvannorman

Using my default browser (Brave), pressing "enter" (doing a search) did not do anything. The page just sits there.

apparently, I need to make a selection of a search engine to use this.

I would not use this as a replacement for my duckduckgo or google searches simply because of the UX of not being able to type a query and press "enter" as the default.

WiggleGuy

That's fair.

You can probably hack that experience by making use of the "rules" feature. You can have certain search engines or macros launch automatically upon pressing enter, based on the content of the query. E.g., if you set a rule to check whether your search contains a vowel (which most will), it's effectively a catch-all rule.

Hacky, but it will work.

BrenBarn

This is another article that, for me, sort of walks right by the answers without realizing it. As I was reading, I was thinking "does this person really not think AI is going to be flooded with ads soon enough?" Then they asked the LLM that, and it basically said yes, and then their response... was to go "Hmmm, I wonder if that will happen"? Yes, of course it's going to happen. Imagine if this were 20 years ago, wondering whether ads would infect search engines or whether the web would be flooded with sites that are ads masquerading as actual content. Why would we believe anything different of AI? The only way it won't happen is if we decide we don't want it, instead of accepting it as inevitable.

And well, the article is ostensibly about AI, but then at the end:

> The investors aren’t just doing this to be nice. Someone is going to expect returns on this huge gamble at some point.

> ...

> The LLM providers aren’t librarians providing a public service. They’re businesses that have to find a way to earn a ridiculous amount of money for a huge number of big investors, and capitalism does not have builtin morals.

Those are the things that need to change. They have nothing to do with AI. AI is a symptom of a broken socioeconomic system that allows a small (not "huge" in the scheme of things) number of people to "gamble" and then attempt to rig the table so their gamble succeeds.

AI is a cute bunny rabbit and our runaway-inequality-based socioeconomic system is the vat of toxic waste that turned that innocent little bunny into a ravening mutant. Yes, it's bad and needs to be killed, but we'll just be overrun by a million more like it if we don't find a way to lock away that toxic waste.

itzjacki

> They have nothing to do with AI.

Not inherently, but I think LLM services (and maybe other AI based stuff) are corruptible in a much more dangerous way than the things our socioeconomic system has corrupted so far.

Having companies pay to end up on the top of the search engine pile is one thing, but being able to weave commerciality into what are effectively conversations between vulnerable users and an entity they trust is a whole other level of terrible.

slfnflctd

> Imagine if this was 20 years ago wondering whether ads would infect search engines or the web would be flooded with sites which are ads masquerading as actual content

Many of us - naively, in hindsight - really did hope this wouldn't happen at the scale it did, and were appalled at how many big players actively participated in speeding up the process.

I guess it's similar to how a lot of white folks thought racism was over until Obama came along and brought the bigots out of the woodwork.

> lock away that toxic waste

The jarring conclusion I keep trying to see a way around but no longer can is that the toxic waste is part of humanity. How do we get rid of it, or lock it away? One of the oldest questions our species has ever faced. Hard not to just throw up your hands and duck back into your hidey-hole once you realize this.

BrenBarn

> Many of us - naively, in hindsight - really did hope this wouldn't happen at the scale it did, and were appalled at how many big players actively participated in speeding up the process.

Sure, maybe so. But now with hindsight we can see what happened and we should realize that it's going to happen again unless we do something.

> The jarring conclusion I keep trying to see a way around but no longer can is that the toxic waste is part of humanity. How do we get rid of it, or lock it away? One of the oldest questions our species has ever faced. Hard not to just throw up your hands and duck back into your hidey-hole once you realize this.

I think both bad and good are part of humanity. In a sense this "toxic" part is not that different from the part that leads us to, say, descend into drug addiction, steal when we think no one is looking, leave a mess for other people to clean up, etc. We can do these negative things on various scales, but when we do them on a large scale we can screw one another over quite egregiously. The unique thing about humans is our ability to intentionally leverage the good aspects of our nature to hold the bad aspects in check. We've had various ways of doing this throughout history. We just need to accept that setting rules and expectations and enforcing them to prevent bad outcomes is no less "natural" for humans than giving free rein to our more harmful urges.

Peteragain

A proposal for a solution: the data is the unique selling point. Put that in public hands with an API, published algorithms, and its own development team. The free market can then sell user interfaces, filters, and whatever. The metaphor is roads (state-managed) and vehicles (for-profit). Today I can (physically) go to the British Library and get any published book, or go online and pay for the privilege.

hennell

It's an interesting article, although I think it's rather telling that the author's search for "postgres slow database" seems to disappear in the LLM section. It mentions the ads disappeared, but there's no mention of the solutions found, the time spent, or changes to how they searched/asked the question.

I've found AI helpful for answering questions, but better at plausibly answering them, I still end up checking links to verify what was said and where it's sourced from. It saves frustration but not really time.

Workaccount2

This example falls apart because libraries are paid for by taxes.

I wish I could violently shake every internet user while yelling "If you are not paying money for it, you cannot complain about it"

The librarian is selling you a vuvuzela because that is the only way the library has been able to keep the lights on. They offered a membership, but people flipped out: "Libraries are free! I never had to pay in the past! How dare you try and take my money for a free service!" They tried a "Please understand the service we provide and give a donation" approach, but less than 2% of people donated anything. Never mind that there is a backdoor you can use, allowing you to never interact with a librarian while fully utilizing the library's services (which the library still pays for).

The internet was ruined by people unwilling to pay for it. And yes, I know the internet was perfect in 1996, I have a pair of rose colored glasses too.

culebron21

Absolutely agree. Google search results especially have become generic and useless. Their YouTube search is a list of 3 generic links and then just complete junk. DuckDuckGo had been lagging behind Google for years, but around 2022 it became on par, if not superior.

pajamasam

SEO spam was still easy enough to spot and skip through in search results before the masses of LLM-generated content took over.

They seem to generate extremely specific websites and content for every conceivable search phrase. I'm not even sure what their end goal is since they aren't even always riddled with affiliate links.

Sometimes I wonder if the AI companies are generating these low-quality search results to drive us to use their LLMs instead.

Retr0id

> they aren't even always riddled with affiliate links.

Presumably the goal is to build up a positive-ish reputation, before they start trying to monetize it. Or perhaps to sell the site itself for someone else to monetize, on the basis of the number of clicks it's getting per month.

gerdesj

I am deliberately keeping away from LLMs for search. I'm old enough to remember finally ditching Altavista for the new upstart Google. I did briefly flirt with Ask Jeeves but it was not good enough.

I don't think anyone has it sorted yet. LLM search will always be flawed due to being a next token guesser - it cannot be trusted for "facts". A LLM fact is not even a considered opinion, it is simply next token guessing. LLMs certainly cannot be trusted for "current affairs" - they will always be out of date, by definition (needs training)

Modern search - Goog or Bing or whatever - seem to be somewhat confused, ad riddled and stuffed with rubbish results at the top.

I've populated a uBlacklist with some popular lists and the results of my own encounters. DDG and co are mostly useful now, for me.

myself248

I miss Altavista every day. Case-sensitive search is how you tell DOS from DoS. Putting "exact phrases" in quotes no longer seems to work. Then they added insult to injury by forcing you to +mandate a term otherwise they might just ignore it. Now that no longer works either.

I've entirely given up on Google.

I've made extensive shortcuts so I can directly search various sites straight from my location bar: wikipedia, wiktionary, urbandictionary, genius, imdb, onelook, knowyourmeme, and about two dozen suppliers/distributors/retailers where I regularly shop.

If I need something that's not on that list, I'll try some search engines but I start with the assumption that I'm not going to find it, because the battle for search is lost.

SoftTalker

> I've entirely given up on Google.

I have used Google very little for about 3 years now. Sometimes when DDG fails to find what I'm looking for I'll try Google. It rarely works better.

spauldo

It's really strange, while I agree Google's results aren't as good as they used to be, they're still miles ahead of DDG for me. Is it because I still use keyword search like it's the early 2000s?

I tried to switch to DDG because Google was blocking Hurricane Electric IPv6 tunnels. DDG is still my homepage but I usually end up clicking the bookmark I made for ipv4.google.com. I wish I knew why DDG works for all you people but it's horrible for me.

jononor

Does your Google actually respect the keywords? For me, most of the time it replaces words with "synonyms" (mostly wrong context, or not really replaceable). And the results are pretty crap as a result: not what I was looking for, but just much more common/generic stuff.

cheschire

Isn’t DDG basically Bing with a privacy layer?

sgarland

Altavista was the OG. I remember it being cantankerous and requiring you to be specific in how you searched, but if you knew how to use it, it was unmatched. Until Google.

devilbunny

It was fast, which almost nothing else was at the time.

And if people on dialup connections think you’re slow, it’s because you are.

ars

When Google came out it was way better than Altavista, people switched instantly. Specifically Altavista looked at how often a search term was in the result, which wasn't always a helpful thing. Google also noticed if search terms were near each other in a page which was really helpful, otherwise you would get forums with one search term in one message, and the other far away in an unrelated message. Google fixed that.

The web has changed these days, it's an adversarial system now, where web results are aggressively bad and constantly trying to trick you. Google is much harder to implement now.

Lu2025

Correct. Google became unusable around 2020. I search Wikipedia directly and rely on Duck for other needs. As rudimentary as it is for less common languages such as Ukrainian, DDG is still better than Google. Shame on them.

username223

> Putting "exact phrases" in quotes no longer seems to work. Then they added insult to injury by forcing you to +mandate a term otherwise they might just ignore it. Now that no longer works either.

I don't understand why they got rid of these escape hatches. Sometimes I want the "top" pages containing precisely the text I enter -- no stemming, synonyms, etc. Maybe it shouldn't be the default, but why make it impossible?

In my ideal search world, there would also be an option to eliminate any page with a display ad or affiliate link. Sometimes I only want the pages that aren't trying to make money off of me.

rustcleaner

I have a solution: search engine which uses machine learning to score the "commercialness" of a page. By commercialness, I mean: is it a table of products with prices; does it have buy buttons; does it use a lot of tracking and analytics; does it have a cart; is there a lot of product talk (and is it overbiased positively); how are all the pages within a couple link-degrees scoring; ... (and more). Then, give users a slider which right side means no filtering, left side means basically only return universities, Wikipedia, and PBS tier results.

This has to track number of ads and trackers in a page and not just be about product pages. This measure should also fight SEO spam, as the tracking and advertising elements would cause SEO spammers to lose rank on the engine (disincentivising an arms race).

Add in the patently obvious need for the power user's second search bar, which takes set-notation statements and at least one of a few popular, powerful regex languages, and finally add cookie-stored, user-suppliable domain blacklists and whitelists (which can be downloaded as a .txt and re-uploaded later on a new browser profile if needed). I never ever want to see Experts Exchange for any reason in my results, as an immediately grasped example. Give users more control; quit automagicking everything behind a conversationally universal idiot-bar!
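A minimal sketch of the "commercialness" scorer described above. All of the patterns, weights, and the squashing constant here are made-up placeholders purely for illustration; a real ranker would learn them from labeled pages rather than hand-code them:

```python
import re

# Hypothetical signal patterns and weights (placeholders, not a trained model).
SIGNALS = {
    r"add to cart|buy now|checkout": 3.0,                 # purchase UI
    r"\$\d+(?:\.\d{2})?": 1.0,                            # visible prices
    r"googletagmanager|doubleclick|facebook\.net": 2.0,   # common trackers
    r"affiliate|utm_campaign": 1.5,                       # monetized links
}

def commercialness(html: str) -> float:
    """Return a rough 0..1 'commercialness' score for a page."""
    text = html.lower()
    raw = sum(w * len(re.findall(pat, text)) for pat, w in SIGNALS.items())
    # Squash the unbounded raw score into 0..1 so a user-facing
    # slider can threshold it (the +10 is an arbitrary softness knob).
    return raw / (raw + 10.0)

shop = '<button>Buy Now</button> $19.99 <script src="https://www.googletagmanager.com/gtag.js"></script>'
wiki = "<p>The vuvuzela is a plastic horn about 65 cm long.</p>"
print(commercialness(shop) > commercialness(wiki))  # → True: the storefront scores higher
```

The slider maps naturally onto a cutoff for this score: the far left end filters to pages scoring near zero (universities, Wikipedia, PBS-tier), the far right disables filtering entirely.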

shpx

If you ask ChatGPT 4o about a current event it will google things (do some sort of web search) and summarise the result.

vintermann

And often I have to tell it not to search, because it will just pull SEO-polluted answers from Google and launder them slightly.