
The trust collapse: Infinite AI content is awful

everdrive

Doesn't matter. We must keep building more and more technology no matter the cost. Have an idea for a business? Build it. Does your business make the lives of people worse? Doesn't matter, keep pushing. Could some new technology ruin the lives and relationships that people have? Doesn't matter, just build it. We always need more, need to do more. Every experiment is valid, every impulse must be followed. More complexity, more control, more distraction, more outrage, more engagement. Just keep building forever no matter the cost.

drakythe

Turns out the Torment Nexus was just democratizing Venture Capital's desire for infinite growth.

wartywhoa23

Yours truly,

Larry Fink and The Money Owners.

throwmeaway307

move fast and break things!

nevermind if the things are people or their lives!!

arnon

build things """"people"""" want

TheOtherHobbes

People want dopamine hits, gamification, addictive distractions, and a culture of competitive perma-hustle.

If they didn't, we wouldn't be having these problems.

The problem isn't AI, it's how marketing has eaten everything.

So everyone is always pitching, looking for a competitive advantage, "telling their story", and "building their brand."

You can't "build trust" if your primary motivation is to sell stuff to your contacts.

The SNR was already terrible long before AI arrived. All AI has done is automate an already terrible process, which has - ironically - broken it so badly that it no longer works.

ricogallo

It sounds like "The City" in "Blame!"

ChrisMarshallNY

> Will you still be here in 12 months when I’ve integrated your tool into my workflow?

This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.

AI isn't the new reason for this. It's been getting worse and worse over the last few years, as people have been selling companies, not products, but AI will accelerate the race to the bottom. One of the things AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more so).

So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.

stevetron

You can't build trust in your OS (operating system) when it spies on the entire customer base and you pass that off as telemetry. Or when you remotely target the OS to implement a radical change, and force it to be installed as an 'update'.

I stopped accepting telephone calls before 2010. They still ring the phone.

alexpotato

Yuval Noah Harari, of Sapiens fame [0], has a great quote (paraphrasing):

Interviewer: How will humans deal with the avalanche of fake information that AI could bring?

YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.

In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).

In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc

0 - https://amzn.to/4nFuG7C

pron

The problem is that too many people just don't know how to weigh different probabilities of correctness against each other. The NYT is wrong 5% of the time - I'll believe this random person I just saw on TikTok because I've never heard of them ever being wrong; I've heard many stories about doctors being wrong - I'll listen to RFK; scientific models could be wrong, so I'll bet on climate change not being real, etc.

ajaisjsbbz

Trust is much more nuanced than N% wrong. You have to consider circumstantial factors as well, i.e. who runs The NY Times, who gives them money, why they were wrong, and even if they’re not wrong, what information they are leaving out. The list goes on. No single metric can capture this effectively.

Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me, I promise!). I forget who, but a historian was asked why they wouldn’t cover civil war history, and responded with something to the effect of “there’s no way to do serious work there because it’s too political right now”.

It’s also why things like calling your opponents dumb are so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc.), but if you signal “I don’t like you”, they’re rightfully going to ignore you, because you’re signaling you’re unlikely to be trustworthy.

Trust is hard earned and easily lost.

nsxwolf

COVID ended my trust in media. I went from healthy skepticism to assuming everything is wrong/a lie. There was no accountability for this so this will never change for me. I am like the people who lived through the Great Depression not trusting banks 60 years later and keeping their money under the mattress.

wartywhoa23

> by building institutions we trust to provide accurate information

Except those institutions have long lost all credibility themselves.

lazide

This was done intentionally, over decades, to try to push ‘trust’ closer to where it can be controlled. Religion, family ties (through propaganda), etc.

myth_drannon

Unfortunately, that's not what happens. BBC, Al-Jazeera, RT, CBC are all propaganda outlets, not sources of information. The other family members will get their information from those sources, so family will not be trusted either. And for the sources I consider trustworthy, my opinion of them is most likely skewed by my own bias, and others will consider them propaganda as well.

AtlasBarfed

The New York Times.

Wall Street- and financier-centric, and biased in general. Very pro-oligarchy.

The worst was their cheerleading for the Iraq war, and taking obvious misinformation from Colin Powell at face value.

tetris11

I needed to get some builder quotes for my home. It did not enter my mind to go online to search for any.

I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.

(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)

Using the internet to buy products is not a problem for me; I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?

miloignis

When we needed some work done, we asked family and friends too, and ended up with a cowboy. When the work needed to be re-done, we looked up local reviews for contractors, and ended up with someone who was more expensive but also much more competent, and the work was done to a higher standard.

bee_rider

What I get from the article is that proving a company will stick around for a while after you’ve subscribed is hard now, because anybody can AI-generate the general vibe of the marketing department of a big established player. This seems like it’ll be devastating for companies whose business model requires signing new users up for ongoing subscriptions.

Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?

I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.

arnon

that's exactly my point - yes

you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.

gnarlouse

> 3 stanford dropouts in a trenchcoat

I'm using this

null

[deleted]

gnarlouse

We already got your money, what do we need to work for again?

Chinjut

AI-esque blog post about how infinite AI content is awful, from "a co-founder at Paid, which is the first and only monetization and billing system for AI Agents".

arnon

So I can't have opinions?

Also, this is entirely hand-written ;)

ambicapter

The authentic nature of opinions means that sometimes they suck. Maybe GP is commenting that your opinion sucks?

dkdcio

this is a funny phenomenon that I keep seeing. I think people are going through the reactionary “YoU mUsT hAvE wRiTtEn ThIs oN a CuRsEd TyPeWrItEr instead of handwriting your letter!1!!” phase

hopefully soon we move on to judging content by its quality, not by whether AI was used. banning digital advertising would also help align incentives against mass-producing slop (which has been happening since long before ChatGPT was released)

titanomachy

This didn’t seem AI-generated to me, although it follows the LinkedIn pattern of “single punchy sentence per paragraph”. LinkedIn people wrote like this long before LLMs.

I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.

arnon

You're right - it isn't

This is just how I've been writing for the last few years

realitydrift

What you’re describing is basically the Drift Principle. Once a system optimizes faster than it can preserve context, fidelity is the first thing to go. AI made the cost of content and the cost of looking credible basically zero, so everything converges into the same synthetic pattern.

That’s why we’re seeing so much semantic drift too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse: over-optimized sameness drowning out the few cues we used to rely on.

adammarples

I think this "drift principle" you're pushing is just called bias or overfitting. We've overfit to engagement in social media and missed the bigger picture; we've overfit to plausible language in LLMs and missed a lot.

piker

The observations in this article about the insane signal-to-noise ratio are valid.

huijzer

Yes nice article. Interesting point.

One small point I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years, as long as the investors want it.

arnon

I guess that's true for some but not for all. I wouldn't say that's the most common scenario

pessimizer

Nothing safer than financial scams in the West these days. Never short Herbalife.

Applejinx

I'm already seeing this. I very much fall into the category of 'delete all email offers' as I'm a small youtuber, big enough to be targeted by AI sponsor deals, so I'm just buried with it.

The last five times I've looked at something in case it was a legitimate user email it was AI promotion of someone just like in the article.

Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.

By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.

One thing about it, it's a very modern sort of dystopia!

renegat0x0

I use RSS to follow those I want to hear from. I do not follow recommendations, since my RSS reader is my window to the world.

I even follow AI slop via reddit RSS.

I control, however, what comes in.
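A minimal sketch of that kind of incoming-feed control, assuming Python with the feedparser package; the feed URL, blocked terms, and the wanted() helper are illustrative placeholders, not anything the commenter specified:

    import feedparser

    # Illustrative feed and block list; swap in your own subscriptions and terms
    FEED_URL = "https://www.reddit.com/r/programming/.rss"
    BLOCKED_TERMS = {"sponsored", "webinar", "giveaway"}

    def wanted(entry):
        # Keep an entry only if no blocked term appears in its title or summary
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        return not any(term in text for term in BLOCKED_TERMS)

    feed = feedparser.parse(FEED_URL)
    for entry in filter(wanted, feed.entries):
        print(entry.title, "-", entry.link)

The same filter can be pointed at any number of feeds, with the surviving entries handed to a local reader.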

erpigna

I'm looking for a practical and possibly open-source way to filter noise from RSS feeds; have you got any recommendations?