AI slop, suspicion, and writing back

174 comments · January 26, 2025

leni536

My gripes with AI slop:

* Insincerity. I would rather you post a disclaimer that the content is AI-generated than present it as your own words.

* Imbalance of effort. If you didn't take the effort to write to me in your own words, then you are robbing me of the effort I spend reading what your AI assistant wrote.

* We have access to the same AI assistants. Don't try to sell me your AI assistant's "insights" as your own insights. I can interact with the same AI assistant to gather the same insights.

Notice that the quality of the AI output is mostly irrelevant to these points. If you have good-quality AI output, you are still welcome to share it with me, given that you are upfront that it is AI-generated.

Cpoll

> We have access to the same AI assistants. Don't try to sell me your AI assistant's "insights" as your own insights. I can interact with the same AI assistant to gather the same insights.

With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code.

With that said, I agree with your points.

joe_the_user

I don't think AI output on some factual topic is comparable to distinct things written with IDEs.

On a given topic, I have always found that AI converges on the average talking points of that topic, and you really can't cleverly get more out of it because that's all that it "knows" (i.e., pushback gets you either variations on a theme or hallucinations). And this is logical, given that the method amounts to "produce the average expected reply".

jdietrich

Genericness is overwhelmingly a product of RLHF rather than an innate property of LLMs. A lot of manual fine-tuning has gone into ChatGPT and Gemini to make them capable of churning out homework and marketing blogs without ever saying anything offensive.

If you make requests to the Sonnet 3.5 or DeepSeek-R1 APIs and turn up the temperature a little bit, you will get radically more interesting outputs.
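For illustration, here is roughly what such a request looks like as an OpenAI-compatible chat-completion payload (DeepSeek's API follows this shape); the model name and temperature value are illustrative choices, not a verified recommendation:

```python
import json

def build_request(prompt: str, temperature: float = 1.3) -> dict:
    """Build a chat-completion payload with a raised temperature.

    A higher temperature flattens the token distribution at sampling
    time, so outputs drift further from the most "average" reply.
    Model name here is illustrative.
    """
    return {
        "model": "deepseek-reasoner",   # illustrative, check the provider's docs
        "temperature": temperature,     # default is typically 1.0
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

payload = build_request("Write a short essay on AI slop.")
body = json.dumps(payload)  # this JSON is what would be POSTed to the API
```

Sending it requires an API key and endpoint, omitted here; the point is simply that temperature is an ordinary request parameter, not a hidden setting.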

majormajor

> With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code.

This is also true if you don't have "AI" but are simply reading sources yourself.

Is AI going to help you realize you need to push back on something you wouldn't have pushed back on without it?

jdietrich

Claude makes a genuine effort to encourage the user to push back. The reason for this becomes apparent when you look at the system prompts:

"Claude is intellectually curious. It enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics."

"Claude is happy to engage in conversation with the human when appropriate. Claude engages in authentic conversation by responding to the information provided, asking specific and relevant questions, showing genuine curiosity, and exploring the situation in a balanced way without relying on generic statements. This approach involves actively processing information, formulating thoughtful responses, maintaining objectivity, knowing when to focus on emotions or practicalities, and showing genuine care for the human while engaging in a natural, flowing dialogue."

https://docs.anthropic.com/en/release-notes/system-prompts#n...

therein

> With AI, the insights (or "insights") depend on what questions you ask

Which is an interesting place to put the human. You can be fooled into thinking that your question was unique and special just because it led some black box to generate slop that looks like it has insights.

This explains why we have people proudly coming in and posting the output they got their favorite blackbox to generate.

yapyap

> With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code

Yeah no, an AI is not gonna give you a brilliant answer cause you wrote such a brilliant prompt, you just wrote a different question and got a different answer. Like if I type something into google I don’t get the same result as when you type something into google, why? cause we’re not asking the same damn questions.

lrae

While I also agree with the sentiment that it's not the same, I think it's interesting that you use "googling" as a comparison.

Googling and extracting the right information efficiently is clearly a skill, and people do use it in wildly (and often inefficient/bad) ways. That might be less of an issue with your average HN user, but in the real world, people are bad at using Google.

lupusreal

Not knowing exactly how/what the other person asked their AI is one of the reasons I downvote all AI slop, even posts disclosed as AI-generated. Asking in different ways can often generate radically different answers. Even if the prompt is disclosed, how do I know that was the real prompt? I would have to go interrogate the AI myself to see if I get something similar, as well as formulate my own prompts from different angles to see how much the answers change. And if I have to put in all that effort myself, then what is the value of the original slop post?

brookst

I empathize with the obsession (we all have some obsessive behaviors we’re not thrilled with) but I question the utility.

It feels like some kind of negative appeal to authority: if the words were touched by an AI, they are less credible, and therefore it pays to detect AI as part of a heuristic to determine quality.

But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…

IMO human content is so variable in quality that it is incumbent on readers to evaluate based on content, not provenance. Using an author’s tools, or ethnicity, or sociowhatever as a proxy for quality doesn’t seem healthy or productive at all.

tarkin2

I would rather see the errors a non-native speaker would make than wade through grammatically correct but generic, meaningless generated business speak in an attempt to extract meaning. When you sound like everyone else, you sound like you have nothing new to say; a linguistic Soviet Union: bland, dull, depressing.

I think there's a bigger point about coming across as linguistically lazy--copying and pasting text without critiquing it, akin to copying and pasting a Stack Overflow answer--which gives rise to possibly unfair intellectual assumptions.

ewoodrich

Your comment reminded me of an account I saw in a niche Reddit sub for an e-reader brand that posted gigantic 8 paragraph "reviews" or "feedback for the manufacturer" with bullet points and a summary paragraph of all the previous feedback at the end.

They always had a few useful observations, but it required wading through an entire monitor's worth of filler garbage, which completely devalued the time/benefit of reading something with such low information density.

It was sad because they clearly were very knowledgeable but their insight was ruined by prompting ChatGPT with something like "Write a detailed, well formatted formal letter addressed to Manufacturer X" that was completely unnecessary in a public forum.

mihaic

I feel the need to paraphrase the Ikea scene in Fight Club: "sentences with tiny errors and imperfections, proof that they were made by the honest, simple, hardworking people of... wherever"

Kiro

Non native speakers may not want to make errors. I want to post grammatically correct comments. This is even more true for texts that have my real name. It's not just about the receiver.

computerthings

My boundaries are absolutely only about me. Using spell check is one thing, but if you outright can't write without using an LLM prompt then no, I don't want to read it thinking a person wrote it. If that doesn't catch on, I'd sooner move to a whitelist approach or stop reading altogether than be forced to read it.

mort96

If non-native speakers (including myself, fwiw) want to post grammatically correct comments, there's a fairly straightforward solution: learn grammar and use a spell/grammar checker. Have the courage to write your own words and the decency to spare the rest of us from slop.

piker

Then either you edit the results as suggested in TFA or those comments are in fact not yours. Grammatically correct or otherwise.

vunderba

I tentatively agree - if the core idea buried within the text is unique enough then I'm not sure I care how much the text has been laundered. But that's a big IF.

bostik

Not quality. Accountability.

I work in (okay, adjacent to) finance. Any communications that are sent / made available to people outside your own organisation are subject to being interpreted as legally binding to various degrees. Provenance of any piece of text/diagram is vitally important.

Let's pair this with a real life example: Google's Gemini sales team haven't understood the above. Their splashy sales pitch for using Gemini as part of someone's workflow is that it can autogenerate document sections and slide decks. The idea of annotating sections based on whether they were written by a human or an unaccountable tool appeared entirely foreign to them.

(The irony is that Google would be particularly well placed to have such annotations. Considering the underlying data structures are CRDTs, and they already show who made any given edit, including an annotation whether the piece of content came from a human or bot should be relatively easy.)
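To make the idea concrete, here is a hypothetical sketch of such per-edit provenance annotation; the names and structures are made up for illustration and are not Google's actual CRDT format:

```python
# Hypothetical sketch: each edit records not just who made it but
# whether the content came from a human or a generation tool, so a
# document can report how much of its final text is human-authored.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edit:
    author: str
    origin: str   # "human" or "model"
    text: str

def provenance_summary(edits) -> float:
    """Fraction of characters in the final text attributed to humans."""
    human = sum(len(e.text) for e in edits if e.origin == "human")
    total = sum(len(e.text) for e in edits)
    return human / total if total else 1.0

doc = [
    Edit("alice", "human", "Quarterly results were strong. "),
    Edit("alice", "model", "Synergies unlocked unprecedented value."),
]
share = provenance_summary(doc)  # ≈ 0.44 here
```

Since collaborative editors already track who made each edit, extending that record with an origin flag is a small step, which is the commenter's point.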

dale_glass

I don't understand this argument. There is accountability: you can always blame the user or management.

Say one of my tasks is writing a document; I use an LLM and it tells people to eat rat poison.

But I'm accountable to my boss. My boss doesn't care that an LLM did it; my boss cares that I submitted something that horrible as completed work.

And if my boss lets that through then my boss is accountable to their boss.

And if my company posts that on the website, then my company is accountable to the world.

Annotations would be useful, sure. But I don't think for one minute they'd release you from any liability. Maybe they don't make it into the final PDF. Or maybe not everyone understands what they're supposed to take away from them. You post it, you'll be held responsible.

bostik

Hm, we may be using the word in slightly different tones, then. For me, accountability is more than just apportioning blame; it's also about how you got to the result you brought out.

On the other hand, I absolutely agree with this:

> And if my company posts that on the website, then my company is accountable to the world.

I take pride in having my name associated with material we post publicly. It doesn't make my employer any less involved in it, but it does mean we both put our necks out. The company figuratively, and me personally.

A4ET8a8uTh0_v2

<< my boss cares I submitted something that horrible as completed work.

Bosses come in many shapes and sizes. That said, some of the bosses I had usually wanted it all ( as in: LLM speed, human insights, easy to read format, but also good and complete artifact for auditors ). And they tended to demand it all ( think Musk ) as a way of managing, because they think it helps people work at their highest potential.

In those instances, something has got to give.

BlueTemplar

Ideally, yes; sadly, examples abound with excuses like "the machine did it" or "the machine doesn't seem to allow me to do what you are asking for, due to either my own incompetence, that of the engineers who built it, or my organization's policy, so I'm going to pretend it's impossible (even though it would be possible to do it by hand)".

yuliyp

One issue is that AI skews the costs paid by the parties of the communication. If someone wrote something and then I read it, the effort I took to read and comprehend it is probably lower than the author had to exert to create it.

On the other hand, with AI slop, the cost to read and evaluate is greater than the cost to create, meaning that my attention can be easily DoSed by bad actors.

computerthings

That would be the best-case outcome for some, and even that is a horribly bad outcome. But the vast majority of people would get DDoSed, scammed, misled by politicians and political actors, etc. The erosion of trust just from humans being intellectually dishonest and tribal is already bearing really dark fruit; covering the globe in LLM slop on top of that will predictably make it much worse.

ipdashc

The bizarre part is the first panel in the comic! I'm not sure where people get the idea that they need to fluff up their emails or publications. It exists, sure, I'm just saying I've never felt the need to do it, nor have I ever (consciously, of course) valued a piece of text more because it was more fluffy and verbose. I do have a bad habit of writing over-verbosely myself (I'm doing it now!), but it's a flaw I indulge in on my own keyboard. I use LLMs plenty often, but I've never felt the need to ask one to fluff up and sloppify my writing for me.

But I really want to know where the idea that fluffier text = better (or more professional?) comes from. We have plenty of examples of how actual high-up business people communicate, it's generally quick and concise, not paragraphs of prose.

Even from marketing/salespeople, I generally value the efficient and concise emails way more than the ones full of paragraphs. Maybe this is an effect of the LLM era, but I feel like it was true before it, too.

memhole

This is partly what led me to leave a job. Coworkers would send me their AI slop expecting me to review it. Management didn't care, as it checked the box. The deluge of information, and the ease of creating it, is what's made me far more sympathetic to regulation.

Sharlin

Which is exactly the same problem as with spam.

tdeck

> But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…

All of these could apply to those YouTube videos that have synthesized speech, but I'll bet most of us click away immediately when we find the video we opened is one of those.

vunderba

Agreed. Same reason I don't envision TTS podcasts taking off any time soon - the lack of authenticity is a real turn off.

Kiro

No, we clearly don't. They remain very popular.

JTyQZSnP3cQGa8B

> what if the writer just isn’t a native speaker of your language [...] evaluate based on content

Evaluate as in "monetize" everything; that's how we ended up with this commercialized internet. The old web was about diversity and meeting new people all over the world. I don't care about grammar mistakes; they make us human.

codetrotter

I find grammatical mistakes in non-native speakers endearing. Either when they speak English and are non-native speakers of English (I am too), or when they speak my native language and they are not native speakers of mine.

Especially when it’s apparent that it comes from how you would phrase something in the original language of the person speaking/writing.

Or as one might say: Especially when it is visible that it comes of how one would say something on mother’s language to the person that speaks or writes.

knightscoop

I think the author does cover their bases there:

> To be clear, I fault no one for augmenting their writing with LLMs. I do it. A lot now. It’s a great breaker of writers block. But I really do judge those who copy/paste directly from an LLM into a human-space text arena.

When writing in my second language, I am leaning very heavily on AI to generate plausible writing based on an outline, after which I extensively tweak things (often by adversarial discussion with ChatGPT). It scares me that someone will see it as AI slop though, especially if the original premise of my writing was flimsy...

adeon

I hope the article didn't make you feel bad and discourage you from writing. IMO what you are doing is not slop, and the author saying "I really do judge those who copy/paste directly from an LLM into a human-space text arena" is a pretty shallow judgement if taken at face value, so I'm hoping it was just some clumsy wording on their part.

---

When the AI hype started and companies started shoving it down everyone's throats, I also developed this intense reflexive negative reaction to seeing LLM text, much like the author describes in the first paragraph. So many crappy start-ups and grifters; I think I saw a lot of them because I frequented the /r/localllama subreddit and generally followed LLM-related news, so I got exposed to the crap.

Even today I still get that negative reaction from seeing obvious LLM text, but it's a much weaker reaction now than it used to be, and I'm hoping it'll go away entirely soon.

The reason I want to change: my attitude shifted when I heard a lot more use cases like the one you describe, from people who really could use the help of an LLM. Maybe you aren't good with the language. Maybe you are insecure about your own ability to write. Maybe you aren't creative or articulate and you want to communicate your message better. Maybe you have 8 children and your life is chaos, but you need to write something regularly and ChatGPT cuts that time down a lot. Maybe your fingers physically hurt, or you have a disability and can't type well. Maybe you have trouble focusing or remembering things, or dyslexia, or whatever. Maybe you are used to Google searching, think Google results are kinda shit these days, and a modern LLM is usually correct enough that it's just more practical to use. Probably way more examples I can't think of.

None of these uses are "slop" to me, but they can result in text that looks like slop to people, because it might have an easily recognizable ChatGPT-like tone. If you get judged for using AI as a helping tool (and you are not scamming/grifting/etc.), then judge them back for judging you ;)

Also, I'm not sure "slop" has an exactly agreed-upon definition. I think of it as low-effort AI garbage, basically a use of LLMs as misdirection. Basically the same as "spam", but maybe with the nuance that now it's LLM-powered. Makes you waste time. Or tries to scam or trick you. I don't have a coherent definition myself. The author has a definition near the top of the page that seems reasonable, but the rest of the article didn't feel like it actually followed the spirit of said definition (like the judging-copy/paste part).

To give the author good faith: I think maybe they wrote with an audience in mind of proficient English-speaking writers with no impediments to writing. Like assuming everyone knows how to, or can, "fix" the LLM text with their own personal touch or whatever. Not sure. I can't read their mind.

I have a hope that genuine slop continues to be recognizable: even if I get a 10000x smarter LLM right now, ChatGPT-9000, can it really do much if I, as its user, continue to ask it to make crappy SEO pages or misleading Amazon product pages? The tone of the language might get more convincing, but savvy humans should still be able to read reviews, realize an SEO page has no substance, etc., regardless of how immaculate the writing itself is.

Tl;dr: keep writing, and keep making use of AI. I hope reading that sentence didn't actually affect you.

alkonaut

False positives aren’t a big problem. There’s more content than I have time to read and my tolerance for reading anything generated is zero. So it’s better to label too much human content as generated and risk ignoring something insightful and human generated.

krisoft

> False positives aren’t a big problem.

You will think that until something you wrote with your own mind and hands is falsely accused of being AI-generated.

“Sorry alkonaut, your account has been suspended due to suspicious activity.”

“We have chatgpt too alkonaut! No need to copy paste it for us”

“It is my sad duty to inform you that we have reason to believe you have committed academic misconduct. As such, we have suspended your maintenance grant, and you will be removed from the university register.”

alkonaut

False positives must be zero in that context. Not when I choose which blog posts to spend 30 minutes on. Quite different.

BlueTemplar

Depending on the subfield it might not be true. It's also quite disheartening to find yourself in a social space where you realize that you are almost the only one human left (happened to me twice already).

p0w3n3d

Content written by a non-native English speaker will (usually) have some errors. Content generated by ChatGPT-4 will have no errors, but will give the feeling that the writer was compelled to puke out more and more words.

ggm

Liked the article. Have been accused here and elsewhere of being AI. The increase in slop is probably going to be mirrored by an increase in the accusations.

albert_e

> Content that is mostly-or-completely AI-generated that is passed off as being written by a human, regardless of quality.

I think something does not necessarily need to be "passed off as being written by a human", whether covertly, implicitly, or explicitly, to qualify as AI slop.

There are ample examples of content sites and news articles that shamelessly post AI-generated content without actively trying to claim it is human-generated. Some sites may even have a disclaimer that they might sometimes use AI tools.

Still, slop is slop, because we are subjected to it: we have to be wary of it, filter through it to separate out low-quality, low-effort content, and expend mental energy on all of this while feeling powerless.

JohnMakin

Respectfully, I disagree with some of the conclusions, but agree with the observations.

It seems obvious, to me, that the slop started ingesting itself, regurgitating and degrading in certain spaces. LinkedIn in particular has been very funny to watch; that part rang very true. However, the gold mine that companies hosting spaces like this are realizing they're sitting on isn't invasive user-data manipulation (which they'll do anyway) but high-quality tokens to feed back into the monster that devoured the entire internet. There's such a clear, obvious difference in the quality of training data scraped from the web depending on how bad a site's bot problem is.

So, all this to say: if you're writing well, don't give it out for free. I'm trying to create a space where people can gather, like the RSS feed mentioned, but where they own their own writing and can profit off it if they opt in to letting it be trained on. It sounds a lot easier than it is; the problem is a little weird.

The weirdest thing to me lately is that bad writing with lots of typos tends to get promoted more, because, I think, of the naive assumption that it's more likely to be a real "human": kind of like a reverse reverse Turing test. Utterly bizarre.

teraflop

> I’m trying to create a space where people can gather like the RSS feed mentioned, but where they own their own writing, and can profit off of it if they want to opt in to letting it be trained. It sounds a lot easier than it is, the problem is a little weird.

I mean, maybe I'm just defeatist, but it sounds near-impossible to me. The companies that train AI models have already shown that they don't give a damn about creator rights or preferences. They will happily train on your content regardless of whether you've opted in.

So the only way to build a "space" that prevents this is by making it a walled garden that keeps unauthorized crawlers out entirely. But how do you do that while still allowing humans in? The whole problem is that bots have gotten good enough at (coarsely) impersonating humans that it's extremely difficult to filter them out at scale. And as soon as even one crawler manages to scrape your site, the cat's out of the bag.

You can certainly tell people that they own their content on a given platform, but how can you hope to enforce that?

Retric

The counter is poisoning the well of training data not trying to hide it.

Crawling the web is cheap. Finding hidden land mines in oceans of data can be next to impossible because a person can tell if something isn’t being crawled but they can’t inspect even a tiny fraction of what’s being ingested.

JohnMakin

You’re getting at what I am saying a little, I think. You want to scrape my data, and I can prove that you did? The way the legislation is going in certain areas, I’m pretty sure there will be a crackdown. I am pretty sure a sufficiently large userbase could mess something up for a scraper; anecdotally, I think we’re seeing evidence of this type of warfare already. And yes, the challenge is not letting bots in. But you don’t even have to worry about that so much as this: if the data can be shown to be manipulated and twisted to an agent’s nefarious interests, whatever they may be, you’re going to get a flood of users that look and act seemingly real but aren’t.

It’s an interesting problem that I think is solvable; traction is one issue, and then building a product appealing enough for people to feel comfortable that they’re not being exploited.

Like, if you want this to work properly, you have to shut it off from every other part of the internet that can become bothersome with bot behavior: federated logins, social media, secure proxies, etc. Nothing touches it. Treat it like the blackwall in Cyberpunk (actually what inspired me). I would pay for this, like, a lot. But that is a difficult sell, because migrating off those apps requires legit lifestyle changes, and people (rightfully) want both.

I get worked up sometimes on this topic because, while I am dubious (though sometimes wrong) about AI capabilities, if I take some of what is said at face value, I strongly believe a day is coming, and may already be here, when you will have zero guarantee that whoever you are talking to isn't a bot, an AI, or even a video/voice agent based on a real person. That future is a destroyed internet. I think people should probably get around to thinking about what a disaster that would be.

pockmarked19

Why care whether something is AI slop or human slop? It’s not worth reading in either case.

The arguments presented here look suspiciously like the arguments scribe classes of old used against the masses learning to read and write.

Seems like we’ve gotten to the point where sloppy writers are worse than LLMs and assume that all “meticulous” writers are LLMs. The only convincing “tell” I have ever heard tell of: characters such as smart quotes, but even those can just be a result of writing in a non-standard or “fancy” editor first. I’ve even seen people say that em dashes are indicative, I guess those people neither care about good writing nor know that em dashes are as easy as option + shift + hyphen on a Mac.

vunderba

Because AI slop can be generated in massive quantities that dwarf anything prior in history. Sifting through these bales of hay is honestly exhausting.

nikau

> Why care whether something is AI slop or human slop? It’s not worth reading in either case.

The problem is human slop is typically easy to detect by grammar and other clues.

AI slop is often confidently incorrect.

xmprt

> human slop is typically easy to detect by grammar and other clues

I'm not sure this is true. There have been a lot of times where I see a very well made video or article about some interesting topic but when I go to the comments I end up finding some corrections or realizing that the entire premise of the content was poorly researched yet well put together.

budro

Most of the human-made slop you'll see is going to be, at least on its surface, high quality and well produced since that's what goes viral and gets shared. To that end, I agree with you.

It is worth noting, though, that the other 99% of human-made slop doesn't make it to you, since it never gets popular; hence hard-to-filter human-made slop can seem over-represented just through survivorship bias.

nikau

Hence the word "typically".

I am noticing it in work emails a lot now - poor performers cutting and pasting chatgpt content with no validation.

tokioyoyo

Human slop is still written by humans, implying there was effort and labour. Not sure how to put it in words properly, just "the vibes" are different.

brookst

Nit: em dashes also appear when you type two dashes in a row in Word or most any other MS product. That’s a terrible heuristic.

BrouteMinou

I am proof of that.

I write in Word to correct my text when I use my PC. Additionally, it's a better editor than the one supplied for commenting...

voidhorse

> Why care whether something is AI slop or human slop? It’s not worth reading in either case.

That's not always true, and this is one of the fundamental points of human communication that all the people pushing for AI as a comms tool miss.

The act of human communication is highly dependent on the social relationships between humans. My neighbor might be incapable of producing any writing that isn't slop, but it's still worth reading and interpreting because it might convey some important beliefs that alter my relationship with my neighbor.

The problem is, if my neighbor doesn't write anything other than a one-sentence prompt and doesn't critically examine the output before giving it to me, it violates one of the basic purposes of human-to-human communication; it is effectively disingenuous communication. It flies in the face of the key assumptions of rational conversation outlined by Habermas.

oneeyedpigeon

I'm pretty sure that anyone saying "emdashes are a tell" doesn't mean the literal character, but also the double-hyphen or even single hyphen people often use in its place.

persnickety

<Compose>--. on Linux.

aragilar

That's an en-dash, not a em-dash. – vs —

persnickety

Oops. Then it must be <Compose>---

pona-a

Hyper + Shift + -

Nullabillity

> I’ve even seen people say that em dashes are indicative, I guess those people neither care about good writing nor know that em dashes are as easy as option + shift + hyphen on a Mac.

They are virtually indistinguishable from regular dashes unless you're specifically looking for them, and contribute nothing of significant value to the text itself. They were only ever a marker of "this is either professionally edited or written by a pedant".

bsnnkv

> Undoubtedly, the sloppification of the internet will likely get worse over the next few years. And as such, the returns to curating quality sources of content will only increase. My advice? Use an RSS feed reader, read Twitter lists instead of feeds, and find spaces where real discussion still happens (e.g. LessWrong and Lobsters still both seem slop-free).

I had not heard of LessWrong before - thanks for the recommendation!

Whenever I see a potentially interesting link (based on the title and synopsis if one is available) I feed it into my comment aggregator[1] and have a quick scan through (mostly) human commentary before committing to reading the full content, especially if it is a longer piece.

The reasons behind this are two-fold; one, comments from forums tend to call out AI slop pretty quickly, and two, even if the content body itself is slop, the interesting hook (title or summary) is often enough to spark some actually meaningful discussion on the topic that is worth reading.

[1]: https://kulli.sh

IshKebab

Fair warning, the LessWrong people have a lot of very strange ideas. Even more than HN.

hatefulmoron

I used to read a lot of LessWrong. These days I would recommend people to avoid it. The content is thought-provoking, written by well-meaning intelligent people.

On the other hand, it's like watching people nervously count their fingers to make sure they're all still there. Or rather, it's not enough to count them, we have to find a way to make sure we can be confident with the number we get. Whatever benefit you get from turning off the news, it's 10x as beneficial to stop reading LessWrong.

bsnnkv

This and the child comment are a great example of why I always read the comments first :)

titanomachy

> given that LLM generations hue towards the preference

I think this is "hew". But the error gives me slightly higher confidence that a human wrote this.

dcreater

I think it's actually better if AI slop realizes the dead internet theory. That would be the only thing forcing us to evolve and rebuild the internet, embodying the early visions of what it was supposed to be. The internet is already filled with SEO trash; AI slop would actually be a marginal upgrade, but unfortunately it still insidiously poisons the reliability of information.

thfuran

I'm not convinced it would be enough to incite that kind of change.

rednafi

I use LLMs as a moderately competent editor, but AI can’t be a substitute for thought. Sure, it can sometimes generate ideas that feel novel, but I find the disinfectant-laced, sanitary style of writing quite repulsive.

That said, we give too much credit to human writing as well. Have we forgotten about the sludge humans create in the name of SEO?

zoogeny

I worked at a startup that has a blog. They used to pay a content farm company some monthly amount to generate a certain number of useless blog posts per week. Honestly, I don't think any value would be lost if they changed (and I would guess they have already) to using AI to do this task.

I see a good number of creative types lamenting this new world and hyping the word "slop" all the time. It should be obvious to anyone with a functioning brain that the world will separate into those who can use AI to be creative and those who can't. Anyone can buy a pencil, but an artist can use that pencil to create something much better than the average person can. Future artists will be able to use AI tools to create things much better than the average person, even if both have access to the same tools.

TheOtherHobbes

This sounds appealing, but there are two problems.

The first is that AI tools to date are incredibly limited and rigid. MJ (Midjourney) is fixated on some arty poses (e.g. portraits with the face tilted back, eyes closed), so a lot of output defaults to those. This wouldn't be so bad if MJ gave you fine control over poses, colour, and so on. But those elements are so entwined in latent space that if you try the same prompt with a different colour, you get a completely different result.

The second is that it may not matter. Human slop had taken over the Internet long before AI happened. (Content farm SEO writing, mediocre self-published genre fiction, mediocre genre art, low-effort formulaic video/movie content from the big studios, and so on.)

What's needed is inspired curation and gatekeeping. That's still happening in art to some extent, but it's a foreign concept to most of the creative industries.

So what you get is a conservative cultural process which selects unoriginal unchallenging work, especially if it's supported by effective marketing.

AI curation would be super useful, as an antidote - not just in the arts, but elsewhere.

You can imagine trained AI agents hunting through the slop and finding the gems/stand-out creators, which would add some interesting evolutionary dynamics.

zoogeny

Let's imagine your world, the one where an AI agent hunts through the slop and finds the stand out creators.

How would you feel if all of the stand-out creators it found were all AI? If your answer is "well, that wouldn't happen" then you may be committed to a view for ideological reasons.

Also consider that the feeds for Instagram, TikTok, YouTube etc. are more or less what you are asking for and they exist right now.

daveguy

So far I've only seen AI tools used to make things more average. I'm not saying you're wrong, but I'm not sure the tools are up to making artists more productive. This applies only to generative AIs. Better editing tools certainly help, and they help today, but I wouldn't call them intelligent on the level people are expecting agents to be.

_DeadFred_

Corporate blog posts are oftentimes more for SEO and/or social proof that a company isn't dead. There's normally minimal new information conveyed (especially in the OP's case, where it's not an internal domain expert but an outside agency, so very simple concepts/basic news), so how much does it matter if the content is average?

daveguy

Yeah, I'm not so concerned about corporate blog posts. I've never trusted that to be anything more than manipulative pulp.

65

No, future artists will make things that specifically cannot be made by AI. Because creativity and art are more a feeling and an opinion about your unique perception of life than some garbage being spit out by an AI.

zoogeny

Some future artists will make things that cannot be made by AI and some future artists will make things using AI. Unless you are arbitrarily deciding that the new definition of artist is "someone making art that explicitly doesn't use AI" which I suspect won't hold culturally.

As for the accusation that the only thing an AI can spit out is garbage, I think the classic cliché "garbage in, garbage out" applies. It is possible (and IMO likely) that the world will reward those who can get treasure out of AI and it will punish those who are only capable of getting garbage out of AI.

If you are the kind of person who believes that only garbage can come out of AIs then you will never be in the group that gets treasure out of an AI.

saagarjha

Personally I write for other humans rather than AI, but you do you I guess

voidhorse

The whole "write for AI" thing is a bunk concept. It is confusingly stated because it implies we should think of AI as an audience of sorts, but that's not the case. What it really asks us to do is optimize for the illegal and undemocratic co-option of our content by companies, when we should instead demand changes to material conditions to stop giving these companies carte blanche.

benatkin

Whatever floats Gwern’s boat I guess.

I don’t think they’re doing it out of personal preference but because, with what they’ve learned about LLMs, it makes sense. In particular, it seems to be less about rules than linguists thought.

scotty79

You used to, and you like to believe you still do, but reality changes and will change even more.