Don't Fall for AI: Reasons for Writers to Reject Slop
49 comments
· July 17, 2025 · roadside_picnic
JKCalhoun
I have a set of "chord dice" that you roll and then write a song around those chords.
I have a set of "story telling dice" that you toss and use the result as a writing prompt.
High entropy has existed in other forms (as you point out) before LLMs.
viccis
Aleatoric music is mostly just "neat". I've never encountered any of it that is interesting or moving, with the exception of situations in which it is very carefully integrated as one small element of a composition. That includes the likes of Xenakis, who I think writes music whose main form of enjoyment is reading how it was composed. After doing that, listening is an unnecessary and often off-putting step.
spijdar
I find this sort of thing discouraging, as a guy who dreamt of being a novel writer long before becoming a “computer person”.
The last few days, I let the intrusive thoughts win, and I played around with automating the process of building themes, characters, outlining, drafting, and revising a novel with the Gemini API, pausing between steps to manually edit each document. It’s crude, but with enough cycles of “read the last draft, write instructions for improving it, redo everything with those instructions” the end result is shockingly not terrible.
It’s not great. Good might even be too far. It’s derivative, and still feels like the embodiment of all the negative connotations of the term “genre fiction”.
Yet, I can’t escape the fact that it’s better reading than what I write. It is objectively less intellectually “interesting”, and it doesn’t have my “voice”, my artistic fingerprint. But it’s entertaining enough that I could see myself reading it at bedtime for fun, a sentiment I’ve never felt for my own writing.
And all that for a pittance of the effort it takes to write a long story. I’m still not sure how to feel about it. It’s sapping my willpower to continue writing “for real”, in the face of being able to “give life” to the characters and story ideas I’ve had languishing for a decade. I know that it’s not “real”, that the stories are superficial, and that the existence of these models is at best ethically questionable.
But for stories that, either way, I’ll probably never share with anyone else, it’s hard to feel that principled about it, in the face of a miserable comparison between my prose and an LLM’s prose. I’m sure if I wrote fiction for a living, I’d feel as passionate as the article’s author, but in my case, it’s just the melancholy of mediocrity. Ah well :-)
canpan
> Yet, I can’t escape the fact that it’s better reading than what I write
I worry we will have far fewer good writers and artists in the future. Everyone starts bad, without the skill. The hurdle is: why put in the effort to learn if AI is already somewhat good?
Someone mentioned here before, we learn to judge skill before we can learn the skill itself. The drive to jump the gap is what creates the genius.
Please don't give up, you can do it!
mattigames
I don't think that AI being somewhat good is the main stopper; it's that we know for certain it will improve at a giant pace. The very same prompt that triggers a mediocre book today will trigger a good book tomorrow (as an aside, I firmly believe that the word "trigger" is far more apt than "create" when talking about AI output). There are billions of dollars invested to make sure of that, and it's not like we haven't already seen unbelievable leaps in art generation.
blibble
> it's that we know for certain that it will improve at a giant pace
how exactly?
it's already trained on the entire corpus of human generated text and outputs garbage
there's not a second internet to plagiarise
the_af
> the very same prompt that triggers a mediocre book today will trigger a good book tomorrow
"Good" in which sense? That people read it and/or pay for it? But people already did, before LLMs: they read and paid for the most terrible, cliched, trite stuff. I mean, there are whole genres that are basically trash, before anyone even dreamed of AI (I'm pretty sure 90% of mainstream Hollywood script writers can be replaced by an LLM; they already feel like they were written by one anyway. This is not praise of LLMs, it's criticism of Hollywood!).
Surely, then, a "good" book is not merely something people will read or pay for. So why would AI become "good" at it, in which sense?
Reading/writing is a human activity. If you cut humans from a big part of the loop, how can the result ever be good?
This isn't the same context as writing code or building apps.
Avshalom
>Yet, I can’t escape the fact that it’s better reading than what I write.
I mean most obviously: that's because you didn't write it. It has a novelty to it that you don't experience when you write and re-write a story yourself.
More importantly though: as G.K. Chesterton supposedly said "Anything worth doing is worth doing poorly". The idea that you shouldn't write because you're not as good as you could be or you shouldn't plink around on keyboard because you can't play Bach is an idea that destroys any human endeavor and all human joy.
If they are stories that you will never share: why care about quality? It should be pure self-exploration.
twilight-code
AI can generate text, but it cannot truly 'create' - it remixes.
kace91
Humans aren’t competitive at chess against computers and haven’t been for a long time. Yet the game is as popular as ever, and people watch human players rather than AIs.
We like playing. We like human touch. That’s still there.
bluefirebrand
Just wait until you're actually watching AI generated video of humans that don't even exist playing chess matches that were never played in real life
Isn't the future going to be so great?
threetonesun
Why would I watch those? The obvious reaction to all of this is going to be people leaving the house and seeing things in real life.
Sure, some people won’t, but we’re already at the point where AI has ruined any sense of reality online.
bulatb
This feels like Marlo stealing candy and a guard who won't accept that "it's the other way."
"Please stop because you're hurting me" is not a plea that works on somebody who answers: "lol." Or just doesn't care.
I think this illustrates a pretty devastating metaproblem:
1. Moralizing doesn't work.
2. People choose to moralize in spite of how it never works because they value feeling righteous over solving problems.
3. Arguments for people to stop moralizing pretty much boil down to moralizing (about values that lead people to choose moralizing over things that work).
4. Moralizing never stops because the anti-moralizing arguments don't work on people who don't want to stop.
5. Because they're moralizing arguments and moralizing doesn't work.
lexandstuff
It used to be that if I saw a typo or grammatical error in someone's writing, I'd switch off, thinking the author didn't care enough about the text to proofread it. Now it's the complete opposite. Leaving in typos and such is a clear signal that the author cared enough about what they're writing not to outsource it to AI.
Related to that, I saw a local band posting marketing material online: amateurish typography, a collage of photos decorated with coloured markers. Two years ago I'd have laughed at what a terrible job it was; today, it's a breath of fresh human air amid all the slop we're subjected to all over the internet. It caught my attention, so much so that I'm going to see the band this weekend.
gerdesj
"It used to be that if I saw a typo or grammatical error in someone's writing,"
It's not quite that simple. Many moons ago I taught RSA IT skills levels 2 and three. Hmmm I used a plural for levels and a literal 2 and spelt out three.
You are probably not 50+ years old and have not had to run anti-spam email systems for several decades! When you are deciding whether something is created by something other than is claimed, you need way more "rules" than typos and the like.
Look at the language in use: A fair sign of AI is banality, verbosity and obsequiousness.
Please don't look upon lazy spelling and grammar as a sign of authenticity: "Its how real people work" - it isn't. That will be mercilessly abused by the baddies. Unfortunately we will all have to raise our game and be proactive in spotting baddies.
Also, please don't become too worried about all this stuff. The bubble will eventually burst.
You be you and look after yourself. Take care.
analog31
I have a rule that if something seems more literate than the person who wrote it, they probably didn't write it.
Also, the vast majority of stuff ever written isn't worth reading, so filtering your feed for stuff that's worth reading isn't new to the AI age.
ben_w
Zig when others zag. This too shall pass, and will be forgotten.
Flash websites no one watched. Carousels. Consultants saying "we need a viral". Every product needed a MySpace page, to be prefixed with an "i", or to have most vowels removed. Blue-and-orange film posters.
All those trends will be lost in time, like tears in rain.
boznz
Any spelling or grammar mistake is enough for me to re-publish one of my e-books (which usually takes about 20 minutes). The reason is that even a simple error may be enough to pull your reader out of their fantasy space and back into the real world, and as an avid reader myself I would prefer that did not happen.
dsign
I remember with fondness the typos in one of Terry Pratchett's books, left there not by him but by one in his army of editors :-) .
Aeolun
Hmm, I don’t think using the AI is all that different from using Grammarly. The big problem is that it won’t make terrible stories good. It can still make good stories terrible, though.
JKCalhoun
I've never been convinced by point 2 (AI Outputs Are Stolen From People Like You).
Every artist has stolen. I mean, that's probably putting too fine a point on things, but you'd have to show me a painting someone has created where they never saw another artist's work before. Or a book written by someone who never read a book before.
I drew all the time as a kid — making a point at age 12 to learn to draw the human figure. I started with the standard proportions that every decent book on drawing the human figure puts forth. I started with shapes representing the hips, the rib cage, the skull — you sketch lines determined by muscles over those hard structures. You draw the clavicle, the divot defining the kneecaps, suggest the inverted triangle over the figure's backside, shoulder blades protruding....
And in time I started looking at how Mort Drucker drew mouths. How another MAD artist did pockets on short-sleeve shirts. How Angelo Torres drew ears....
In time you become an amalgam of your favorite bits and pieces of your favorite artists.
(And then you find out that R. Crumb was lifting styles from Warner Brothers, etc. when he was ramping up his craft. But of course he did.)
nkrisc
Conflating this to humans learning is missing the point. Everyone is very aware of what you’re talking about, and no one cares, that’s not the problem. Humans learning from art and copying is not the problem. They care when it’s AI done by corporations at an industrial scale.
The fact it’s AI is the issue.
protocolture
It's weird to me that I see so many of these posts that don't seem to have any reference to LLMs that have been purpose-built for narrative. I guess if you had the correct information, you wouldn't make proudly incorrect blog posts.
abtinf
It doesn’t mention the number one most important reason: the output is absolute garbage.
I can spot AI writing very quickly now, after just a few sentences or paragraphs. It became a lot easier to spot after I tried to use it in my own writing.
Calling it “slop” is far too generous.
If you know what you want to say, you might think to yourself “I’ll have this write an outline or a first draft that I will then thoroughly edit.”
And every time, what you’ll find is that the LLM output is fundamentally unusable. Points are subtly missing. Points are subtly repeated. Points are miscategorized. Points don’t make sense at all. Points don’t flow in a logical order.
If you try to use an LLM and you don’t know what you want to say, then it’s hopeless. You absolutely will not see the defects. If anyone who knows the subjects reads it, they will instantly know you are a lying piece of shit.
JKCalhoun
> I can spot AI writing very quickly now, after just a few sentences or paragraphs.
Not denying this is true — but like a lot of what we've seen with AI, let's see how you feel in two years' time when the models have improved as much again.
I think it was actually Brian Eno that said it (essentially): whatever you laugh about with regard to LLMs today, watch out, because next year that funny thing they did will no longer be present.
capnrefsmmat
I don't think the AI companies are systematically working to make their models sound more human. They're working to make them better at specific tasks, but the writing styles are, if anything, even more strange as they advance.
Comparing base and instruction-tuned models, the base models are vaguely human in style, while instruction-tuned models systematically prefer certain types of grammar and style features. (For example, GPT-4o loves participial clauses and nominalizations.) https://arxiv.org/abs/2410.16107
When I've looked at more recent models like o3, there are other style shifts. The newer OpenAI models increasingly use bold, bulleted lists, and headings -- much more than, say, GPT-3.5 did.
So you get what you optimize for. OpenAI wants short, punchy, bulleted answers that sound authoritative, and that's what they get. But that's not how humans write, and so it'll remain easy to spot AI writing.
bluefirebrand
> like a lot of what we've seen with AI, lets see how you feel in two years time when the models have improved as much
People have been saying this for years now though
sandspar
I like that Brian Eno quote. If I recall correctly, he was also referring to nostalgia. Like, once the technology improves, you begin to miss the old rough edges. I know that I love seeing old images of Google DeepDream, for example. It's the same reason young people miss blocky PlayStation 2 graphics, or why photographers sometimes edit their images for unreal Kodachrome color. The things that annoy us today are the very things we'll miss the most.
JulieHenne
you can check this post: https://news.ycombinator.com/item?id=44598695
_def
I think it is possible to create art with so called AI. Just not in the way people usually tend to think or expect. Imitating what already exists gives you slop. But used as tools, you can try to create pieces of art that are genuinely valuable. It's just not making it easier. It's more difficult if anything. But not impossible. And some people trailblaze with it.
cobbal
Probably possible. But if I see something that is AI, I won't bother to engage with it. I'm happier to engage with a human-written piece of bad writing, because it's a meeting of two human minds. There's some innate value to that. Me trying to understand an AI's mind? Not a worthwhile endeavor.
It gets more complicated by the fact that many people don't mark what's AI and what's not, and harder to be certain every day. Many people putting out the slop will have different priorities, and don't care if they're wasting my time.
Meandering back to your point, I would be happy to look at AI-co-created art as long as I knew in advance that it significantly expressed the mind of the human who created it.
Since I can't get people to mark what is AI, I've instead considered signing off all my writing with:
(The above was written by a human without assistance from AI)
JKCalhoun
I'm still haunted by the "AI slop" called "Jodorowsky's Tron" [1] that went viral a bit ago. If an art director had taken those as concept sketches and then created costumes based on them, it would have made for a mind-blowing film.
(Come to think of it, sounds like a good cosplay opportunity. Go as "Jodorowsky's Tron AI Slop".)
[1] https://static.wixstatic.com/media/9414a3_977e028d2ca6472294...
n42
why do I get so annoyed every time I read the word "slop" used like this? I have the same reaction with "enshittification". am I just getting grumpy and old?
it triggers the same eye roll as the schoolyard bully nicknames so popular in politics right now. bite sized, zero effort, fashionable take downs that suffocate any attempt at genuine discourse.
but I am probably just grumpy and old.
lexandstuff
Personally, I think it's a perfect word for what it is: carelessly created content that no one wants.
charcircuit
Except you also see people complain about how many likes or views they get on social media. There is signal that people like a subset of "slop".
JKCalhoun
Worse (?) a lot of what actual humans write is slop.
maxbond
I think these words are useful because they convey a feeling of disenchantment people are experiencing with technology. "You say this is progress, but the experience keeps getting shittier. You say this model's output is the next big thing, but my plate is filled with indistinguishable slop."
I would point out that what they're criticizing is also lazy and driven by trends: the reflexive acceptance that whatever is new is inevitable and must be embraced. To me "slop" especially feels like splashing someone with a bucket of water to try and wake them from a stupor.
JKCalhoun
I feel as you do but I also recognize that I am a bit defensive with regard to LLMs.
And maybe I'm a little too optimistic? Because I see a world in a few years when AI is producing content good enough that those still calling it "slop" will come across as sounding a little shrill.
anton-c
If it's AI art/vid with an AI voice reading an AI script(as has become common on youtube) it will always be slop, regardless of how high quality the output is.
I want to know a person's ideas, not a computer's regurgitation of others'. It's low effort and usually lacks a point.
Now, it doesn't have zero usefulness in writing and the arts. Probably tons, tbh. For instance, someone using an AI voice because they aren't an English speaker and want to reach that audience, or using it to clean up grainy film, is different (in my opinion) than genning the writing or art.
Things made without enough human in the loop — I've found — lack purpose and identity. I don't see AI changing that. If it wasn't a good idea from the start, AI isn't gonna fix that. No amount of awesome CGI or A-list actors saves a terrible script.
The only one I see pushing stuff like AI music is Spotify, so it doesn't have to pay royalties, but everyone I speak to hates it: the listeners, the artists, and the record labels those models stole from. Probably instrument and audio software makers too. When people figure out a pic is AI, they voice frustration and embarrassment.
There's more in the word 'slop' than just bad content. Comments and posts on here or Reddit often get slaughtered solely because they were written by AI and the user wasn't skilled enough to hide it. Some people just don't like reading something written by a machine trying to sound like a person.
I don't doubt we will advance to the stage where it becomes on the same level quality-wise, but I doubt most people will want AI content while human-made stuff is available. It will still be considered low-effort slop by many, I believe.
Labov
"Slop" is a bit snappier than "artificial cultural homogenization." Now that'd get some eye rolls.
add-sub-mul-div
New phenomena need new words.
throwawayoldie
It's easy to be grumpy in a world full of enshittified slop.
What's unfortunate is that there aren't enough people pushing LLMs to assist in writing in creative ways that really involve messing with the model.
I'm a huge William S. Burroughs fan, and, for those unfamiliar, he and a few others invented an algorithmic technique, the "cut-up technique" [0], to basically remix their writing. It's a major part of the reason that much of Burroughs' work has a magically confusing aspect to it.
"Prompt and pasting" from LLMs is dull, but awhile back I was experimenting with token-explorer [1] to see what would happen if I started with a prompt and explored the "high-entropy" states of the LLM. By controlling the sample path to stay in a high-entropy state you start getting very different types of responses that feel like nothing that normally comes from an LLM. You could argue it's a form of "statistical automatic writing" [2]
There is tremendous potential for genuinely interesting writing to be created with an LLM but it's going to require popping open the box and playing around. In the Stable Diffusion world there's lots of people trying all sorts of odd experiments to create things and, while not the mainstay of generative AI images, they are able to create really interesting things.
I would love to see more people ripping open local LLMs and seeing just what the real possibilities are.
0. https://en.wikipedia.org/wiki/Cut-up_technique
1. https://github.com/willkurt/token-explorer
2. https://en.wikipedia.org/wiki/Automatic_writing