
LLMs can get "brain rot"

113 comments · October 21, 2025

andai

I encourage everyone with even a slight interest in the subject to download a random sample of Common Crawl (the chunks are ~100MB) and see for yourself what is being used for training data.

https://data.commoncrawl.org/crawl-data/CC-MAIN-2025-38/segm...

I spotted a large number of things in there that it would be unwise to repeat here. But I assume the data cleaning process removes such content before pretraining? ;)

Although I have to wonder. I played with some of the base/text Llama models, and got very disturbing output from them. So there's not that much cleaning going on.
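
For anyone who wants to look for themselves, a minimal sketch of pulling one chunk (the segment filename below is a placeholder; real paths are listed in the crawl's warc.paths.gz index, and this assumes the warcio package):

    # Stream one Common Crawl WARC chunk and skim what's inside.
    # pip install requests warcio
    import requests
    from warcio.archiveiterator import ArchiveIterator

    # Placeholder path: substitute a real segment from warc.paths.gz
    url = ("https://data.commoncrawl.org/crawl-data/CC-MAIN-2025-38/"
           "segments/<segment>/warc/<file>.warc.gz")

    with requests.get(url, stream=True) as resp:
        for record in ArchiveIterator(resp.raw):   # handles the gzip itself
            if record.rec_type == "response":
                print(record.rec_headers.get_header("WARC-Target-URI"))
                print(record.content_stream().read()[:300])  # first bytes of page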


throwaway314155

> But I assume the data cleaning process removes such content before pretraining? ;)

I didn't check what you're referring to, but yes, the major providers likely have state-of-the-art classifiers for censoring and filtering such content.

And when that doesn't work, they can RLHF the behavior from occurring.

You're trying to make some claim about garbage in/garbage out, but if there's even a tiny moat, it's in the filtering of these datasets and in the purchasing of licenses for other, larger sources of data that (unlike Common Crawl) _aren't_ freely available for competitors and open-source efforts to use.
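
As a toy illustration of what the cheapest layer of that filtering might look like (invented thresholds, nobody's actual pipeline; the learned classifiers and RLHF sit on top of crude gates like this):

    # Crude pre-filter of the kind pipelines run before any learned classifier.
    def passes_quality_gate(doc: str, blocklist=("lorem ipsum",)) -> bool:
        words = doc.split()
        if len(words) < 50:                        # too short to be useful prose
            return False
        if any(term in doc.lower() for term in blocklist):
            return False                           # obvious junk / unwanted content
        unique_ratio = len(set(words)) / len(words)
        return unique_ratio > 0.3                  # drops spammy, repetitive pages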

avazhi

“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

standardly

That is indeed an LLM-written sentence — not only does it employ an em dash, but also lists objects in a series — twice within the same sentence — typical LLM behavior that renders its output conspicuous, obvious, and readily apparent to HN readers.

turtletontine

I think this article has already made the rounds here, but I still think about it. I love using em dashes! It really makes me sad that I need to avoid them now to sound human

https://bassi.li/articles/i-miss-using-em-dashes

janderson215

The em dash conundrum is likely temporary. If I were you, I'd continue using them however you previously did, and someday soon your em dashes will be ignored the same way everybody else's are, once AI mimics innumerable punctuation and grammatical patterns.

AlecSchueler

Don't forget the "it's not just X, it's Y" formulation and the rule of 3.

hunter-gatherer

Lol. This is brilliant. I'm not sure if anyone else has this happen to them, but I noticed in college my writing style and "voice" would shift quite noticeably depending on whatever I was reading heavily. I wonder if I'll start writing more like an LLM naturally as I unavoidably read more LLM-generated content.

itsnowandnever

why do they always say "not only" or "it isn't just x but also y and z"? I hated that disingenuous verbosity BEFORE these LLMs came out, and now it's all over the place. I saw a post on LinkedIn that was literally just 10+ statements of "X isn't just Y, it's etc..." and thought I was having a stroke

Starlevel004

GPT loves lists and that's a variant of a list

veber-alex

hehe, I see what you did there.

Jackson__

LLM slop is not just bad—it's degrading our natural language.

kcatskcolbdi

thanks, I hate it.

askafriend

If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.

grey-area

It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum. It doesn’t even know what the intended information is, and judging from the above neither did the human involved.

It doesn't help writing; it stultifies it and gives everything the same boring, cheery, yet slightly confused tone of voice.

zer00eyz

> It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum.

Are you describing LLMs or social media users?

Don't conflate how the content was created with its quality. The "You must be at least this smart (tall) to publish (ride)" sign got torn down years ago. Speakers' Corner is now an (inter)national stage, and it's written, so it must be true...

sailingparrot

> If it conveys the intended information then what's wrong with that?

Well, the issue is precisely that it doesn’t convey any information.

What is conveyed by that sentence, exactly? What does reframing data curation as cognitive hygiene for AI entail, and what information is in there?

There are precisely zero bits of information in that paragraph. We all know training on bad data leads to a bad model; thinking about it as "cognitive hygiene for AI" does not yield any insight.

LLMs aren't going to discover interesting new information for you; they are just going to write empty, plausible-sounding words. Maybe it will be different in a few years.

At least, if you do it yourself, you are forced to confront the fact that you have no new information to share, and do not waste your and your audience's time by publishing a paper like this.

uludag

Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.

The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.

drusepth

Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.

stavros

The problem is that writing isn't only judged on whether it conveys the intended information or not. It's also judged on whether it does that well, plus other aesthetic criteria. There is such a thing as "good writing", distinct from "it mentioned all the things it needed to mention".

moritzwarhier

What information is conveyed by this sentence?

Seems like none to me.

AlecSchueler

Style is important in writing. It always has been.

avazhi

If you can't understand the irony inherent in getting an LLM to write about LLM brainrot, itself an analog for human brainrot that arises from habitual non-use of the human brain, then I'm not sure what to tell you.

Whether it’s a tsunami and whether most people will do it has no relevance to my expectation that researchers of LLMs and brainrot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brainrot.

nemonemo

What you are obsessing over is the writer's style, not its substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their content is of higher quality, like the game of Go? Wouldn't you rather study their writing?


binary132

The brainrot apologists have arrived

askafriend

Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.

pixelmelt

Isn't this just garbage in garbage out with an attention grabbing title?

icyfox

Yes - garbage in / garbage out still holds true for most things when it comes to LLM training.

The two bits about this paper that I think are worth calling out specifically:

- A reasonable amount of post-training can't save you when your pretraining comes from a bad pipeline; i.e., even if the syntax of the pretraining data is legitimate, the model has learned some bad implicit behavior (thought skipping)

- Trying to classify "bad data" is itself a nontrivial problem. Here the engagement-based heuristic (sketched below) actually proved more reliable than an LLM's classification of the content
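
As a rough sketch of that engagement heuristic (the thresholds here are invented for illustration; the paper's point is just that short, highly engaged posts make a usable junk signal):

    # Toy version of an engagement-based junk filter for social media posts.
    def looks_like_junk(text: str, likes: int, reposts: int) -> bool:
        engagement = likes + reposts
        return len(text.split()) < 30 and engagement > 500  # invented cutoffs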

satellite2

Yes, but the other interesting bit, which is not clearly addressed, is that increasing the garbage in to 100% does not result in absolute garbage out. So evidently there is still something to learn there.

philipallstar

Attention is all you need.

dormento

In case anyone missed the reference: https://arxiv.org/abs/1706.03762

> (...) We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
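
For anyone curious, the mechanism the title refers to fits in a few lines; a numpy sketch of single-head scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, from that paper:

    import numpy as np

    def attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)              # query-key similarity
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)                # row-wise softmax
        return w @ V                                 # weighted sum of values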

echelon

In today's hyper-saturated world, attention is everything:

- consumer marketing

- politics

- venture fundraising

When any system has a few power law winners, it makes sense to grab attention.

Look at Trump and Musk and now Altman. They figured it out.

MrBeast...

Attention, even if negative, wedges you into the system and everyone's awareness. Your mousey quiet competitors aren't even seen or acknowledged. The attention grabbers suck all the oxygen out of the room and win.

If you go back and look at any victory, was it really better solutions, or was it the fact that better solutions led to more attention?

"Look here" -> build consensus and ignore naysayers -> keep building -> feedback loop -> win

It might not just be a societal algorithm. It might be one of the universe's fundamental greedy optimization algorithms. It might underpin lots of systems, including how we ourselves as individuals think and learn.

Our pain receptors. Our own intellectual interests and hobbies. Children learning on the playground. Ant colonies. Bee swarms. The world is full of signals, and there are mechanisms which focus us on the right stimuli.

ghurtado

Something flew approximately 10 miles above your head that would be a good idea for you to learn.

peterlk

You’re absolutely right!

alganet

You're not accounting for substrate saturation.

If you could just spam and annoy until you win, we'd all be dancing to remixed versions of the Macarena.

lawlessone

Is this copypasted from LinkedIn?

ashleyn

Yes, but the idea of chatgpt slowly devolving into Skibidi Toilet and "6 7" references conjures a rather amusing image.

1121redblackgo

6-7 ٩(●•)_

stavros

Can someone explain this? I watched a South Park episode that was all about this, but I'm not in the US so I have no idea what the reference is.

wat10000

Considering that the current state of the art for LLM training is to feed it massive amounts of garbage (with some good stuff alongside), it seems important to point this out even if it might seem obvious.

CaptainOfCoit

I don't think anyone is throwing raw datasets into LLMs and hoping for high-quality weights anymore. Nowadays most of the datasets are filtered one way or another, and some of them are even highly curated.

BoredPositron

I doubt they are highly curated; you would need experts in every field to do so. Which gives me more performance anxiety about LLMs, because one of the most curated fields should be code...

otterley

And with extra steps!

Insanity

Garbage in -> Magic -> Hallucinated Garbage out

Barrin92

Yes, I am concerned about the Computer Science profession

>"“Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI"

A metaphor is exactly what it is, because not only do LLMs not possess human cognition, there's certainly no established science under which they're valid subjects for clinical psychological assessment.

How does this stuff get published? This is basically a blog post. One of the worst aspects of the whole AI craze is that it has turned a non-trivial amount of academia into a complete cargo-cult joke.

bpt3

It is a blog post; it was published as a GitHub page and on arXiv.

I think it's intended as a catchy warning that there are repercussions for people who are dumping every piece of the internet (and synthetic data based on it!) into training.

pluc

I think it's an interesting line of thought. So we all adopt LLMs and use them everywhere we can. What happens to the next generation of humans, born with AI and with diminished cognitive capacity to even wonder about anything? What about the generation after that? What happens to the next generation of AI models that can't train on original human-created datasets free of AI?

gowld

arXiv is intended to host research papers, not a blog for researchers.

Letting researchers pollute it with blog-gunk is an abuse of the referral/vetting system for submitters.

commandlinefan

My son just sent me an instagram reel that explained how cats work internally, but it was a joke, showing the "purr center" and "knocking things off tables" organ. It was presented completely seriously in a way that any human would realize was just supposed to be funny. My first thought was that some LLM is training on this video right now.


gaogao

Brain rot text seems reasonably harmful, but brain rot videos are often surreal and semantically dense in a way that probably improves performance (as discussed in this analysis of German brain rot: https://www.youtube.com/watch?v=-mJENuEN_rs&t=37s). For example, Švankmajer is basically proto-brainrot, but is also the sort of thing you'd watch in a museum and think about.

Basically, I think the brain rot aspect might be a bit of a terminology distraction here, when it seems what they're measuring is whether content is a puff piece or dense.

f_devd

I do not think this is the case; there has been some research into brainrot videos for children[0], and it doesn't seem to trend positively. I would argue anything 'constructed' enough will not register as far along the brainrot spectrum.

[0]: https://www.forbes.com/sites/traversmark/2024/05/17/why-kids...

gaogao

Yeah, I don't think surrealism or constructed content is good in the early data mix, but as part of mid- or post-training it seems generally reasonable. But also, this is one of those cases where anthropomorphizing the model probably doesn't work, since a major negative effect of Cocomelon is kids only wanting to watch Cocomelon, while for large model training, the model doesn't have much choice in the training data distribution.

moritzwarhier

For this reason, I believe that the current surge we see in the use of AI for manipulating people (art is also a form of manipulation, even if unintended) is much more important than its hyped usage as a technical information processor.

Brainrot created by LLMs is important to worry about, given their design as "people pleasers".

Their anthropomorphization can be scary too, no doubt.

thelastgallon

If most of the content produced by younger generations is about skibidi toilet[1] and 67[2], isn't that what LLMs are going to be trained on?

[1] https://en.wikipedia.org/wiki/Skibidi_Toilet

[2] https://en.wikipedia.org/wiki/6-7_(meme)

micromacrofoot

only if the trends last long enough (which they rarely do!), skibidi is already old news according to some kids I know

ciaranmca

Agreed, "popularity as a better indicator". Hypothetically you could look at popularity over time to filter out viral rot content and work out whether people feel the content is useful.

rriley

This paper makes me wonder about the long-lasting effects of current media consumption patterns on alpha-gen kids.

AznHisoka

why just kids?

rriley

I am mostly concerned with the irreversibility part. More developed brains probably would not be affected as much.

jama211

Have you opened Facebook recently? Seems the older folk are plenty affected to me.

vanderZwan

I recently saw an article about the history of Sesame Street claiming that in the late 1960s American preschool kids watched around twenty-seven hours of television per week on average[0]. And most of that was not age-appropriate (educational TV had yet to be invented). So maybe we should check in on the boomers too, if we're sincere about these worries.

[0] https://books.google.se/books?id=KOUCAAAAMBAJ&pg=PA48&vq=ses...

conception

This is a potential moat for the big early players, in a pre-atomic steel sort of way: any future players won't have a non-AI-slop/dead internet to train new models on.

earth2mars

duh! isn't that obvious? is this just some students who wanted a project with pretty graphs for the writing experience?! I am not trying to be cynical or anything, just questioning the obvious thing here.

killshotroxs

If only I got money every time my LLM kept looping answers and telling me stuff I didn't even need. Just recently, I was stuck with LLM answers, all while it wouldn't even detect simple syntax errors...

buyucu

I don't understand why people have a hard time understanding 'garbage in, garbage out'. If you train your model on junk, then you will have a junk model.

nakamoto_damacy

Our metaphorical / analogical muscle is too well developed. Maybe there is a drug we can take to reduce how much we lean into it.

If you look at two random patterns of characters and both contain 6s, you could say they are similar (ignoring that the similarity is less than 0.01%). That's what comparing LLMs to brains feels like. Like roller skates to a cruise ship: they both let you get around.