A.I. Is Homogenizing Our Thoughts
54 comments · June 26, 2025
tines
eikenberry
Curious where you get the idea that regional accents are gone. If you travel around much in the US you'll hear many different regional accents. I have relatives from the west coast, mid-west, south and east coast (we're spread around) and each region has an easily recognizable accent. Some more pronounced than others, but still very much alive.
hatthew
In my experience I don't notice any difference in accent between the east coast and west coast. The only regional accent I notice in many native English speakers is southern. All other accents seem to be cultural (AAVE, ESL) or dying (older generations have them, younger ones don't).
SerpentJoe
Social media has already homogenized our thoughts so much. So many facts and perspectives are presented that it's impossible to construct our own opinions on it all without taking inspiration from others, and the upvote button provides a convenient consensus.
darkhorse222
Geography still dictates some things in the diversity of the experiences it imparts, though admittedly much of our technology exists to insulate us from that stuff.
abnercoimbre
What can we do? Would serious efforts to create offline clubs [0] serve as an antidote? This made the rounds on HN [1] recently.
Technology for touching grass.
tines
Yep, connecting with actual human beings is what life's all about.
stego-tech
The problem in the case of AI is who is curating that homogeneity, and to what end. Dynamic systems like IRC and messengers let folks connect and gravitate more “naturally”, while AI - being a walled garden curated by for-profit entities funded by billionaire Capitalists - naturally has a vested interest in forcing a sort of homogeneity that benefits its bottom line and minimizes risk to its business model.
That’s the real threat: reality authoring.
nullc
Not sure about that. Billionaire Capitalists live in this world too. They might cause harm, sure, but that harm generally takes a predictable form and is of finite magnitude.
AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.
Imagine whatever US president you think is least competent talking to ChatGPT. If their conversation ventures into discussion of a Big Red Switch That Ends The World, it's going to eventually advise on all the reasons the button should be pushed, because that's exactly what would happen in the mountains of narrative material the LLM has been trained on.
Hopefully there is no end-the-world button, and even the worst US president isn't going to push it because ChatGPT said it was a good idea. ... But you get the idea, and there absolutely are people leaving their families and doing all manner of crazy stuff because they accidentally prompted the AI into writing fiction starring them, and the AI is advising them to live the life of a fictional character.
I think AI doomers have it all wrong, AI risk to the extent it exists isn't from any kind of super-intelligence, it's significantly from super-insanity. The AI doesn't need any kind of super-human persuasion, turns out vastly _sub_-human persuasion is more than enough for many.
Wealthy people abusing a new communications channel to influence the public isn't a new risk, it's a risk as old as time. It's not irrelevant, by any means, but we do have a long history of dealing with it.
tines
> I think AI doomers have it all wrong, AI risk to the extent it exists isn't from any kind of super-intelligence, it's significantly from super-insanity. The AI doesn't need any kind of super-human persuasion, turns out vastly _sub_-human persuasion is more than enough for many.
Totally agree. We have a level of technology today that is enough to ruin the world. We don’t need to look any further for the threat to our souls.
Dracophoenix
> AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.
One could say the same of the printing press.
k310
I never proposed to either a chatbot or a billionaire [0]
Who says that the president isn't already a chatbot, himself? [1] Think about this article.
Enjoy.
[0] https://people.com/man-proposed-to-his-ai-chatbot-girlfriend...
[1] https://www.techdirt.com/2025/04/29/the-hallucinating-chatgp...
aaron695
[dead]
l33tbro
I sometimes wonder if the true "digital divide" comes down to those who were able to develop critical thinking skills prior to these last few years.
If you had previously developed these skills through wide reading and patient consideration, then tools like LLMs are like somebody handing you the keys to a wave-runner in a choppy sea of information.
But for those now having to learn to think critically, I cannot see how they would endure the struggle of contemplation without habitually reaching for an LLM. The act of holding ambiguity within yourself, which is often where information is encoded into knowledge, is instantly relieved here.
While I feel lucky to have acquired critical thinking skills prior to 2023, tools like LLMs being unconditionally handed to young people for learning fills me with a kind of dread.
AppleBananaPie
I agree with you because young me would have learned nothing for the sake of short term fun
delfinom
Idiocracy is coming.
tqi
> ... more than fifty students from universities around Boston were split into three groups... According to Nataliya Kosmyna, a research scientist at M.I.T. Media Lab and one of the co-authors of a new working paper documenting the experiment, the results from the analysis showed a dramatic discrepancy: subjects who used ChatGPT demonstrated less brain activity than either of the other groups. The analysis of the L.L.M. users showed fewer widespread connections between different parts of their brains; less alpha connectivity, which is associated with creativity; and less theta connectivity, which is associated with working memory.
Are we still treating low-n EEG studies as anything more than a clout-seeking exercise in confirmation bias?
timr
It’s only homogenizing your thoughts if you don’t think for yourself.
(I realize this might be a weak point for many people.)
montagg
The “do your own research” types end up with some of the biggest groupthink I’ve ever seen though.
chrisco255
Sure, they were probably picking from the top 10 results on Google. Now with AI, we've got one very effective "I'm feeling lucky" button.
nullc
It's worse because they're often more confident in the AI output than they ever were in the Google results, and the results are often enough so bad that no human would have made the same error. When they do doubt, they can ask, and the AI will often defend its dumb position -- especially when they explicitly ask it to counter the rebuttal they received.
Skepticism also seems to be reduced because we're armored against people telling us lies in their own self interest and against ours, while AI will make stuff up that benefits no one. (And even where it could benefit someone, people assume the AI isn't trying to benefit itself).
timr
The “listen to the experts” types are the same thing, but the opposite pole.
Neither qualifies as thinking for yourself.
Swenrekcah
Sure, but one of those is outsourcing their judgement to a panel of actual experts and the other to a panel of internet personalities.
knowaveragejoe
It was never "always listen to the experts", but that's the strawman given by the contrarians who have decided we need to throw out all expertise.
beanshadow
Differing individuals with similarly shaped axiomatic structures will discover similar theorems. Some people who are members of ideologies believe they are thinking just for themselves.
It's advantageous for members of a community to think alike. On the other hand, some people like to search in todash meme space for a useful idea or strategy in the rough. The problem is that this treasure-hunter strategy is only available to those with the resources to try lots of untested and potentially quite harmful ideas.
nullc
> It’s only homogenizing your thoughts if you don’t think for yourself.
Uh oh...
More seriously, if you have non-techie (or less techie) friends or family using ChatGPT please ask to see their conversations.
You're likely to be shocked by at least a few of them... many people really don't understand what these tools are and are using them in crazy and damaging ways.
For example, one friend's brother-in-law has ChatGPT telling him about various penny stocks and obscure cryptocurrencies and promising him 10000x returns, which he absolutely believes and is making investments based on.
Other people are allowing ChatGPT to convince them that God has chosen to speak to them through ChatGPT and is commanding them to do all sorts of nonsense.
The commercial LLMs work well enough that people who don't know how they work are frequently bamboozled.
Consider how much your own skepticism of the output comes from cases where it was confidently but objectively wrong and what happens to someone who never uses it on something where objective correctness can be easily judged.
chrisweekly
Here's a great example of an intelligent person learning this lesson (and, thankfully, sharing it in a very public and effective way):
nullc
Great link. It might be something I could share to wake some other people up.
I dunno how tool use is set up in the chat interface as I've only used the API, but I doubt there was ever a request to any of the URLs, and the author could have just as easily added https://amandaguinzburg.substack.com/p/that-time-i-won-the-l... or any other made-up URL and it would have waxed poetic about that one too.
kfarr
See... Television? Social Media? Printing Press? https://slate.com/technology/2010/02/a-history-of-media-tech...
dorkrawk
Mass media homogenizes our input (which influences our output). If we want to think about how AI might be different we should consider how it might directly homogenize our output.
lo_zamoyski
There is a secondary way in which homogenization occurs.
Mass media are not only able to deliver the same message to everyone, or the same presuppositions to everyone (a more dangerous thing, as the desired conclusions are then drawn by people themselves; see Bernays's "music room" tactic for getting people to buy pianos), but once the same content has been delivered to everyone, people will talk about it at some point. This creates the impression of consensus which causes people to assign greater confidence to the content that the mass media have delivered.
So it's circular. You put an idea in people's head, they all end up talking about the idea, and this causes people to feel confident about it being true, because everyone is talking about it. And even if you don't consume mass media, you still face a society of people who do. You don't escape the effects of mass media simply because you personally don't consume it.
smcleod
All these hyped up doom titles are homogenising our thoughts. There's some truth to this but it's much more nuanced than presented here.
dave333
We will just have to stand on the shoulders of homogenous giants.
drellybochelly
The process they described is not much different from gluing together ideas from studies in university (when I was studying for an arts degree).
I think it's on the person to realize whether A.I. is becoming a crutch.
tolerance
This headline is infuriating and the content is just a report on that one study that's been making the rounds on Hacker News all week.
AI chat bots enable passive consumption. Passive consumption homogenizes thought. It's not the only technology to do this.
I suspect that The New Yorker, and similar outlets, will stop caring when it becomes financially and socially advantageous to do so.
A culture that is ambivalent or disinterested in providing practical solutions to this problem is the greater issue.
nullc
https://web.archive.org/web/20121008025245/http://squid314.l...
But he got it wrong -- for most people it doesn't need to be better than what they'd do themselves; it doesn't even need to be particularly good.
Plenty of people would prefer to put out AI copy even when they suspect it's worse than what they'd write themselves because they take less personal injury when it turns out to be flawed.
senko
So is social media, TV, Hollywood, and pop culture in general.
seydor
The central limit theorem is unrelenting.
All connection technology is a force for homogeneity. Television was the death of the regional accent, for example.
Through unlimited amusement, entertainment, and connection we are creating a sad, boring, lonely world.