We're Losing Our Voice to LLMs
237 comments
November 27, 2025
ricardo81
chemotaxis
> It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.
vanviegen
On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
MichaelZuo
Yeah, this is a critical difference. Most of the issues are sidestepped because everyone knows nobody can force a custom frontpage tailored to a specific reader.
So there’s no reason to try a lot of the tricks and schemes that scoundrels might have elsewhere, even if those same scoundrels also have HN accounts.
actualwitch
Only when certain people don't decide to band together and hide posts from everyone's feed by abusing the "flag" function. Coincidentally, those posts often fit neatly into the categories you outlined.
anbotero
I want to agree with this. Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.
Since they are relatively open, at some point someone comes in who doesn't care about anything, or is extremely vocal about something, and... there goes the nice forum.
mnky9800n
MySpace was quite literally my space. You could basically make a custom website with a framework that included socialisation. But mostly it was just GeoCities for those who might only want to learn HTML. So it was a creative canvas with a palette.
cj
Right, but that’s slightly different.
I think the nuance here is that with algorithmic based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use that against you in the name of engagement.
Compare that to a typical flame war on HN (before the mods step in) or IRC.
On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.
On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.
There's a big difference between consuming controversial content from people you believe are a loud minority and controversial content from what you believe is a majority of people.
Aurornis
> Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually
I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.
A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.
jon-wood
Or if the moderation was good someone would go “nope, take that bullshit elsewhere” and kick them out, followed by everyone getting on with their lives. It wasn’t obligatory for communities to be cesspits.
LogicFailsMe
I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator, because that's what people want to hear, but it would most likely end up as a moderator's assistant.
Propelloni
It would just become part of the shitshow, cf. Grok.
everdrive
When video games first started taking advantage of behavioral reward schedules (e.g. Skinner-box stuff such as loot crates and random drops), I noticed it and would discuss it with friends. Our joking name for them was "crack points" (as in the drug). For instance, the random drops in a game like Diablo 2 are rewarding in much the same way a slot machine is rewarding. There's a variable ratio of reward, and the addictive bit is that you don't know when the next "hit" will come, so you just keep pulling the lever (in the case of a slot machine) or doing boss runs (in the case of Diablo 2).
We were three friends: a psychology major, a recovering addict, and then a third friend with no background for how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.
I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.
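The variable-ratio schedule the comment above describes can be sketched in a few lines. This is a minimal illustration with made-up numbers (a hypothetical 5% drop chance per run, not taken from any actual game): each attempt rewards independently with probability p, so the gap between rewards is unpredictable even though the long-run average is fixed.

```python
import random

def runs_until_drop(p, rng):
    """Variable-ratio schedule: each 'boss run' independently drops the
    reward with probability p, so the gap until the next reward varies."""
    runs = 1
    while rng.random() >= p:
        runs += 1
    return runs

rng = random.Random(0)
gaps = [runs_until_drop(0.05, rng) for _ in range(1000)]
# The *average* gap is ~1/p = 20 runs, but individual gaps swing wildly;
# that unpredictability, not the average payout, is what keeps the lever
# getting pulled.
print(sum(gaps) / len(gaps), min(gaps), max(gaps))
```

The same loop with a fixed gap (a reward every 20th run exactly) is a fixed-ratio schedule, which behavioral research finds far less compelling.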
ezst
I suspect it got worse with the advent of algorithm-driven social networks. When rage inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithms-free platforms.
femiagbabiaka
I know that some folks dislike it, but Bluesky and atproto in particular have provided the perfect tools to achieve this. There are some people, largely those who migrated from Twitter, who mostly treat Bluesky like an all-liberal version of Twitter, which results in a predictably toxic experience, like bizarro-world Twitter. But the future of a less toxic social media is in there, if we want it. I've created my own feeds that allow topics I'm interested in and blacklist those I'm not -- I'm in complete control. For what it's worth, I've also had similarly pleasant experiences using Mastodon, although I don't have the same tools there that I do on Bluesky.
alt227
I personally don't feel like an ultra-filtered social media feed which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people. To me, only seeing things you know you are already interested in is no better than another company curating it for me.
gorbachev
I've mentioned this a few times in the past, but I'm convinced that filters that exclude work much better than filters that include.
Instead of algorithms pushing us content they think we'll like (or whatever the advertisers are paying them to push on us), the relationship should be reversed: the algorithms should push us all content except the content we don't like.
Killfiles on Usenet newsreaders worked this way and they were amazing. I could filter out abusive trolls and topics I wasn't interested in, but I would otherwise get an unfiltered feed.
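The killfile idea above is easy to sketch. This is a hypothetical example (the field names, blocklist entries, and `apply_killfile` function are all made up for illustration): the feed is unfiltered by default, and only items matching the user's own blocklist are dropped.

```python
# Hypothetical killfile: drop posts from listed authors or on listed topics;
# everything else comes through unfiltered.
KILLFILE = {"authors": {"abusive_troll"}, "topics": {"brexit"}}

def apply_killfile(feed, killfile=KILLFILE):
    """Exclude-style filter: keep every post except killfiled ones."""
    return [post for post in feed
            if post["author"] not in killfile["authors"]
            and post["topic"].lower() not in killfile["topics"]]

feed = [
    {"author": "alice",         "topic": "Gardening", "text": "..."},
    {"author": "abusive_troll", "topic": "Gardening", "text": "..."},
    {"author": "bob",           "topic": "Brexit",    "text": "..."},
]
print([p["author"] for p in apply_killfile(feed)])  # -> ['alice']
```

The point of the inversion is that the default is "show everything" and the user subtracts, rather than an engagement model deciding what gets added.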
stuartjohnson12
I think it's less about the content's topic and more about the meta-level. E.g. I don't want to remove pictures of broccoli because I don't like broccoli; I'm trying to remove pictures of food because they make me eat more. Similarly, I don't want to remove Political Takes I Disagree With, I want to remove Political Takes Designed To Make Me Angry. The latter has a destructive viral effect whose antidote is inattention.
Echo chamber is a loaded term. Nobody is upset about the Not Murdering People Randomly echo chamber we've created for ourselves in civilised society, and with good reason. Many ideologies are internally stable and don't virally cause the breakdown of society. The concerning echo chambers are the ones that intensify and self-reinforce when left alone.
femiagbabiaka
> I personally don't feel like an ultra-filtered social media feed which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.
You are the one who gets to control what is filtered or not, so that's up to you. It's about choice. By the way, a social media experience which is not "ultra filtered" doesn't exist. Twitter is filtered heavily, with a bias towards extreme right-wing viewpoints, the ones its owner agrees with. And that sort of filtering disguised as lack of bias is a mind virus. For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company I admired was following an account that posted almost exclusively things along the lines of "blacks are all subhuman and should be killed." How did a seemingly normal person fall into that? One "unfiltered" tweet at a time, I suppose.
> To me, only seeing things you know you are already interested in is no better than another company curating it for me.
I curate my own feeds. They don't have things I only agree with in them, they have topics I actually want to see in them. I don't want to see political ragebait, left or right flavoured. I don't want to see midwit discourse about vibecoding. I have that option on Bluesky, and that's the only platform aside from my RSS reader where I have that option.
Of course, you also have the option to stare endlessly at a raw feed containing everything. Hypothetically, you could exactly replicate a feed that aggregates the kind of RW viewpoints popular on Twitter and look at it 24/7. But that would be your choice.
rcxdude
At least when you do this you are aware of it happening. Algorithmic feeds can shift biases without you even noticing.
tucnak
> Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.
I have another wise-sounding soundbite for you: "I disapprove of what you say, but I will defend to the death your right to say it" (popularly attributed to Voltaire, though the line is Evelyn Beatrice Hall's). All this sounds dandy and fine, until you actually try and examine the beliefs and prejudices at hand. It would seem that such examination is possible, and it is—in theory, whereas in practice, i.e. in application of language—"ideas" simply don't matter as much. Material circumstance, mindset, background, all these things that make us who we are, are largely immutable in our own frames of reference. You can get exposed to new words all the time, but if they come in a language you don't understand, it's of no use. This is not a bug, but a feature, a learned mechanism that allows us to navigate massive search spaces without getting overwhelmed.
the_mitsuhiko
So far, my experience is that unless you subscribe to the general narrative of the platform, the discover algorithm punishes you by directing the mob your way.
I had two of my Bluesky posts on AI attacked by all kinds of random people, which in turn led to some of those folks sending me emails and dragging some of my Lobsters and Hacker News comments into online discourse. Not a particularly enjoyable experience.
I’m sure one can have that same experience elsewhere, but really it’s Bluesky where I experienced this on a new level personally.
femiagbabiaka
I saw that, and I'm sorry it happened. I thought both the response to your original post and the resulting backlash to both you and everyone who engaged with you sincerely were absurd. I think that because of atproto you have the flexibility to create a social media experience where that sort of thing cannot happen, but I also understand why you in particular would be put off from the whole thing.
jeromegv
I enjoy Mastodon a lot. Ad-free, algo-free. I choose what goes in my feed, I do get exposed to external viewpoints by people boosts (aka re-tweets) and i follow hashtags (to get content from people I do not know). But it's extremely peaceful, spam and bots are rare and get flagged quickly. There's a good ecosystem of mobile apps. I can follow a few Bluesky people through a bridge between platforms and they can follow me too.
That's truly all I need.
baiac
Doesn’t Bluesky have a set of moderation rules that guarantee that it will turn into bizarro-world Twitter?
erwincoumans
I tried Bluesky and wanted to like it. My account got flagged as spam, still no idea why. Ironically, it could be another way of losing one's voice to an LLM :)
femiagbabiaka
Well that's the thing -- you might be flagged as spam in the Bluesky PDS, but there are other PDS's, with their own feeds and algorithms, and in fact you can make your own if you so choose. That's a lot of work, and Twitter is definitely easier, but atproto means that an LLM cannot steal your voice.
embedding-shape
> My account got flagged as spam, still no idea why.
This happened to me too, 3 weeks ago. The email said why I got flagged as spam, I replied to the email explaining I actually was a human, and after some minutes they unflagged my account. Did you not receive an email saying why?
fortran77
If you follow certain people, various communities will, en masse, block you and report you automatically with software "block lists". This can lead to getting flagged as spam.
Lapel2742
> it's how the algorithms promote engagement.
They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.
IMTDb
> should be heavily regulated.
By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
afavour
I’d favour regulation towards transparency if nothing else. Show what factors influence appearance in a feed.
Frieren
> But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape.
For example, we could forbid corporations from using any algorithm beyond sorting posts by date. Regulation could also forbid gathering data about users: no gender, no age, none of the rest.
> Calling it “regulation” is just a polite veneer over wanting control.
It is you that may have misinterpreted what regulations are.
Aurornis
I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.
Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.
Whenever I ask what regulations would help, the only responses are extremes like "banning algorithms" or something. Most commenters haven't stopped to realize that Hacker News is an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black-box algorithm?).
rdiddly
Control is the whole point. One person being in charge, enacting their little whims, is what you get in an uncontrolled situation and what we have now. The assumption is that you live in a democratic society and "the regulator" is effectively the populace. (We have to keep believing democracy is possible or we're cooked.)
mentalgear
It's really not that complicated:
- Ban algorithmic optimization that feeds on and proliferates polarisation.
- To heal society: implement discussion (commenting) features that allow (atomic) structured discussions to build bridges across cohorts and help find consensus (vs. thousands of comments screaming the same nonsense).
- Force the SM Companies to make their analytics truly transparent and open to the public and researchers for verification.
All of this could be done tomorrow, no new tech required. But it would lose the SM platforms billions of dollars.
Why? Because billions of people posting emotionally, commenting with rage, yelling at each other, and repeating the same superficial arguments/comments/content over and over without ever finding common ground trap far more users in the SM companies' engagement loop than people having civilised discussions, finding common ground, and moving on from a topic would.
One kind of social media would unlock a great consensus-based society for the many; the other delivers endless dystopian screaming battles and riches for a few, while spiralling the world further into a global theatre of cultural and actual (civil) war, thanks to the Zuckerbergs & Thiels.
trinsic2
By a not-for-profit community organization with zero connection to, or interest in, any for-profit enterprise, one that represents the stable wellbeing of society and has a specific mandate to do so.
Just like the community organizations we had that watched over government agencies, which we allowed to be destroyed in the name of profit. It's not rocket science.
vladms
My view is that they are just exposing issues with the people in those societies, and now it is harder to ignore them. Much of the hate, fear, and envy that I see on social networks has other causes, but people are having difficulty addressing those.
With or without social networks this anger will go somewhere, don't think regulation alone can fix that. Let's hope it will be something transformative not in the world ending direction but in the constructive direction.
Lapel2742
They seem to artificially create filter bubbles, echo chambers and rage. They do that just for the money. They divide societies.
For example:
(Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth)
> First, there is a consistent observation across computational audits and simulation studies that platform curation systems amplify ideologically homogeneous content, reinforcing confirmation bias and limiting incidental exposure to diverse viewpoints [1,4,37]. These structural dynamics provide the “default” informational environment in which youth engagement unfolds. Simulation models highlight how small initial biases are magnified by recommender systems, producing polarization cascades at the network level [2,10,38]. Evidence from YouTube demonstrates how personalization drifts toward sensationalist and radical material [14,41,49]. Such findings underscore that algorithmic bias is not a marginal technical quirk but a structural driver shaping everyday media diets. For youth, this environment is especially influential: platforms such as TikTok, Instagram, and YouTube are central not only for entertainment but also for identity work and civic socialization [17]. The narrowing of exposure may thus have longer-term consequences for political learning and civic participation.
__MatrixMan__
I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.
rdtsc
> I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)
criddell
A social network can be great. Social media — usually not.
Something like Instagram where you have to meet with the other party in person to follow each other and a hard limit on the number of people you follow or follow you (say, 150 each) could be an interesting thing. It would be hard to monetize, but I could see it being a positive force.
amrocha
Your loss.
Twitter was an incredible place from 2010 to 2017. You could randomly message someone and they would more often than not respond. Eventually an opportunity would come and you'd meet in person. Or maybe you'd form an online community and work towards a common goal. Twitter was the best place on the internet during that time.
Facebook had a golden age as well. It was the place to organize events, parties, and meetups, before Instagram and DMs took over. Nothing beats seeing someone post an album from last night's party and messaging your friends asking them if they remember anything that happened.
I know being cynical is trendy, but you genuinely missed out. Social dynamics have changed. Social media will never be as positive on an individual level as it was back then.
glitchc
Do LinkedIn as well. I got rid of it earlier this year. The "I am so humbled/blessed to be promoted/reassigned/fired.." posts reached a level of parody that I just couldn't stomach any longer. I felt more free immediately.
N.B. Still employed btw.
Aurornis
You can have a LinkedIn profile without reading the feed.
This is literally how most of the world uses LinkedIn.
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
tayo42
Yeah, I just use LinkedIn as a public resume and a messaging system for recruiters. Though even that goes through my email.
UnreachableCode
LinkedIn bothers me the least, even though it definitely has some of the highest level of cringe content. It's still a good tool to interact with recruiters, look at companies and reach out to their employees. The trick is blocking the feed with a browser extension.
isoprophlex
Sorting the feed by "recent" at least gives you a randomized assortment of self aggrandizement, instead of algorithmically enhanced ragebait
Aurornis
Better suggestion: Ignore the feed if you don’t like it.
Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.
I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.
nathan_compton
I have a special, deep, loathing for linkedin. I honestly can't believe how horrible it is and I don't understand why people engage with it.
hobofan
I don't understand how people can be so dismissive of LinkedIn purely for its resume function.
For essentially every "knowledge worker" profession with a halfway decent CV, a well kept LinkedIn resume can easily make a difference of $X0,000 in yearly salary, and the initial setup takes one to a few hours. It's one of the best ROI actions many could do for their careers.
Many engineers are dismissive of doing that, and their justifications are often full of privilege.
amrocha
You have a special loathing for a site where you can message professional contacts when you need to?
Nobody is forcing you to use the social networking features. Just use it as a way to keep in touch with coworkers.
boxerab
This. Linkedin is garbage, yet I still use it because there are no competitors. This is what happens in a monoculture.
amrocha
Do you really want a “competitor” to linkedin? Do you really want to have to make and manage accounts on multiple sites because you need a job and you don’t know which a company uses?
Isn’t it better to have a single place you check when you need a job because everyone else is also there?
drbojingle
No, there needs to be control over the algorithms that get used. You ought to be able to tune them. There needs to be a Google-fu equivalent for social media. Or, instead of one platform, one algorithm, let users define the algorithm to a certain degree, using LLMs to help with that, and then allow others to access your algorithms too. Asking Facebook to tweak the algorithm is not going to help, imo.
rcxdude
IMO there should not be an algorithm. You should just get what you have subscribed to, with whatever filters you have defined. There are better and worse algorithms but I think the meat of the rot is the expectation of an algorithm determining 90% of what you see.
LogicFailsMe
One could absolutely push algorithms that personalize towards what the user wants to see. I think LLMs could be amazing at this. But that's not the maximally profitable algorithm, so nobody does it.
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
anonymouskimmer
Could someone use a third-party AI agent to re-curate their feeds? If it were running from the user's computer, I think this would avoid any API legal issues; otherwise ad and script blockers would have been declared illegal long ago.
> but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I've never used it, but yes this is what I expected. It would be better to have topical lists that users could manually choose to follow or block. This would avoid quite a bit of the "mean girl" selectivity. Though I suppose you'd get some weird search-engine-optimization like behavior from some of the list curators (even worse if anyone could add to the list).
Aurornis
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: you get more of what you engage with. If you don't want to hear a lot of Brexit talk, don't engage with Brexit content. Unfollow people who are talking a lot about Brexit.
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
graemep
> If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
That is really limiting though. I do not want to see Brexit ragebait in my threads, but I am quite happy to engage in intelligent argument about it. The problem is that if, for example, a friend posts something about Brexit I want to comment on, my feed then fills with ragebait.
My solution is to bookmark the friends and groups pages, and the one group I admin and go straight to those. I have never used the app.
ben_w
I got out of Twitter for a few reasons; part of what made it unpleasant was that it didn't seem to be just what I did that adjusted my feed, but that it was also affected by what the other people I connected to did.
Uehreka
> You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.
The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.
You can spend years posting prolifically about open source software, Blender and VFX on Instagram, and the algorithm will toss you a couple of things, but it won't really know what to do with you (aside from maybe selling you some stock video packages).
But you make one three-word comment about Brexit and the algorithm goes "GOTCHA! YOU'RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!" And now you're opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you're clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).
Aurornis
> The algorithm doesn’t show you “more of the things you engage with”,
That’s literally what the complaint was that I was responding to.
You even immediately contradict yourself and agree that the algorithm shows you what you engage with:
> But you make one three word comment about Brexit and the algorithm goes up
> Now your feed is trash forever, unless you engage with content from another mainstream category
This is exactly what I already said: If you want to see some content, engage with it. If you don’t want to see that content, don’t engage with it.
Personally, I regret engaging with this thread. Between the ALL CAPS YELLING and the self-contradictory posts this is exactly the kind of rage content and ragebait that I make a point to unfollow on social media platforms.
fortran77
I use X. I have an enormous blocklist and I block keywords. I found that I can also block emoji. This keeps my feed focused on what I want to see (no politics, just technology, classical and jazz music, etc.).
coffeecoders
I actually think we're overestimating how much of "losing our voice" is caused by LLMs. Even before LLMs, we were doing the same tweet-sized takes, the same Medium-style blog posts and the same corporate tone.
Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
mewpmewp2
Yes, fully agreed. Most people producing content were always doing it to get quick clicks and engagement. People always had to filter things anyhow and you had to choose where you get your content from.
People were posting Medium posts rewriting someone else's content, wrongly, etc.
gregates
Also ironic is how the post about having a unique voice is written in one-sentence-paragraph LinkedIn clickbait style.
riazrizvi
Content recycling has become so cheap, effort-wise, it’s killed the business. Thank god.
coffeecoders
Yes. That particular content-farm business model (rewrite 10 articles -> add SEO slop -> profit) is effectively dead now that the marginal cost is zero.
I’m not mourning it.
acedTrex
I mean, if you typed something by your own hand, it is in your voice. The fact that everyone tried to EMULATE the same corporate tone does not at all remove people's individual ways of communicating.
coffeecoders
I’m not sure I agree with this sentiment. You can type something "by hand" and still have almost no voice in it if the incentives push you to flatten it out.
A lot of us spent years optimizing for clarity, SEO, professionalism etc. But that did shape how we wrote, maybe even more than our natural cadence. The result wasn’t voice, it was everyone converging on the safe and optimized template.
ori_b
If you chose to trade your soul to 'incentives', and replace incisive thought with bland SEO and professionalism -- you chose this. Your voice has become the bland language of business.
AstroBen
There's something unique about art and writing where we just don't want to see computers do it
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
randycupertino
I had a weird LLM use instance happen at work this week. We were in a big important protocol review meeting with 35 remote people and someone asked how long IUDs take to begin working in patients. I put it in ChatGPT for my own reference and read the answer in my head but didn't say anything (I'm ops, I just row the boat and let the docs steer the ship). Anyway, this bigwig Oxford/Johns Hopkins cardiologist who we pay $600k a year pipes up in the meeting and her answer is VERBATIM the ChatGPT language, word for word. All she did was ask it the answer and repeat what it said! It kinda made me sad that all this big fancy doctor is doing is spitting out lazy default ChatGPT answers to guide our research :( Also everyone else in the meeting was so impressed with her, "wow Dr. so and so thank you so much for this helpful update!" etc. :-/
mtlynch
>her answer is VERBATIM reading off the ChatGPT language word for word
How could it be verbatim the same response you got? Even if you both typed the exact same prompt, you wouldn't get the exact same answer.[0, 1]
[0] https://kagi.com/assistant/8f4cb048-3688-40f0-88b3-931286f8a...
[1] https://kagi.com/assistant/4e16664b-43d6-4b84-a256-c038b1534...
randycupertino
We have a work enterprise GPT account across the company.
grey-area
The LLM may well have pulled the answer from a medical reference similar to the one used by the doctor. I have no idea why you think an expert in the field would use ChatGPT for a simple question; that would be negligence.
anonymouskimmer
A climate scientist I follow uses Perplexity AI in some of his YouTube videos. He stated one time that he uses it for the formatting, graphs, and synopses, but knows enough about what he's asking that he knows what it's outputting is correct.
An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.
randycupertino
She read it EXACTLY as written from the ChatGPT response, verbatim. If it was her own unique response there would have been some variation.
anonymouskimmer
The one thing a cardiologist should be able to do better than a random person is verify the plausibility of a ChatGPT answer on reproductive medicine. So I guess/hope you're paying for that verification, not just the answer itself.
NortySpock
Or both the doctor and ChatGPT were quoting verbatim from a reputable source?
mewpmewp2
I would love to see truly good AI art. Right now the issue is that AI isn't at the point where it could produce actually good art by itself. If we had to define art, it would be roughly the opposite of what LLMs produce right now: LLMs try to produce the statistical norm, while art is more about producing something out of the norm. When AI does try to produce something out of the norm, it only produces something random, without connections.
Art is something out of the norm, and it should make some sense at some clever level.
But if there was AI that truly could do that, I would love to see it, and would love to see even more of it.
You can see this clearly if you ask AI to make original jokes. They usually aren't very good, and when they are, it's because the model got randomly lucky somehow. It can come up with related analogies for the jokes, but that's just simple pattern matching of one thing to another, not insightful and clever observation.
oidar
And what's more, the suspicion that something was written by AI causes you to view any writing in a less charitable fashion. Once it's been approached from that angle, it's hard to shift the mental frame back to being open to the writing. Even untinged writing is infected by the smell of LLMs.
kentm
Art, writing, and communication is about humans connecting with each other and trying to come to mutual understanding. Exploring the human condition. If I’m engaging with an AI instead of a person, is there a point?
There’s an argument that the creator is just using AI as a tool to achieve their vision. I do not think that’s how people using AI are actually engaging with it at scale, nor is it the desired end state of people pushing AI. To put it bluntly, I think it’s cope. It’s how I try to use AI in my work but it’s not how I see people around me using it, and you don’t get the miracle results boosters proclaim from the rooftop if you use it that way.
turtletontine
If the writer’s entire process is giving a language model a few bullet points… I’d rather them skip the LLM and just give me the bullet points. If there’s that little intent and thought behind the writing, why would I put more thought into reading it than they did to produce it?
anonymouskimmer
A person can be just as wrong as an LLM, but unless they're being purposefully misleading, or sleep-writing, you know they reviewed what they wrote for their best guess at accuracy.
tucnak
> There's something unique about art and writing where we just don't want to see computers do it
Speak for yourself. Some of the most fascinating poetry I have seen was produced by GPT-3. That is to say, there was a short period when it was genuinely thought-provoking, and it has since passed. In the age of "alignment," what you get with commercial offerings is dog shite... But this is more a statement on American labs (and to a similar extent, the Chinese who have followed) than on "computers" in the first place. Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer. (With the added option of the reader playing one of the parts.) This will radically change how we think about textual form, and I cannot wait for compute to get there.
Re: modern-day slop, well, the slop is us.
Denial of this comes from a place of ignorance; let the blinkers off and you might learn something! Slop will eventually pass, but we will remain. This is the far scarier proposition.
WhyOhWhyQ
"inhabited by characters ACTUALLY living in the computer"
It's hard to imagine these feeling like characters from literature and not characters in the form of influencers / social media personalities. Characters in literature are in a highly constrained medium, and only have to do their story once. In a generated world the character needs to be constantly doing "story things". I think Jonathan Blow has an interesting talk on why video games are a bad medium for stories, which might be relevant.
tucnak
Please share! Computational literature is my main area of research, and constraints are very much at the center of it... I believe there are effectively two kinds of constraints: those in the language of stories themselves, as thing-in-itself, and those imposed by the author. In a way, authorship is incredibly repressive: authors impose strict limits on the characters, what they get to do, etc. This is a form of slavery. Characters in traditional plays only get to say exactly what the author wants them to say, when he wants them to say it. Whereas in computational literature, we get to emancipate the characters! This is a far cry from "prompting," but I believe there are concrete paths forward that would be somewhat familiar (but not necessarily click) for game-dev people.
Now, there are fundamental limits to the medium (as a function of computation), but that's a different story.
anonymouskimmer
> Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer.
So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
My idea of godhood is to first try to live up to a moral code that I'd be happy with if I was the creation and something else was the god.
If this isn't what you meant, then yes, choose your own adventure is fun. But we can do that now with shared worlds involving other humans as co-content creators.
hshdhdhj4444
The problem with the “your voice is unique and an asset” argument is what we’ve promoted for so long in the software industry.
Worse is better.
A unique, even significantly superior, voice will find it hard to compete against the sheer volume of terrible, non-unique LLM-generated voices.
Worse is better.
mkzet
The Internet will become truly dead with the rise of LLMs. The hacking culture of the 90s and 00s will always be the golden age. RIP
A4ET8a8uTh0_v2
Maybe. Nature abhors a vacuum. I personally suspect that something new will emerge. For better or worse, some humans work best when weird restrictions are imposed. That said, yes, the wild 90s net is dead. It probably has been for a while, but we're all still mourning it.
bdangubic
I hacked in the 90s and 00s, wasn’t that great/golden if you took your profession seriously…
FranzFerdiNaN
There are still small pockets with actual humans to be found. The small web exists. Some forums keep on going; I'm still shitposting on Something Awful after twenty years and it's still quite active. Bluesky has its faults, but it also has, for example, an active community of scholars you can follow and interact with.
whitehexagon
Not quite dead yet. For me, the rise of LLMs and BigTech has helped me turn further away from it. The more I find ads or AI injected into my life, the more accounts I close or sites I ignore. I've now removed most of my BigTech 'fixes', and find myself with time to explore the fun side of hacking again.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
johnwheeler
100%. I miss trackers and napster. I miss newgrounds. This mobile AI bullshit is not the same. I don't know why, but I hate AI. I consider myself just as good as the best at using it. I can make it do my programming. It does a great job. It's just not enjoyable anymore.
mentalgear
I've been thinking about this as well, especially in the context of historical precedents in terms of civilization/globalization/industrialization.
How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
In that process, interesting but less "scale-able" (or simply not used by the people in power) cultures, dialects, languages, craftsmanship, and ideas were often lost, replaced by easier-to-produce but often lower-quality products, through the power of "affordable economics" rather than active conflict.
We already have the concise, buzzword-heavy English business register trained into ChatGPT for formal messaging (or the casual, overexcited American register for informal messaging), which I'm afraid might take hold of global communication the same way as advanced LLM usage spreads.
mold_aid
>How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
Explain to me how "book printing" of the past "standardized communication" in the same way as LLMs are criticized for homogenizing language.
anonymouskimmer
I'm taking "same way" to be read as "authoritative", whether de facto or de jure. Basically by dint of people using what's provided instead of coming up with their own.
Everyone has the same few dictionary spellings (that are now programmed into our computers). Even worse (from a heterogeneity perspective), everyone also has the same few grammar books.
As examples: How often do you see American English users write "colour", or British English users write "color", much less colur or collor or somesuch?
Shakespeare famously spelled his own last name half a dozen or so different ways. My own patriline had an unusual variant spelling of the last name, that standardized to one of the more common variants in the 1800s.
https://en.wikipedia.org/wiki/History_of_English_grammars
"Bullokar's grammar was faithfully modelled on William Lily's Latin grammar, Rudimenta Grammatices (1534).[9] Lily's grammar was being used in schools in England at the time, having been "prescribed" for them in 1542 by Henry VIII.[5]"
It goes on to mention a variety of grammars that may have started out somewhat descriptive, but became more prescriptive over time.
leetrout
Hits close to home. I've caught myself tweaking AI drafts just to make them "sound like me". That uniformity in feeds is real, and it's like scrolling through a corporate newsletter disguised as personal takes.
What if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas, then rewrite everything by hand to sharpen that personal blade. Is the atrophy risk still huge?
Nudge to post more of my own mess this week...
logsr
In my view LLMs are simply a different method of communication. Instead of relying on "your voice" to engage the reader and persuade them of your point of view, writing with LLMs for analysis and exploration is about creating an idea space that a reader can interact with, explore from their own perspective, and develop their own understanding of, which is much more powerful.
ChrisMarshallNY
Not sure if it's an endemic problem, just yet, but I expect it to be, soon.
For myself, I have been writing, all my life. I tend to write longform posts, from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
aquariusDue
I just want to chime in and say I enjoy reading your takes across HN, it's also inspiring how informative and insightful they are. Glazing over, please never stop writing.
ChrisMarshallNY
Thanks so much!
BoredomIsFun
Ironically, the post sounds beige and AI-generated.
In any case, as someone who has experimented with AI for creative writing: LLMs _do not destroy_ your voice. They do flatten it, but with minimal effort you can make the output sound the way you feel best reflects your thought.
truelson
It's still an editor I can turn to in a pinch when my favorite humans aren't around. It makes better analogies sometimes. I like going back and forth with it, and if it doesn't sound like me, I rewrite it.
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
dlisboa
It's ironic that https://substack.com/@davebarry uses a lot of AI-generated imagery. Maybe the death of vision is not exaggerated.
I don't judge, I'm not an artist so if I wanted to express myself in image I'd need AI help but I can see how people would do the same with words.
WD-42
Where are these places where everything is written by a LLM? I guess just don’t go there. Most of the comments on HN still seem human.
tensegrist
I think the front page of HN has had at least one LLM-generated blog post or large GitHub README on it almost every day for several months now.
vladms
Tbh I prefer to read/skim the comments first and only occasionally read the original articles if the comments make me curious enough. So far I've never ended up checking something that seemed AI-generated.
grey-area
Many instagram and facebook posts are now llm generated to farm engagement. The verbosity and breathless excitement tends to give it away.
heltale
It’s pretty much all you see nowadays on LinkedIn. Instagram is infected by AI videos that Sora generates while X has extremist views pushed up on a pedestal.
codeflo
The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
thundergolfer
Ironically this post is written in a pretty bland, 'blogging 101' style that isn't enjoyable to read and serves just to preach a simple, consensus idea to the choir.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
exasperaited
True, but one of the least-explored problems with AI is that because it can regurgitate basic writing, basic art, basic music with ease, there is this question:
Why do it at all if I won't do better than the AI?
The worst risk with AI is not that it replaces working artists, but that it dulls human creativity by killing the urge to start.
I am not sure who said it first, but every photographer has ten thousand bad photos in them and it's easier if they take them at the beginning. For photographers, the "bad" is not the technical inadequacy of those photos; you can get past that in the first one hundred. The "bad" is the generic, uninteresting, uninspiring, underexplored, duplicative nature of them. But you have to work through that to understand what "good" is. You can't easily skip these ten thousand photos, even if your analysis and critique skills are strong.
There's a lot to be lost if people either don't even start or get discouraged.
But for writing, most of the early stuff is going to read much like this sort of blog post (simply because most bloggers are stuck in the blogging equivalent of the ten thousand photos; the most popular bloggers are not those elevating writing).
"But it looks like AI" is the worst, most reflexive thing about this, because it always will, since AI is constantly stealing new things. You cannot get ahead of the tireless thief.
The damage generative AI will do to our humanity has only just started. People who carry on building these tools knowing what they are doing to our culture are beneath our contempt. Rampantly overcompensated, though, so they'll be fine.
O_H_E
There was recently this link talking about AI slop articles on medium
https://rmoff.net/2025/11/25/ai-smells-on-medium/
He doesn't link many examples, but at the end he gives the example of an author pumping out 8+ articles in a week across a variety of topics. https://medium.com/@ArkProtocol1
I don't spend time on medium so I don't personally know.
A4ET8a8uTh0_v2
I continually resist the urge to deploy my various personas onto hn, because I want to maintain my original hn persona. I am not convinced other people do the same. It is not that difficult to write in a way that avoids some tell tale signs.
jmkni
I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc. Who gets rewarded, the content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that promote cohort and individual preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.