Why do AI models use so many em-dashes?
63 comments
November 2, 2025
lordnacho
cornonthecobra
This is mine as well, with the addition of books. If someone wanted to train a bot to sound more human, they would select data that is verifiably human-made.
The approachable tone of popular print media also preselects for the casual, highly-readable style I suspect users would want from a bot.
kubb
It also seems that LLMs are using them correctly — as a pause or replacement for a comma (yes, I know this is an imprecise description of when to use them).
Thanks to LLMs I learned that using the short binding dash everywhere is incorrect, and I can improve my writing because of it.
tim333
That kind of fits with Altman saying they put them in because users liked them (https://www.linkedin.com/posts/curtwoodward_chatgpt-em-dash-...)
I guess in the past if you'd shown me a passage with em dashes I'd say it looks good because I associate it with the New Yorker and Economist, both of which I read. Now I'd be a bit more meh due to LLMs.
spuz
According to the CEO of Medium, the reason is that their founder, Ev Williams, was a fan of typography and asked that their software automatically convert two hyphens (--) into a single em-dash. Then, since Medium was used as a source of high-quality writing, he believes AI models picked up a preference for em-dashes from that writing.
hshdhdhehd
If Medium was a source, why don't AI models stop halfway through their output and ask for a subscription and/or payment?
spuz
The whole interview goes into that and talks about the benefits and costs of allowing search and AI crawlers access to Medium articles.
scrollaway
Give OpenAI a few more months :)
don_neufeld
[Founding CTO of Medium here]
It wasn’t just Ev - I can confirm that many of us were typography nuts ;)
Marcin, for example, did some really crazy stuff.
https://medium.design/crafting-link-underlines-on-medium-7c0...
trvz
Too bad y’all weren’t UX nuts. Your platform is so hostile, I blocked it in Pihole.
don_neufeld
Oh, we definitely were. I don't know too many of the folks there these days; it's been 12 years since I left.
Hostile? That’s definitely a take. Curious what you’re thinking there.
steve1977
> since Medium was used as a source for high-quality writing
That explains a lot…
iansteyn
It’s a real pity to me that em-dashes are becoming so disliked for their association with AI. I have long had a personal soft spot for them because I just like them aesthetically and functionally. I prided myself on searching for and correctly using em, en, and regular dashes, had a Google docs shortcut for turning `- - -` into `—` and more recently created an Obsidian auto-replacement shortcut that turns `-em` into `—`. Guess I’ll just have to use it sparingly and keep my prose otherwise human.
jasonvorhe
Don't change your behaviour because some corporations made questionable decisions.
Your readers won't care about the dashes as long as the text reads like it has human origins and you have something to say.
keiferski
Unfortunately, a lot of contests and the like ban AI usage without having a formal system for detecting it. In practice, that means anyone using a lot of em-dashes will be flagged by a reviewer as likely AI.
iamdamian
I would say that using a lot of em-dashes was always bad writing. You want to use them sparingly if you want them to have impact.
That said, yes, keep using them (and using them well!).
whynotmakealt
If I found em-dashes and other patterns we correlate with AI, like "it's not just X, but Y," I might call out a person for using it.
I don't understand the purpose of using LLMs to write articles unless someone wants to be the middleman of slop. If that's the case, I'd rather cut out the middleman and get slop directly from the AI models. Instead of pasting whatever ChatGPT generated, give me the prompt (and maybe the temperature and other settings, if needed, to make it more reproducible); the prompt itself could be enough.
I am not saying you should change your writing style, but at the same time you have to understand: if someone writes like AI, chances are we are too tired to dig in and figure out whether it was actually written by AI. We are tired of it, so you must understand our frustration, or anybody's, when they call out someone's writing as AI.
For those using AI to write articles and the like: if you are passionate about something, write about it, write what you want, how you want, and you will be proud of it. If you use an LLM, you will constantly be called out, and frankly it defeats the purpose of writing.
For code, there is a debate that code is just a means to an end (to do stuff like scripts, etc.), but writing has no such end. What is it for? More views? There is no point in chasing that kind of attention, considering it would just be negative attention if I or anyone else spotted the AI writing.
Not sure why people use AI text generation for articles. I don't know.
This is my alt but when I had first started out on HN, I thought my english was fine but then somebody pointed it out and I try to fix my grammar and now its second nature to me writing.
I would be curious to know why people write with AI in the first place. It doesn't make sense to me, since the other side would just use their own slop to counter your slop; at that point, just write a tl;dr post. Why stretch an article into more words than necessary? (I feel like I personally write a lot of filler words too, but at least you know a human is writing this.) I don't get the point of writing longer text if you aren't even writing it. Is the goal SEO, or is the end goal money, like all things?
DonHopkins
Well if you write paragraphs of redundant repetitive parenthetical text yourself (like I tend to do), that meanders around and repeats the point again and again (oh there I go again), like both you and I obviously do (and I'm doing now), then LLMs can be useful for condensing and sharpening it.
For example, your post could have been just one paragraph and said the same thing. Do you purposefully write so verbosely as a virtue signal of authenticity?
And no, readers can't always ask the LLM to reproduce the same slop, because they don't have the verbose, redundant (there I go again) original source text that it's condensing. And even if they did, they would not bother reading it, because it's tl;dr.
Nobody wants to read pages of repetitive human generated slop, either.
PS:
>I thought my english was fine but then somebody pointed it out and I try to fix my grammar and now its second nature to me writing.
Since you asked for somebody to point it out:
Use it's when it's a contraction for "it is" or "it has," and use its when it's a possessive pronoun showing ownership. A helpful trick is to try replacing the word with "it is" or "it has" in the sentence; if it still makes sense, use "it's".
Full disclosure, in case you can't tell: the paragraph above was LLM generated. Did you find it helpful, was it tl;dr, or did you dislike "its" style?
krzrak
I feel you... For 30+ years of my life I prided myself on writing without typos and other mistakes (without autocorrect), using lots of bullet points, dashes, and words such as "delve into" or "underscore".
Now I find myself intentionally adding typos and other msitakes, and using less sophisticated language, just to not be accused of using AI.
hdgvhicv
It’s been about 30 years since prose editors like Word started underlining spelling mistakes in red. I don’t get typos when writing formal text on a keyboard. One-handed on a touchscreen phone with “auto correct” causing issues is another thing, but not for published articles.
topaz0
The distinctiveness of LLM language comes from overuse of specific words, not because it has a particularly sophisticated vocabulary. Some of the words it overuses may be considered sophisticated by some people, but that's not what makes it identifiable (or what makes it grating). It's still not hard to distinguish your voice from LLMs by being thoughtful about style at all.
(Edit: corrected (unintentional) typo)
TheOtherHobbes
It's not just [thing], it's [more dramatic thing].
You can customise the default style over an impressive range. Most people don't, so most AI writing is distilled essence of Failed LinkedIn Marketer, even when that style conflicts hilariously with the content.
matsemann
I don't mind that in a "proper" text, where it's actually useful and fun to read something with a bit of flair. But maybe it has always irked people in short form (forum comments etc.), and they've just never called it out until now? I do sometimes read something that gives me an "iamverysmart" feeling, as if the author used a thesaurus to find a synonym for half the words to sound clever, but it just makes the whole thing incomprehensible.
TheOtherHobbes
Americans famously have a median 6th grade reading age, so words like "delve" and "perspicacity" aren't going to win friends and influence people.
Ironically, AI writing is too literate. It reads like clunky pastiche to literate readers, but it's still using words and constructions less literate readers haven't seen before.
topaz0
Part of it is the guilt-by-association with the other bad writing habits of LLMs, but I think a lot of it is just that LLMs genuinely overuse them, and that homogeneity is grating just like it's grating when you notice a text reuses a particular noticeable word or whatever. As a fellow em-dash user, I have sometimes noticed myself overusing them too, and revised accordingly, starting well before the proliferation of this particular cancer.
So I think you can keep using em-dashes without being associated with LLMs as long as you reserve them for particularly effective/tasteful occasions.
nandomrumber
I agree, parentheses are not only used incorrectly in a lot of online writing, they’re also ugly.
lm28469
While you're being automated out of your dashes, people are being automated out of their jobs. Relax, you'll be OK.
eastbound
Cmd + “-“ = –
Cmd + Shift + “-“ = —
Let’s spread the word until everyone fancy uses them, and then those who criticize text for coming from LLMs will be ridiculed by our ridiculous skills.
Etheryte
That's interesting, for me those shortcuts are with option, not command. On my laptop, the first shortcut you wrote down is used to zoom out.
latexr
It’s ⌥ instead of ⌘, and those exact shortcuts depend on keyboard layout. You posted the US version, but others reverse the em and en dashes.
withinboredom
Or on Linux with the compose key, it is also different.
sixhobbits
I would think the most obvious explanation is that they are used as part of a watermark to help OpenAI identify text, i.e. the model isn't doing it at all, but a final-pass process is adding statistical patterns on top of what the model actually generates (along with words like 'delve' and other famous GPT signatures).
I don't have evidence that that's true, but it's what I assume and I'm surprised it's not even mentioned as a possibility.
When I studied author profiling, I built models that could identify specific authors just by how often they used very boring words like 'of' and 'and', given enough text. So I'm assuming that OpenAI plays around with variables like that, which would be much harder for humans to spot, but probably uses several layers of watermarking to make it harder to strip, which results in some 'obvious' ones too.
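[Editor's note] The function-word profiling the commenter describes can be sketched in a few lines. This is a toy illustration with a tiny, made-up word list (real stylometry uses hundreds of features and far more text), not the commenter's actual models:

```python
from collections import Counter
import math

# A handful of "boring" function words; real stylometry uses hundreds.
FUNCTION_WORDS = ["of", "and", "the", "to", "in", "a", "that", "is"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two frequency profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def attribute(unknown_text, known_samples):
    """Guess the author whose known writing sample is stylistically closest."""
    target = profile(unknown_text)
    return min(known_samples,
               key=lambda author: distance(target, profile(known_samples[author])))
```

With enough text per author, even a few such dimensions separate writers surprisingly well, and a deliberate watermark could nudge exactly these statistics.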
constantius
Obvious watermarking that consistently gets a lot of hate from vocal minorities (devs, journalists, etc.) would probably be simply removed for the benefit of those other layers you mention.
But the watermarking layers are a fascinating idea (and extremely likely to exist), thanks!
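[Editor's note] A detectable statistical watermark along these lines has been described in the research literature as a "green list" scheme: hash the previous token to split the vocabulary into green and red halves, bias generation toward green tokens, and detect by counting the green fraction. Below is a toy sketch of that published idea; it assumes nothing about what OpenAI actually does:

```python
import hashlib
import random

def green_list(prev_word, vocab, fraction=0.5):
    """Deterministically mark part of the vocabulary 'green', seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    words = sorted(vocab)
    rng.shuffle(words)
    return set(words[:int(len(words) * fraction)])

def generate_watermarked(start, vocab, n=10):
    """Toy 'generator' that always picks a green word (real schemes merely bias the choice)."""
    out = [start]
    for _ in range(n):
        out.append(sorted(green_list(out[-1], vocab))[0])
    return " ".join(out)

def green_fraction(text, vocab):
    """Detector: what fraction of in-vocabulary words are 'green' given their predecessor?"""
    words = text.lower().split()
    hits, pairs = 0, 0
    for prev, cur in zip(words, words[1:]):
        if cur in vocab:
            pairs += 1
            if cur in green_list(prev, vocab):
                hits += 1
    return hits / max(pairs, 1)
```

Watermarked output scores near 1.0 while unwatermarked text hovers around the green fraction (here 0.5): a layer that is hard to spot by eye but, once known, straightforward to strip.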
xandrius
Honestly the most obvious explanation is that the training set has a lot of them, not some sort of watermarking conspiracy. Occam's razor at its best.
xg15
The "book scanning" hypothesis doesn't sound so bad — but couldn't it simply be OCR bias? I imagine it's pretty easy for OCR software to misrecognize hyphens or other kinds of dashes as em-dashes if the only distinction is a subtle difference in line length.
0xbadc0de5
My first thought was watermarking. Same for its affinity for using emojis in bullet lists.
keiferski
I am no grammarian, but I feel like em-dashes are an easy way to tie together two different concepts without rewriting the entire sentence to flow more elegantly. (Not to say that em-dashes are inelegant, I like them a lot myself.)
And so AI models are prone to using them because they require less computation than rewriting a sentence.
spidersouris
What we also learned after GPT-3.5 is that, to circumvent the need for new training data, we could simply resort to existing LLMs to generate new, synthetic data. I would not be surprised if the em dash is the product of synthetically generated data (perhaps forced to be present in this data) used for the training of newer models.
iddan
I’m now reading Pride and Prejudice (first published in 1813) and indeed there are many em dashes. It also includes language patterns the models didn’t pick up (vocabulary, “to morrow” instead of “tomorrow”).
moffkalast
I'm gonna start calling it yes terday.
keiferski
Yester-day feels plausible and kind of elegant.
hdgvhicv
Yesterday’s yes terday is today’s yes today.
hshdhdhehd
Yes. Turd day.
Etheryte
Another reason that I think contributes to it, at least partially, is that other languages use em-dashes. Most people use LLMs in English, but that's not the only language they know, and many other languages have pretty specific rules and uses for em-dashes. For example, I see em-dashes regularly in local European newspapers, and I would expect those to be written by a human for the most part, simply because LLM output is not good enough in smaller languages.
Fricken
Historically I would see far more em-dashes in capital "L" literature than I would in more casual contexts. LLMs assign more weight to literature than to things like reddit comments or Daily Mail articles.
Gigachad
I think this is most of it. The most obvious sign of AI slop is mismatched style with the medium. People are posting generated text to Reddit which reads like a school essay or linkedin inspirational post. Something no one did before. So even though the style is not unprecedented, it’s taken out of its original context.
My pet theory is similar to the training-set hypothesis: em-dashes appear often in prestige publications. The Atlantic, The New Yorker, The Economist, and a few others that are considered good writing. Being magazines, there are a lot of articles over time, reinforcing the style. They're also the sort of thing an RLHF rater will think is good, not because of the em-dash but because the general style is polished.
One thing I wondered is whether high-prestige writing is explicitly encoded into the models, but it doesn't seem far-fetched that there are various linkages inside the data saying "this kind of thing should be weighted highly."