
It's rude to show AI output to people

phito

I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.

SoftTalker

Even worse when they accidentally leave in the dialog with the AI. Dead giveaway. I got an email from a colleague the other day and at the bottom was this line:

> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?

righthand

Clippy is rolling in his grave.

righthand

Seriously, you should respond to the slop in the email and waste your coworker's time too.

“No I don’t need this formatted for Outlook Dave. Thanks for asking though!”

AlecSchueler

That wastes your own time as well though.


lxgr

"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate you wanting to communicate stylistically and grammatically correct, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.

Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."

pyman

I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Neither of these things will matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.

People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.

shreezus

LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.

“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”

vouaobrasil

Best thing you can do is quit LinkedIn. I deleted my account as soon as I noticed AI-generated content there.

stevekemp

I guess that makes sense, unless you're single. LinkedIn is the new tinder.

quietbritishjim

> now the interface deliberately suggests AI-generated responses to posts

This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit some internal targets.

distantprovince

This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply if you want to be polite", and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.

j45

AI content that doesn't read as AI today will have to be the kind that still doesn't read as AI in one or two years.

Folks who are new to AI are just posting away in that December 2022 ChatGPT style, because it's new to them.

It is best to personally understand your own style(s) of communication.

herval

have you tried sharing that feedback with them?

one of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback - it felt like he wasn't even listening when he just copy-pasted clearly-AI responses. Thankfully he stopped doing it.

Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.


anal_reactor

I love it because it allows me to filter out people not worth my time and attention beyond minimal politeness and professionalism.

benatkin

echelon

Wow. What a good giveaway.

I wonder what others there are.

I occasionally use bullet points, em-dashes (Unicode, single, and double hyphens), and words like "delve". I hate that these are the new heuristics.

I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.

Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.

[1] most recently https://news.ycombinator.com/item?id=44482876

Velorivox

I like to use em-dashes as well (option-shift-hyphen on my MacBook). I've seen people try to prompt LLMs not to use em-dashes, and I've been in forums where, as soon as you type an em-dash, the submit button is blocked and you're told not to use AI.

Here's my take: these forums will drive good writers away, or at least discourage them, leaving the discourse the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.

furyofantares

Maybe I'm misunderstanding - but I don't think LLMs say cow-orkers. Or is that what you mean?

Buttons840

Use two dashes instead of an actual em dash. ChatGPT, at least, cannot do the same--it just can't.

scarface_74

How is that a "giveaway"? The search turns up results from 7 years ago, before LLMs were a thing. More than likely it's autocorrect going astray. I can't imagine an LLM making that mistake.

lupusreal

Giveaway for what, old farts? That link contains a comment citing the Jargon File, which in turn says that the term is an old Usenet meme.

moomoo11

Why? AI is a tool. Are their messages incorrect or something? If not, who cares? They're being efficient and thus more productive.

Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…

I really hope people like this, with their holier-than-thou attitude, get filtered out. Fast.

People who don’t adapt to use new tools are some of the worst people to work around.

distantprovince

> If it’s slop or they have incorrect information in the message, then my bad, stop reading here.

"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.

moomoo11

That’s on them, I said what I wanted to.

Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.

pyman

Didn't our parents go through the same thing when email came out?

My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." He'd reply, "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."

Change is inevitable. Most people just won't like it.

A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. And right now, people are unhappy about the same two things I described above: messages that don't feel personal, and egos bruised by non-native speakers who suddenly write perfectly.

Neither of those will matter in two or three years.

aidos

I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.

drweevil

That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.

pyman

Initially, it had the same effect on people, until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.

j45

Some do put their own words into the LLM and have it clean them up.

The result stays much closer to how they actually write.

moomoo11

The prompt is theirs.

unyttigfjelltol

Which words, exactly, are "yours"? Working with an LLM is like having a copywriter available 24/7 who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior-varsity-level LLM skill.

distantprovince

I can see the similarity, yes! Although I do feel like the distance between a handwritten letter and an email is shorter than between an email and an LLM-generated email. There's some line that got crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, and you can easily save it, copy it, or attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! An LLM does not provide any benefit for the reader, though; it just wastes their resources on yapping that no human cared to write.

kenanblair

Same thing with photography and painting. These opinionated pieces set up a false dichotomy that propagates into argument, when what we really have is a tunable dial rather than a switch: we can increase or decrease our consideration, time, and focus along a spectrum instead of treating it as on or off.

I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even the postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I'll take a glance at LLM output, but reading intentional thought (especially in a letter) is how I infer things about the sender as a person through their content. So if you want to send me a snapshot or a fact, I'm fine with LLM output, but if you're painting me a message, your actionable brushstrokes are more telling than the photo itself.

conartist6

Just be a robot. Sell your voice to the AI overlords. Sell your ears and eyes. Reality was the scam; choose the Matrix. I choose the Matrix!

threatofrain

I mean, that's fine, but the right response isn't all this moral negotiation; it's just to point out that it's not hard to have Siri respond to things.

So have your Siri talk to my Cortana and we'll work things out.

Is this a colder world, or just old people not understanding the future?

conartist6

It's a demonstration by absurdity that that is not the future. You're describing the collapse of all value.

j45

It's less about change and more about quality vs. quantity, and both have their place.

vouaobrasil

A lot of the reason why I even ask other people is not to get a simple technical answer but to connect, understand another person's unexpected thoughts, and maybe forge a collaboration, in addition to getting an answer, of course. Real people come up with so many side paths and thoughts, whereas AI feels lifeless and drab.

To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.

gharper

It’s the conversational equivalent of “Let me google that for you”.

ghjnut

It is, which I'd argue has a time and a place. Maybe it's specific to how I cut my teeth in the industry, but as a programmer, whenever I had to ask a question of, e.g., the ops team, I'd make sure it was clear I'd made an effort to figure out my problem: here's how I understand the issue, here's what I tried, yadda yadda.

Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, I'll verify the response is in sync with my thoughts, and shoot it to them.

It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.

Arainach

Instead of spending this time, it is faster, simpler, and more effective to phrase these questions in the form "have you checked the docs and what did they say?"

jddj

It's the conversational equivalent of an amplification attack

accrual

I remember reading about someone using AI to turn a simple summary like "task XYZ completed with updates ABC" into a few paragraphs of email. The recipient then fed the reply into their AI to summarize it back into the original points. Truly, a compression/expansion machine.

MattGaiser

I think the issue is that about half the conversations in my life really shouldn't happen. They should have Googled it or asked an AI about it, as that is how I would solve the same problem.

It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.

marliechiller

> "For the longest time, writing was more expensive than reading"

Such a great point, and one which I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. Most obvious for me is how much more time I'm spending on code reviews. It's massively increased the importance of making the PR as digestible as possible for the reviewer, since author and reviewer are now much closer to an equal understanding of the changes than if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs. writing.

lxgr

Yes, just like painting a picture used to be extremely time-consuming compared to looking at a scene. Today, these take roughly the same effort.

Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.

That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.

hks0

> "I vibe-coded this pull request in just 15 minutes. Please review" > > Well, why don't you review it first?

My current day-to-day problem is that the PRs don't come with that disclaimer; the authors won't even admit it if asked directly. Yet I know my comments on the PR will be fed to Cursor so it makes more crappy edits, and in 10 minutes I'll get an entirely different PR to review from scratch, without the main concern even being addressed. I wish I could at least talk to the AI directly.

(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).

lukevp

Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and that if they contribute buggy or low-quality code, it's their responsibility, not the AI's, and ultimately their job on the line.

Another perspective I’ve found to resonate with people is to remind them — if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit to solving the business problem - what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.

This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.

drewbug01

> Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and that if they contribute buggy or low-quality code, it's their responsibility, not the AI's, and ultimately their job on the line.

I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?

For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.

distantprovince

100%. Real life is much more grim. I can only hope we'll somehow figure it out.

I haven't personally been in this position, but when I think about it, looping all your reviews through Cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?

lxgr

Trust is earned in drops and lost in buckets. If somebody asks for my time to review slop, especially without a disclaimer, I'll simply not be reviewing their pull requests going forward.

craftkiller

Show their manager?

gigatree

Someone telling you about a conversation they had with ChatGPT is the new telling someone about your dream last night (which sucks because I’ve had a lot of conversations I wanna share lol).

accrual

I think it's different to talk about a conversation with AI versus just passing the AI output to someone directly.

The former is like "hey, I had this experience, here's what it was about, what I learned and how it affected me", which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you to reflect and/or act on it".

For example I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.

kelseyfrog

This is the same sentiment I have.

It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.

If anything, they share the same hallucinatory quality; i.e., hallucinations don't have essential content, and content is kind of the point of communication.

toast0

Eh. It's more like I asked my drunk uncle, and he sounded really confident when he told me X.

dlevine

If someone uses AI to generate an output, that should be stated clearly.

That is not an excuse for it being poorly done or unvetted (which I think is the crux of the point), but it’s important to state any sources used.

If i don’t want to receive AI generated content, i can use the attribution to filter it out.

MrGilbert

It gets interesting once you start a discussion about a topic with someone who had ChatGPT do all the work. They often don't have the same in-depth understanding of what is written there as someone who wrote it themselves. Which may not come as a surprise, but yet, here we are. It's these kinds of discussions I find exhausting, because they show no honesty and no interest on the part of the person I'm interacting with. I usually end these conversations quickly.

conartist6

AI doesn't leave behind the people who don't use it, it leaves behind the people who do. Roko's Reverse Basilisk?

MrGilbert

I had never heard of Roko's Basilisk before, and now I've entered a disturbing rabbit hole. People's minds are... something.

I mean, it's basically cheating. I get a task, and instead of working my way through it, which might be tedious, I take the shorter route and receive instant gratification. I can understand how that causes some kind of rush of endorphins, much like eating a bar of chocolate will. So, yeah - I would agree, although I do not have any studies that support the hypothesis.

smithbits

Yes. I just had a bad experience with an online shop. I got the thing I ordered, but the interaction was bad, so I sent a note to their support email saying "I like your company, but I recently had this experience that felt icky, here's what happened", and their "AI Agent Bot" replied with a whole lot of platitudes and "Since you've indicated no action is needed and your order has been placed, we will be closing this ticket." I'm all for LLMs helping people write better emails, but using them to auto-close support tickets is rude.

oldge

Seems like a strongly coupled set of events that leaks their internal culture. “Customers are not worth the effort”.

varjag

I recently had a non-technical person contest my opinion on a subtle technical issue with ChatGPT screenshots (free tier o4) attached to their email. The LLM wasn't even wrong; it just had the answer wrapped in the customary platitudes to the user, and they were not equipped to understand the model's actual answer.

lvl155

While I understand this sentiment, some people simply suck at writing nice emails or have a major communication issue. It’s also not bad to run your important emails through multiple edits via AI.

GPerson

Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.

lxgr

Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.

lvl155

If you work in a big enough organization, they have AI sandboxes for things like this.

stefan_

Ha, did you see the outrage from people when they realized that sharing their deepest secrets and company information with ChatGPT was just another business record to OpenAI, totally fair game in any sort of civil-suit discovery? You would think some evil force had just smothered every little child's pet bunny.

Tell people there are 10,000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for-profit company's database under terms they never read.

Al-Khwarizmi

Or are non-native speakers. LLMs can be a godsend in that case.

adamtaylor_13

The article clearly supports this type of usage.

z3c0

Is it too much to ask them to learn? People can have poor communication habits and still write a thoughtful email.

Al-Khwarizmi

Maybe yes, it's too much?

I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write emails in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or maybe being too pushy, whether I should add some formality or it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?

And being non-native with a good English level is nothing compared to people who might have autism, etc.

z3c0

I'm a native English speaker who asks myself the same questions on most emails. You can use LLM outputs all you want, but if you're worried about the tone, LLM edits drive the tone to a level of generic that ranges from milquetoast, to patronizing, to outright condescending. I expect some will even begin to favor pushy emails, because at least it feels human.

yoyohello13

Seriously. If you can’t spend effort to communicate properly, why should I expend effort listening?

deadbabe

Then they shouldn’t be in jobs or positions where good communication skills and writing nice emails are important.

scarface_74

I work with a lot of people who are in Spanish speaking countries who have English as a second language. I would much rather read their own words with grammatical errors than perfect AI slop.

Hell, I would rather they just write their reply in Spanish, quickly and without the struggle of translating, and let me use my own B1-level Spanish comprehension, than read AI-generated slop.

KronisLV

> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

I think it all goes to crap when there is some economic incentive: e.g. blogspam that is profitable thanks to ads and whoever stumbles upon it, combined with the ability to generate large amounts of coherent-sounding crap quickly.

I have seen quite a few sites like that on the first pages of both Google and DuckDuckGo, which feels almost offensive. At the same time, posts that promise something and then don't deliver are similarly bad, AI-generated or not.

For example, recently I needed to look up how vLLM compares with Ollama (yes, for running the very same abominable intelligence models, albeit for more subjectively useful reasons), because Qwen3-30B-A3B and Devstral-24B both run pretty badly on Nvidia L4 cards with Ollama, which feels disappointing given the price tag of those cards and the relatively small sizes of those models.

Yet pretty much all of the comparisons I found just regurgitated high level overviews of the technologies, like 5-10 sites that felt almost identical and could have been copy pasted from one another. Not a single one of those had a table of various models and their tokens/s on a given bit of hardware, for both Ollama and vLLM.
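For what it's worth, the measurement I was hoping to find isn't hard to sketch yourself: both Ollama and vLLM can expose an OpenAI-compatible HTTP endpoint, so a few lines of Python can time a completion and derive tokens/s. A rough sketch, assuming default local ports, a placeholder model name, and server versions recent enough to return a usage block in the response:

    import time
    import requests

    # Rough benchmark sketch: time one completion against each server and
    # derive tokens/s. Ports are the defaults; the model name is a placeholder.
    ENDPOINTS = {
        "ollama": "http://localhost:11434/v1/chat/completions",
        "vllm": "http://localhost:8000/v1/chat/completions",
    }
    payload = {
        "model": "qwen3-30b-a3b",  # placeholder; use the tag your server knows
        "messages": [{"role": "user", "content": "Explain TCP slow start."}],
        "max_tokens": 512,
    }

    for name, url in ENDPOINTS.items():
        start = time.monotonic()
        resp = requests.post(url, json=payload, timeout=600)
        elapsed = time.monotonic() - start
        # Both servers report OpenAI-style usage counts; fall back to 0 if absent.
        tokens = resp.json().get("usage", {}).get("completion_tokens", 0)
        print(f"{name}: {tokens} tokens in {elapsed:.1f}s ({tokens / elapsed:.1f} tok/s)")

Run that a few times per model on the hardware in question and you'd have the table none of those copy-pasted comparisons bothered to produce.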

Back in the day, when nerds got passionate about Apache2 vs Nginx, you'd see comparisons with stats and graphs, and even though I wouldn't take all of those at face value (since with Apache2 you should turn off .htaccess and tweak the MPM settings for more reasonable performance), at least there would sometimes be a Git repo.