
You sent the message, but did you write it?

sepositus

I just had an interesting experience this morning with my oldest child. He wanted to write a letter to the Splatoon development team to show his appreciation for the game. He printed it out and brought it to me with a proud expression on his face. It took about five seconds to catch the AI smell.

We then proceeded to have a conversation about how some people might feel if an appreciation letter given to them was clearly written by AI. I had to explain how it feels sort of cold and impersonal, which undermines the effect he's hoping to have.

To be honest, though, it really got me thinking about what the future is going to look like. What do I know? Maybe people will just stop caring about the human touch. It seems like a massive loss to me, but I'm also getting older.

I let him send the letter anyways. Maybe in an ironic twist an AI will respond to it.

klabb3

> Maybe people will just stop caring about the human touch.

No they won’t. This has happened many times in the past already with live theatre -> movies, live music -> radio etc. They don’t replace, but break into different categories where the new thing is cheap and abundant. When a corporation writes you a shit letter with ”We miss you, Derek” all reasonable people know what they’re looking at.

Look, it’s about basic economics. It doesn’t matter how ”good” the generated song for someone’s birthday is. What matters is the time, money and effort. In some cases writing a prompt for one-time use is not bad. If you’re generating individual output at scale without any human attention, nobody will appreciate the ”gesture”.

What bothers me is the content farm for X shit-startups and tech-cos thinking they’re replacing humans without side effects. It’ll work just as well as those fake signatures by the CEO in junk mail: it’ll deceive only for a short time, and maybe older people who may be permanently screwed. It’ll just be yet another comms channel saturated with spam, entirely fungible with the heaps of all other spam. A classic race to the bottom.

fragmede

Let us "delve" into this topic. The problem is it's all just words, and school children have figured out they can just ask ChatGPT to use the diction expected of a 13 year old child, and have it rewrite their essay without any of the old tells, like the word "delve", leaving us all none the wiser. Meanwhile, the difference between community theater and a movie, or live music vs recorded is obvious to even the most casual observer. Thus, this time it's different because other than a few particular word choices, and a bit of structure, we're left guessing just how much ChatGPT was used. Was it zero? Was it used as a more advanced Google Search? did it write the whole thing? Was there any editing of ChatGPT's output?

What do you do when we can no longer tell the difference?

klabb3

This has also happened before. For instance, once audio recording equipment became good enough, you can’t obviously tell at a concert whether lip sync is being used, even though amplifiers existed before. Similar with autotune. The signs are hard to spot, and with a single interaction you just don’t really assume anything (like a superposition). With repeat interactions, you can start to paint a picture.

You are entirely right though that people will slip under the radar for a while. But it’ll only be a matter of time until a personal cold email means absolutely nothing again, simply because the volume of them will be insane.

jraph

> Maybe people will just stop caring about the human touch

I doubt it. We human beings seem intrinsically motivated and enthusiastic about human connections. I believe we are wired like this. I know things change, but I would need some strong evidence before even playing with the idea that we'll stop caring about the human touch.

Now, as much as I hate AI, that doesn't necessarily mean AI-free. Or even handwritten. It just needs to be some human touch. I would enjoy a handwritten letter but wouldn't mind an email at all. But maybe someone else would find it lazy and tasteless, just as I would find an AI-generated text lazy and tasteless.

Maybe the prompt you can guess the sender used for their AI-generated text can already be perceived as some human touch. Maybe there is a threshold, though.

Now, could it be that your child wanted to impress you with a perfectly written letter, or even with their mastery of AI prompting?

Anyway, good anecdote, good perspective, good for you to have had the conversation and let them proceed anyway. Thanks for sharing.

bathtub365

My concern is that people making these technologies do not come across as valuing human connection as they seek to replace it with AI while profiting at the same time. Whether it’s what they really believe or a lie to achieve their financial goals doesn’t really change the outcome.

sepositus

> I believe we are wired like this

Absolutely, and I think because of this we'll never see the desire go away completely. However, I'm imagining some dystopian future where human touch is so rare that people _forget_ how much it means to them. It's like scrolling through the endless slop of Netflix and then coming to some rare gem of a film where you're reminded what genuine art is.

krick

I don't think that people forget, but it definitely gets normalized. It's disgusting, but many people obviously embrace it, and since ordering ChatGPT to do some lame stuff for them is by design orders of magnitude faster than actually creating something, the internet is getting filled up with lame stuff every passing minute now.

But it's not like this only happens because of LLMs. If you worked in corporate culture you most definitely received some automated HR emails congratulating you on spending half of your life at the workplace, or something like that. I always felt almost insulted by these; they are literally just spam at best. It's kinda mocking: these are generic depersonalized texts that no one actually wrote for you, yet they always speak about "gratitude", about you being "valued" and such. In fact, it's the only thing they are meant to express: you being valued. It's so cynical.

But, I mean, it's just me. Ostensibly, these folks in the HR department do know their job? Maybe most people don't feel like vomiting when they get these emails? Maybe it brings them joy? I never stopped wondering about that. I cannot just ask my closest coworkers, because of course they feel the same as me. But maybe there are other ones? Another social bubble, where this thing is normal, and it is bigger than mine?

Anyway, everyone is kinda used to it. What I am trying to say is that the phenomenon is not entirely new, and LLMs don't change the essence of it. Even back when people sent paper mail to each other, I remember those pre-printed birthday/christmas cards, which are ok, because the entire point is that they are not automated and that you remembered to send one to someone, yet it was always considered a bit of poor taste not to add a sentence of your own by hand.

disambiguation

There's the tool and then there's how you use it. We're all still learning how to live with rapidly changing tech, but it sounds like your kid tried to pass off someone (something) else's work as their own. Cold and impersonal is a problem too, but this situation touches on ethical concepts like fraud and deception by omission - not to make a big deal out of a kid's fan letter :) but seems like an opportunity to teach a moral lesson too.

xandrius

Handwritten letters are still better than typed. That's why we still get authors to actually sign their book and not just put a print of it.

So, no, there is no evidence that AI will change stuff. We had canned responses and template answers for a long time but people still like talking to a real human being.

P.S. I think you should have told them to write a thank you letter themselves as a fun game to compare with the AI one and send that one instead.

perching_aix

Maybe this is a generational difference, but I really don't like handwritten anything. Something being handwritten doesn't evoke anything inside me - if anything, it only brings frustration, since having to decipher someone's handwriting (especially mine) can be no mean feat. There are also countless examples of cards and such with blatantly printed-on signatures, and there are signature plotting machines (autopens [0]) that further make automated signatures impossible to tell apart.

AI has already changed stuff. I have already seen several related examples of distasteful AI use in corporate settings. One example was management promising that feedback received during a townhall will be reviewed, only to then later proudly announce that they AI-summarized it. I'll readily admit that doing that is actually a very sensible use of AI, just maybe the messaging around it should have been a bit less out of touch. Another example was my coworker expressing his gratitude to the team, while simultaneously passing the milestone of producing more than 10 consecutive words of coherent English for the first time in his life. He was awfully proud of it too.

And to finish it off, talking to real human beings on the internet is increasingly miserable by the day. Without going too far off into the weeds, let me give you a practical, older example. I've participated in a Discord server of a FOSS project, specifically in their support channels, for a couple years - walked away a very different person, with great appreciation for service workers. I'm sure the people coming there loved being able to torment, I mean ask help from, real human beings. By the end, this feeling was very much not reciprocated. I was not alone in this either of course, and the mask would fall off of people increasingly often. Those very real human beings looking for help were not too happy either, especially when said masks fell off. So it was mostly just miserable for everyone involved. AI can substitute in such situations very handily, and everyone is honestly plain better off. Having to explain the same thing over and over to varyingly difficult people is not something a lot of people are cut out for, but the need is ever present and expanding, and AI has zero problems with filling those shoes. It can provide people with the assistance they need and deserve. It can provide even those with that help that do not need it, nor deserve it. Everyone's happier.

We've concocted a lot of inhuman(e) systems and social dynamics for ourselves over time. I have some skepticisms towards the future of AI myself, but it has a very legitimate chance of counteracting many of these dynamics, and restoring some much needed balance.

[0] https://en.wikipedia.org/wiki/Autopen

BrandoElFollito

I grew to appreciate AI patience.

When coding in a new environment, I like to go fast and break things - this is how I learn best (this is not a good way in general, but works well for me in my amateur dev).

I ask ChatGPT questions that would drive me crazy because they are a bit chaotic, a bit repetitive and give the impression of someone chaotic and slightly dumb (me, the asker, not the AI).

I worry that with time people may start to interact with other people the same way and that would be atrocious.

mulmen

I’m glad you let him send his message.

AI is the future whether you like it or not. Teaching him to use that tool effectively will serve him far better than shaming him for engaging the world in a way you find uncomfortable but is acceptable to society.

Consider whether you would prefer he write the letter by hand to give the script that literal human touch. If not, why is it ok for the computer to make the letters but not the words?

In this case the meaningful gesture is sending the message at all. He asked the AI to do a thing. That was his idea. AI just did the boring work of making it palatable to humans.

Much like driving and everything else automation takes away, writing is something most people are profoundly bad at. Nothing is lost when an AI generates a message a human requested.

AaronAPU

The meaningful gesture isn’t clicking a button and pressing send. The meaningful gesture is taking time out of your day where you are focused and authentically thinking about the person and expressing those positive thoughts as they come with your own words.

It is a very sad and cynical view to equate these very different things.

swat535

I'm a bit conflicted here... I think we're mixing up the tool with the _intent_ behind it.

To me, this feels less like outsourcing creativity and more like using a writing assistant to shape your thoughts. Kind of like how we all rely on spellcheck or Grammarly now without thinking twice. People were saying the same thing back then too, that tools were "diluting" writing.

I personally don't see the harm. Not everyone is a native English speaker.

metalman

AI is creating a now that is dull, productive, boring, predictable, profitable, and alluring in a smug snide gotcha kinda way. Which would be fine, except that it is doomed to be self-referential and will force universal adoption, rendering the whole thing a fancy spambot that consumes 25% of the world's energy budget. And the statement "like it or not" keeps some very ugly company in its associations. Clumsily written by someone with a phone using one finger, and zero reconsideration or evaluation of whatever it is I just wrote, hope you like it :)

ash_091

Last week I got an email from a manager about the number of free beverage taps in our office being reduced.

They'd clearly dumped the memo they got about the reduction into some AI with a prompt to write a "fun" announcement. The result was a mess of weird AI positivity and a fun "fact" which was so ludicrous that the manager can't have read it before sending.

I don't mind reading stuff that has been written with assistance from AI, but I definitely think it's concerning that people are apparently willing to send purely AI generated copy without at least reviewing for correctness and tone first.

pizzafeelsright

Tone policing? I'm fine with it although I too got an email from corporate about an event with the same type of fun energy. My impression changed of the event as it was no longer personal but mechanical.

There's always been some innate ability to recognize effort and experience. I don't know the word for it, but looking at a child's or an experienced artist's drawing, you just know if they put in minimal or extra effort.

techjamie

The funny thing about AI being used to write emails is that, in my opinion, writing emails is a terrible use for it. I've tried having it help me word things in emails because I tend to sound a bit stand-offish in writing, but getting the AI to help with that just tends to make me sound like some kind of pretentious HR manager, and nobody I'm emailing would believe I would write like that.

switch007

My company would have spun that like

"We are excited to announce we are supporting our family in their health kick journey. To support them, we have taken the difficult decision to reduce the number of beverages available. We remain fully committed to unlimited delicious tap water, free of charge!"

krick

That's kinda the point. As silly as what you wrote is, it's still a whole level above what ChatGPT can make up. The fact that it produces human-like text doesn't make it any good, but somewhere there is a manager (and I bet he is not the only one) who just uses it to generate some nonsense announcements and gets paid for his work. Maybe he'll even get a promotion because of how effective he is.

Terr_

> That’s when it dawned on me: we don’t have a vocabulary for this.

I'd like to highlight the words "counterfeiting" and "debasement", as vocabulary that could apply to the underlying cause of these interactions. To recycle an old comment [0]:

> Yeah, one of their most "effective" uses [of LLMs] is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight."

> Oh, sure, qualitatively speaking it's not new, people could have used form-letters, hired a ghostwriter, or simply sank time and effort into a good lie... but the quantitative change of "Bot, write something that appears heartfelt and clever" is huge.

> In some cases that's devastating--like trying to avert botting/sockpuppet operations online--and in others we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."

[0] https://news.ycombinator.com/item?id=41675602

AnimalMuppet

Debasement (in the currency sense). That's exactly what it is.

And then you get to Gresham's Law: "Bad money drives out good" (that is, drives it out of circulation)...

SoftTalker

I like chatshit - bullshit, but written by AI.

mcherm

Oh, now THAT one seems likely to actually catch on!

doctorhandshake

Ironically (and perhaps proving the author’s point), when reading I couldn’t help feeling like this was at least AI-assisted writing, if not pasted in directly.

jstanley

A big giveaway is that for most of the post the author uses hyphens instead of em-dashes, obviously because em-dashes are too difficult to type, but then uses em-dashes in a handful of places that sound exactly like the kind of thing ChatGPT would say.

Terr_

> A big giveaway is that for most of the post the author uses hyphens instead of em-dashes

I hear this "tip" a lot, and I question whether it's statistically-meaningful.

After spending several decades learning the right ways—like ALT+0151 on Windows—it seems deeply unfair that people are going to mischaracterize care and attention to detail as "fake".

jstanley

In this case I wasn't trying to say that em-dashes themselves are evidence of AI use. I was trying to say that mixing incorrect hyphen usage with correct em-dash usage is evidence of AI use.

But...

Using em-dashes is a signal. It's not a smoking gun, but text that uses em-dashes is more likely to be AI-generated than text that doesn't!

Similarly, text that consistently uses correct spelling and punctuation is more likely to be AI-generated than text that doesn't.

So - yeah - if you use em-dashes your writing looks more like AI wrote it.

But that’s not a bad thing—it means your writing has the same strengths: clarity, rhythm, and elegance. AI learned from the best, and so did you.

L-4

In this case the author mixes em dashes with hyphens surrounded by space. Both fine on their own, but it seems unlikely that someone with the attention to detail to use em dashes is going to be inconsistent here.

agentultra

Makes me wonder why people think I want to read what they send me if they haven’t even bothered to write it.

satisfice

This is a big problem, but we all know the solution: cease taking anyone’s writing seriously, unless they develop a reputation for natural writing (not using AI in their writing).

This is what we will all do. We all are spam filters now.

neom

I wonder if it's because my writing has always been so imperfect because of dyslexia/etc so I basically really couldn't care less, for me anyway the less message hunting I have to do the better, also if it means someone doesn't sit and hmm and haa all day over sending a 4 sentence email to their boss because they're having a bad imposter syndrome day, who cares. ALSO... what to do about the fact you can't unread the AI?? also.... proof is in the pudding. oh, and also I emailed dang the other day, I wanted to make a bunch of intersecting fine points so the email got kinda contrived, I gave it to chatgpt but instead of replacing my email, I sent my email and included the chatgpt share link for him, he thanked me for leaving it in my own voice.

A4ET8a8uTh0_v2

It is interesting. It is interesting in several different ways too:

- The timing is interesting, as Altman opened US branches of his 'prove humanness' project that hides the biometrics gathering effort

- The problem is interesting, because on HN alone, the weight of traffic from various AI bots seems to have become a known ( and reported ) issue

- The fact that some still need to be convinced of this need ( it is a real need, but as the first point notes, there are clear winners of some of the proposals out there ), resulting in articles like these

- Interesting second and third order impact on society mentioned ( and not mentioned ) in the article

Like I said. Interesting.

klabb3

It’s futile. The first thing people will do if the ”written by a human” crypto-signature takes off is to wire it up with LLMs. You can’t make any reasonable guarantees on authorship unless you change the entire hardware stack and tack on loads of DRM.

Even if that happens, and say Apple integrates sigs all the way down through their system UI keyboards, secure enclaves, and TPM, you think they’re going to conform to some shitcoin spec? Nah man, they’ll use their own.
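
To make the parent comment's point concrete, here is a minimal sketch, in Python with the cryptography package, of what a "written by this author" signature boils down to (the key and message are made up for illustration, not taken from any proposed spec). The signature only binds a key to a sequence of bytes; nothing in it distinguishes human-typed bytes from LLM-generated ones, which is why the scheme ends up needing to attest the whole hardware stack:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    author_key = Ed25519PrivateKey.generate()   # stand-in for a per-device identity key
    message = b"A heartfelt letter... which could just as easily be LLM output."
    signature = author_key.sign(message)        # binds this key to these exact bytes

    try:
        author_key.public_key().verify(signature, message)
        print("valid: this key signed these exact bytes")
    except InvalidSignature:
        print("invalid or tampered")
    # Nothing above says anything about who, or what, composed the bytes.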

pixl97

>You can’t make any reasonable guarantees on authorship unless you change the entire hardware stack and tack on loads of DRM.

Even then you can't trust it. Companies write DRM and tend to have actual humans run the place. If the government where these humans live decides to point guns at them and demand access, most humans are going to give up the key before they give up their life.

A4ET8a8uTh0_v2

Oddly, this is the weirdest thing about all the 'it's llms all the way down'. The problem starts with users trusting the provider to be dumb pipes. The moment they become smart pipes ( and one could argue we are there already ), all bets are off, because you have no comfort that even if you sent the message, it was not altered by an overzealous company bot.

Edit:

Us tomorrow: Your honor, my device clearly shows the timestamps and the allegedly offending message, but you will note the discrepancy in style, time and message that suggests that is not the case.

LLM judge: Guilty.

Edit2:

Amusingly, the problem has been a human problem all along.

throwaway173738

I love how Altman created the problem and is also selling a solution. “Getting you coming and going” as it were.

analog31

Even before the advent of AI, I already realized that most of what's written by people isn't worth reading. This is why I don't read most of the e-mails that I receive.

Before AI, it was hard for many people to write literate text. I was OK with that, if the text was worth reading. I don't need to be entertained, just informed.

The thing that gets me about AI is not that what it generates is unoriginal, but that, if it's trained on the bulk of human text, then what it generates is not worth reading.

michaeljx

I "prompt prong", i.e pass all the AI emails that I receive through an AI and ask it to write a response. At what point do we get the 2 AIs to email each other directly, without as bio-agents having to pretend that we wrote/read them?

Der_Einzige

Sorry, but any trick you think you have for detecting AI generated text is defeated by high temperature sampling (which works now with good samplers like min_p/top-n sigma) and the anti-slop sampler: https://github.com/sam-paech/antislop-sampler

You'll be able to detect someone running base ChatGPT or something - but even base ChatGPT with a temperature of 2 has a very, very different response style - and that's before you simply get creative with the system prompt.

And yes, it's trivial to get the model to not use em dashes, or the wrong kind of quotes, or any other tell you think you have against it.
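
For readers unfamiliar with min_p: it keeps only tokens whose probability is at least some fraction of the most likely token's, so cranking up the temperature flattens the distribution without letting in gibberish. A rough numpy sketch (parameter names and the toy logits are illustrative, not the linked sampler's API):

    import numpy as np

    def sample_min_p(logits, temperature=2.0, min_p=0.1, rng=None):
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature  # higher T flattens
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Drop tokens whose probability is below min_p times the top token's.
        probs = np.where(probs >= min_p * probs.max(), probs, 0.0)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Toy 5-token vocabulary.
    print(sample_min_p([4.0, 3.5, 3.0, 0.5, -2.0]))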

ahowardm

In the era of AI every time I have to read a report I ask myself: if the author probably didn’t spend his time writing this, why should I waste mine reading it?