
Time to Act on the Risk of Efficient Personalized Text Generation

potato3732842

The risk is not that one cannot forge correspondence in the style of another.

The risk is that any one of us peasants can do it without having to have a bunch of other people in on it.

memhole

That’s exactly what the problem is.

I’ve done some content work using LLMs. Once I started thinking about how it’ll inevitably get coupled with ad networks, and how anybody can do this stuff, it made me go, “this isn’t good.”

On the bright side, it might push us back to paper or other means of exchanging info. The cost should be prohibitive enough to raise the quality of content. That’s very hypothetical, though; mailers are already a direct contradiction.

PaulHoule

My fear is that it will be used for scams:

https://archive.ph/uMRXa

Ordinary people have trouble seducing other people because their own self keeps them from delivering perfect mirroring (e.g., they are uncomfortable adapting to another person's emotional demands, or aspects of their self that are unappealing to the other person show through). Sociopaths and people with narcissistic personality disorder do better than most people precisely because their self is less developed.

An A.I. has no self, so it has no limits.

deadbabe

Imagine being catfished for years, and in the end you discover that not only was the person who catfished you not who they said they were, they weren’t even human.

nullc

"Doesn't matter, had cybersex"

yorwba

Do sociopaths and people with narcissistic personality disorder do better at seduction? How would we know? Would a double-blind experiment that sets up blind dates between sociopaths and average people and has the dates rate their seductiveness even be ethical, if sociopaths really are dangerously skilled at it?

memhole

I'm not sure about seduction. AFAIK, one of the defining traits of those disorders is being very adept at manipulation.

Jimmc414

The proliferation of harmful AI capabilities has likely already occurred. It's naive not to accept this reality when tackling the problem. A more realistic approach would be to focus on building more robust systems for detection, attribution, and harm mitigation.

The paper makes some good points: it doesn't take a lot of data to convincingly emulate a writing style (~75 emails), and there is a significant gap in legislation, as most US "deepfake" laws explicitly exclude text and focus heavily on image/video/audio.
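
To make the "~75 emails" point concrete, here's a rough sketch of few-shot style mimicry using the OpenAI Python SDK. The model name, sample emails, and task are all made-up placeholders, not anything from the paper:

    # Sketch: few-shot style mimicry via a chat-completion API.
    # Model name and sample emails are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # In a real attack these would be the target's genuine emails;
    # per the paper, ~75 suffice for a convincing imitation.
    samples = [
        "Hey all -- quick update before tomorrow's standup ...",
        "Thanks for the intro! Looping in Dana from our side ...",
    ]

    prompt = (
        "Here are examples of how one person writes:\n\n"
        + "\n---\n".join(samples)
        + "\n\nWrite a short email in exactly this style, "
        "rescheduling a meeting to Friday."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)

No fine-tuning and no special access required; pasting a few dozen genuine samples into the prompt is the whole technique, which is why detection and attribution matter more than prevention.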

Hizonner

Most readers don't pay close enough attention to a person's style that you'd have to resort to an LLM to fake it.

Zak

I don't think the important risk of efficient personalized text generation is impersonation, as the article claims.

Humanity has already seen harmful effects from social media algorithms that efficiently identify content a person can't turn away from even if they consciously want to. The prospect of being able to efficiently generate media that will be maximally persuasive to each individual viewer on any given issue is terrifying.

memhole

I was actually looking into this idea: using AI to select the content that would achieve the most engagement. Clickbait and rage bait certainly exist. I'm not entirely convinced that having optimized content matters so much as having it exist for people to see and getting it in front of as many people as possible. My own thoughts are definitely a little mixed. Video content might be a little different; I was only looking at text content.
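
To make that concrete, the selection loop I was looking at is essentially a multi-armed bandit over candidate posts. Here's a minimal epsilon-greedy sketch; every headline and click rate is a made-up placeholder:

    # Epsilon-greedy bandit that converges on whichever candidate post
    # earns the most clicks. All values here are illustrative assumptions.
    import random

    candidates = ["headline A", "headline B", "headline C"]
    shows = {c: 0 for c in candidates}
    clicks = {c: 0 for c in candidates}

    def pick(epsilon=0.1):
        # Explore occasionally; otherwise exploit the best observed CTR.
        if random.random() < epsilon:
            return random.choice(candidates)
        return max(candidates,
                   key=lambda c: clicks[c] / shows[c] if shows[c] else 0.0)

    def record(choice, clicked):
        shows[choice] += 1
        clicks[choice] += int(clicked)

    # Simulated audience in which headline B is the rage bait that wins.
    true_ctr = {"headline A": 0.02, "headline B": 0.08, "headline C": 0.03}
    for _ in range(10_000):
        c = pick()
        record(c, random.random() < true_ctr[c])

    print(max(candidates, key=lambda c: clicks[c] / max(shows[c], 1)))

A real system would presumably use a contextual bandit conditioned on the individual viewer, which is exactly the per-person persuasion the parent comment is worried about. But note the loop only ranks content that already exists and gets shown, which is why I suspect distribution matters at least as much as optimization.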