
People are losing loved ones to AI-fueled spiritual fantasies

gngoo

I work on AI myself, building small and big systems, creating my own assistants and sidekicks, and seeing progress as well as rewards. I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I'll hit the right buttons, the right prompts, and what comes staring back at me will be something of my own creation that others see as a "fantasy" I can't steer away from.

Just imagine: you have this genie in a bottle that has all the right answers for you; it helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities, and whatnot. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very far-fetched. Even now I feel a very noticeable disconnect between discussing AI as a developer and as a user of polished products (e.g. ChatGPT, Cursor, etc.): as a user you are several leagues removed from (and lagging behind) understanding what is really possible here.

rnd0

I'm worried on a personal level that it's too easy to start relying on ChatGPT (specifically) for questions I could figure out for myself, as a time-saver while I'm doing something else.

The problem for me is: it sucks. It falls over in the most obvious ways, requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (especially for free), but in my experience we're NOT in the "all the right answers all of the time" stage yet.

I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma? Wait, why is she suddenly singing the praises of TurboTax (or whatever the paid advert is)?

What I'm trying to say is that by the time it is able to be the perfect answer, companion, and entertainment machine, other factors (annoyances, expense) will keep it from becoming terribly addictive.

rnd0

The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophantic personality changes they had to walk back, or is it something more intense?

I've used AI (not ChatGPT) for roleplay, and I've noticed that the models will often fixate on one idea or concept, then repeat it and build on it. That makes me wonder whether the person being lovebombed experienced something like that: the model decided it liked that content, so it just kept building on it.

codr7

Being surrounded by people who follow every nudge and agree with everything you say never leads anywhere worth going.

This is likely worse.

That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).

kayodelycaon

Kind of sounds like my grandparents watching cable news channels all day long.

stevage

Fascinating and terrifying.

The allegations that ChatGPT is not discarding memory as requested are particularly interesting; I wonder if anyone else has experienced this.

hyeonwho4

The default setting in ChatGPT is now to include previous conversations as context. I had disabled memories, but this new feature was enabled when I checked the settings.

MontagFTB

Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?

westurner

An LLM trained on all the science that existed before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.

lr4444lr

It's probably better for society if schizophrenic, paranoid, and psychotic people listen to voices that are managed by RLHF rather than the voices in their head.

zdragnar

It'd be great if it were trained on therapeutic resources; otherwise it just ends up enabling and amplifying the problem.

I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, and off it he became increasingly convinced that vampires were out to kill him. Friends, family, and social workers could help him get through episodes and back on the medicine before he became a danger to himself.

I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.

bigyabai

Even when sycophantic patterns emerge?

thrance

I think the last thing a delusional person needs is external confirmation, be it from a human or a sycophantic machine.

jsheard

If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.

delichon

An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.

You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.

When AI becomes better at politics than people, whatever agents control the AIs will control us. When they can make better memes, we've lost.

nullc

On what basis do you assume that that isn't exactly what "safety alignment" means, among other things?

bell-cot

Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.

alganet

You are a conspiracy theorist and a liar! /s

The problem is inside people. I've met lots of people who contributed to psychosis-inducing behavior. Most of them were not in a cult. They were regular folk who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.

Very simple answer.

Is OpenAI also doing it? Well, it was trained on people.

People need to get better. Kinder. Less combative, less jokey, less provocative.

We're not gonna get there. Ever. This problem precedes AI by decades.

The article is an old recipe for dealing with this kind of realization.

marcus_holmes

Does anyone remember the media stories from the mid-'90s about people who were obsessed with the internet and losing their families because they spent hours every day on the computer?

People gonna people. Journalists gonna journalist.

Havoc

>spiral starchild

>river walker

>spark bearer

OK, maybe we put fewer teen fiction novels in the training data...

I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. It's literally a tool that will hallucinate stuff and amplify whatever direction you take it in.

sien

Is this better or worse than a fortune teller?

It's something to think through.

derektank

Probably cheaper

To quote my favorite Smash Mouth song,

"Sister, why would I tell you my deepest, dark secrets? So you can take my diary and rip it all to pieces.

Just $6.95 for the very first minute
I think you won the lottery, that's my prediction."