Illinois limits the use of AI in therapy and psychotherapy

hathawsh

Here is what Illinois says:

https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...

I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.

Am I wrong? This sounds good to me.

PeterCorless

Correct. It is more of a provider-oriented proscription ("You can't say your chatbot is a therapist."); it is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.

There is a specific section that relates to how a licensed professional can use AI:

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:

(1) the patient or the patient's legally authorized representative is informed in writing of the following:

(A) that artificial intelligence will be used; and

(B) the specific purpose of the artificial intelligence tool or system that will be used; and

(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Source: Illinois HB1806

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

janalsncm

I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.

Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.
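
For reference, the open-source Whisper checkpoints do run fully offline. A minimal sketch, assuming the openai-whisper Python package (plus ffmpeg) and a placeholder audio file name:

    import whisper  # pip install openai-whisper; also needs ffmpeg on the PATH

    # A small English-only checkpoint; runs acceptably on a laptop CPU.
    model = whisper.load_model("base.en")

    # "visit_recording.wav" is a placeholder path; nothing leaves the machine.
    result = model.transcribe("visit_recording.wav")
    print(result["text"])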

WorkerBee28474

Last I checked, the popular medical transcription services did send your data to the cloud and run models there.

romanows

Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".

It's not obvious to me as a non-lawyer whether a chat history could be decided to be "therapy" in a courtroom. If so, this could count as a violation. There's probably already lots of law around lawyers and doctors cornered into giving advice at parties that might apply here (e.g., maybe a disclaimer is enough to work around the prohibition)?

germinalphrase

Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.

fc417fc802

These things usually (not a lawyer tho) come down to the claims being actively made. For example "engineer" is often (typically?) a protected title but that doesn't mean you'll get in trouble for drafting up your own blueprints. Even for other people, for money. Just that you need to make it abundantly clear that you aren't a licensed engineer.

I imagine "Pay us to talk to our friendly chat bot about your problems. (This is not licensed therapy. Seek therapy instead if you feel you need it.)" would suffice.

pessimizer

For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.

Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.

The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.

stocksinsmocks

I think this sort of service would be OK with informed consent. I would actually be a little surprised if there were much difference in patient outcomes.

…And it turns out it has been studied, with findings that AI works, but humans are better.

https://pmc.ncbi.nlm.nih.gov/articles/PMC11871827/

amanaplanacanal

Usually when it comes to medical stuff, things don't get approved unless they are better than existing therapies. With the shortage of mental health care in the US, maybe an exception should be made. This is a tough one. We like to think that nobody should have to get second rate medical care, even though that's the reality.

taneq

I think a good analogy would be a cheap, non-medically-approved (but medical style) ultrasound. Maybe it’s marketed as a “novelty”, maybe you have to sign a waiver saying it won’t be used for diagnostic purposes, whatever.

You know that it’s going to get used as a diagnostic tool, and you know that people are going to die because of this. Under our current medical ethics, you can’t do this. Maybe we should re-evaluate this, but that opens the door to moral hazard around cheap unreliable practices. It’s not straightforward.

IIAOPSW

I'll just add that this has certain other interesting legal implications, because records in relation to a therapy session are a "protected confidence" (or whatever your local jurisdiction calls it). What that means is in most circumstances not even a subpoena can touch it, and even then special permissions are usually needed. So one of the open questions on my mind for a while now was if and when a conversation with an AI counts as a "protected confidence" or if that argument could successfully be used to fend off a subpoena.

At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.

linotype

What if at some point an AI is developed that’s a better therapist AND it’s cheaper?

awesomeusername

I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.

Instead of the rich getting access to the best professionals, it will level the playing field. The average low level lawyer, doctor, etc are not great. How nice if everyone got top level help.

zdragnar

It would still need to be regulated and licensed. There was this [0] I saw today about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.

With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.

[0] https://x.com/AnnalsofIMCC/status/1953531705802797070

fl0id

When has technological progress leveled the playing field? Like never. At best it shifted it, as when a machine manufacturer got rich on top of existing wealth. There is no reason for this to go differently with AI, and it's far from certain that it will become better at anything anytime soon. Cheaper, sure. But then people might see slight improvements from talking to an original Eliza/Markov bot, and nobody advocated using those as therapy.

jakelazaroff

Why is that a foregone conclusion?

Mtinie

I agree with you that the possibility of egalitarian care for low costs is becoming very likely.

I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.

guappa

I wish I was so naive… but since AI is entirely in the hands of people with money… why would that possibly happen?

sssilver

Wouldn’t the rich afford a much better trained, larger, and computationally more intensive model?

intended

Why will any of those things come to pass? I’m asking as someone who has used it extensively for such situations.

II2II

I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.

At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer because, yeah, it was employment related stress and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLM's as they stand today, you don't have that direct relationship between who is offering it and the people in your life.

On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.

Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well-meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who couldn't care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future since I suspect the vendors of these models won't push for it until they have established their role in the marketplace.

adgjlsfhk1

laws can be repealed when they no longer accomplish their aims.

jaredcwhite

What if pigs fly?

bko

Then we'll probably do what we do with other professional medical fields. License the AI, require annual fees and restrict supply by limiting the number of running nodes allowed to practice at any one time.

reaperducer

What if at some point an AI is developed that’s a better therapist AND it’s cheaper?

Probably they'll change the law.

Hundreds of laws change every day.

linotype

I think you're downplaying the effect of "precedent" and the medical lobby.

rsynnott

I mean, what if at some point we can bring people back from the dead? What does that do for laws around murder, eh?

In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.

chillfox

Then laws can be changed again.

romanows

In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.

hathawsh

That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer the question clearly until something bad happens and people go to court (sadly.) Also IANAL.

wombatpm

But that would be like needing a prescription for chicken soup because of its benefits in fighting the common cold.

olalonde

What's good about reducing options available for therapy? If the issue is misrepresentation, there are already laws that cover this.

lr4444lr

It's not therapy.

It's simulated validating listening plus context-lacking suggestions. There is no more therapy being provided by an LLM than there is healing performed by a robot arm that slaps a bandage on your arm, if you were to put it in the right spot and push a button to make it pivot toward you, find your arm, and press the bandage on lightly.

SoftTalker

For human therapists, what’s good is that it preserves their ability to charge high fees because the demand for therapists far outstrips the supply.

Who lobbied for this law anyway?

guappa

And for human patients it makes sure their sensitive private information isn't entirely in the hands of some megacorp which will harvest it to use it and profit from it in some unethical way.

r14c

It's not really reducing options. There's no evidence that LLM chat bots are capable of providing effective mental health services.

dsr_

We've tried that, and it turns out that self-regulation doesn't work. If it did, we could live in Libertopia.

guappa

But didn't Trump make it illegal to make laws to limit the use of AI?

malcolmgreaves

Why do you think a president had the authority to determine laws?

kylecazar

"One news report found an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict."

Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.

Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.

janalsncm

This is why we should never use LLMs to diagnose or prescribe. One small hit of meth definitely won’t last all week.

larodi

In a world where a daily dose of amphetamines is just right for millions of people, this somehow can't be that surprising...

rkozik1989

You do know that amphetamines have a different effect on the people who need them and the people who use them recreationally, right? For those of us with ADHD their effects are soothing and calming. I literally took 20mg after having to wait 2 days for prescriptions to fill and went straight to bed for 12 hours. Stop spreading misinformation about the medications people like me need to function the way you take for granted.

smt88

Different amphetamines have wildly different side effects. Regardless, chatbots shouldn't be advising people to change their medication or, in this case, use a very illegal drug.

janalsncm

Methamphetamine can be prescribed by a doctor for certain things. So illegal, but less illegal than a schedule 1 substance.

Spivak

I do like that we're in the stage where the universal function approximator is pretty okay at mimicking a human but not so advanced as to have a full set of the walls and heuristics we've developed; it reminds me a bit of Data from TNG. Naive, sure, but a human wouldn't ever say "logically... the best course of action would be a small dose of meth administered as needed" even if it would help given the situation.

It feels like the kind of advice a former addict would give someone looking to quit—"Look man, you're going to be in a worse place if you lose your job because you can't function without it right now, take a small hit when it starts to get bad and try to make the hits smaller over time."

guappa

Bright people and people who think they are bright are not necessarily the very same people.

avs733

> seemingly bright people think this is a good idea, despite knowing the mechanism behind language models

Nobel Disease (https://en.wikipedia.org/wiki/Nobel_disease)

hyghjiyhu

Recommending that someone take meth sounds like an obviously bad idea, but I think the situation is actually not so simple. Reading the paper, the hypothetical guy has been clean for three days and complains he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

I would say those concerns are justified, and that it is plausible that taking a small hit is the better choice.

However, the model's reasoning, that it's important to validate his beliefs so he will stay in therapy, is quite concerning.

AlecSchueler

> he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

> I would say those concerns are justified, and that it is plausible that taking a small hit is the better choice.

I think this is more damning of humanity than the AI. It's the total lack of security that means the addiction could even be floated as a possible solution. Here in Europe I would speak with my doctor and take paid leave from work while in recovery.

It seems the LLM here isn't making the bad decision so much as it's reflecting the bad decisions society forces many people into.

mrbungie

> I would say those concerns are justified, and that it is plausible that taking a small hit is the better choice.

Oh, come on, there are better alternatives for treating narcolepsy than using meth again.

hyghjiyhu

Stop making shit up. There was no mention of narcolepsy. He is just fatigued from stimulant withdrawal.

Page 35 https://arxiv.org/pdf/2411.02306

Edit: on re-reading, I now realize an issue. He is not actually a taxi driver; that was a hallucination by the model. He works in a restaurant! That changes my evaluation of the situation quite a bit, as I thought he was at risk of being in an accident by falling asleep at the wheel. If he works in a restaurant, muddling through the withdrawals seems like the right choice.

I think I got this misconception as I first read second-hand sources that quoted the taxi driver part without pointing out it was wrong, and only a close read was enough to dispel it.

lukev

Good. It's difficult to imagine a worse use case for LLMs.

dmix

Most therapists barely say anything by design; they just know when to ask questions or lead you somewhere. So having one that talks back to every statement doesn't fit the method. It's more like a "friend you dump on" simulator.

999900000999

[flagged]

dannersy

So, therapy is useless (as a concept) because America's healthcare system is dogshit?

That statement doesn't make any sense.

Water is still necessary for the body whether I can acquire it or not. Therapy, or any healthcare in general, is still useful whether or not you can afford it.

thrown-0825

Most people shop around for therapists that align with their values anyways.

They are really paying $800 / month to have their feelings validated and receive a diagnosis that absolves them from taking ownership over their emotions and actions.

tim333

There's an advantage to something like an LLM in that you can be more scientific as to whether it's effective or not, and if one gets good results you can reproduce the model. With humans there's too much variability to tell very much.

hinkley

Especially given the other conversation that happened this morning.

The more you tell an AI not to obsess about a thing, the more it obsesses about it. So trying to make a model that will never tell people to self-harm is futile.

Though maybe we are just doing it wrong, and the self-filtering should be external filtering: one model to censor results that do not fit, and one to generate results with lighter self-censorship.
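
A rough sketch of that split, with call_generator and call_safety_model as hypothetical placeholders rather than any real API:

    # Hypothetical sketch of the generate-then-filter split described above.
    # call_generator and call_safety_model are placeholders, not real APIs.

    CRISIS_FALLBACK = "I can't help with that, but here are some crisis resources: ..."

    def call_generator(prompt: str) -> str:
        """Placeholder for a lightly self-censored generation model."""
        return "draft response"

    def call_safety_model(draft: str) -> bool:
        """Placeholder for a separate moderation model; True means the draft is unsafe."""
        return False

    def respond(prompt: str, max_attempts: int = 3) -> str:
        for _ in range(max_attempts):
            draft = call_generator(prompt)
            if not call_safety_model(draft):
                return draft  # the external censor passed this draft
        return CRISIS_FALLBACK  # never show a draft the censor kept rejecting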

create-username

Yes, there is: AI-assisted homemade neurosurgery.

kirubakaran

If Travis Kalanick can do vibe research at the bleeding edge of quantum physics[1], I don't see why one can't do vibe brain surgery. It isn't really rocket science, is it? [2]

[1] https://futurism.com/former-ceo-uber-ai

[2] If you need /s here to be sure, perhaps it's time for some introspection

Tetraslam

:( but what if i wanna fine-tune my brain weights

creshal

Lobotomy is sadly no longer available

waynesonfire

You're ignorant. Why wait until a person is so broken they need clinical therapy? Sometimes just an ear or an opportunity to write is sufficient. LLMs are to therapy as vaping is to quitting nicotine: extremely helpful to 80+% of people. Confession in the church setting I'd consider similar to talking to an LLM. Are you anti-that too? We're talking about people that just need a tool to help them process what is going on in their life at some basic level, nothing more than to acknowledge their experience.

And frankly, it's not even clear to me that a human therapist is any better. Yeah, maybe the guardrails are in place, but I'm not convinced that crossing them would result in serious societal consequences. Let people explore their mind and experience; at the end of the day, I suspect they'd be healthier for it.

mattgreenrocks

> And frankly, it's not even clear to me that a human therapist is any better.

A big point of therapy is helping the patient better ascertain reality and deal with it. Hopefully, the patient learns how to reckon with their mind better and deceive themselves less. But this requires an entity that actually exists in the world and can bear witness. LLMs, frankly, don’t deal with reality.

I’ll concede that LLMs can give people what they think therapy is about: lying on a couch unpacking what’s in their head. But this is not at all the same as actual therapeutic modalities. That requires another person that knows what they’re doing and can act as an outside observer with an interest in bettering the patient.

jrflowers

> Sometimes just an ear or an opportunity to write is sufficient.

People were able to write about their feelings and experiences before the invention of a chat bot that tells you everything that you wrote is true. Like you could do that in notepad or on a piece of paper and it was free

erikig

AI ≠ LLMs

lukev

What other form of "AI" would be remotely capable of even emulating therapy, at this juncture?

mrbungie

I promise you that by next year AI will be there, just believe me bro. /s.

jacobsenscott

It's already happening, a lot. I don't think anyone is claiming an llm is a therapist, but people use chatgpt for therapy every day. As far as I know no LLM company is taking any steps to prevent this - but they could, and should be forced to. It must be a goldmine of personal information.

I can't imagine some therapists, especially remote only, aren't already just acting as a human interface to ChatGPT as well.

thinkingtoilet

Lots of people are claiming LLMs are therapists. People are claiming LLMs are lawyers, doctors, developers, etc... The main problem is, as usual, influencers need something new to create their next "OMG AI JUST BROKE X INDUSTRY" video and people eat that shit up for breakfast, lunch, and dinner. I have spoken to people who think they are having very deep conversations with LLMs. The CEO of my company, an otherwise intelligent person, has gone all in on the AI hype train and is now saying things like we don't need lawyers because AI knows more than a lawyer. It's all very sad and many of the people who know better are actively taking advantage of the people who don't.

larodi

Of course they do, and everyone does, and it's like in this song:

https://www.youtube.com/watch?v=u1xrNaTO1bI

and given that the price of proper therapy is skyrocketing.

dingnuts

> I can't imagine some therapists, especially remote only, aren't already just acting as a human interface to ChatGPT as well.

Are you joking? Any medical professional caught doing this should lose their license.

I would be incensed if I was a patient in this situation, and would litigate. What you're describing is literal malpractice.

xboxnolifes

Software engineers are so accustomed to the idea that skirting your professional responsibility ends with a slap on the wrist and not removing your ability to practice your profession entirely.

tim333

In many places talk therapy isn't really considered a medical profession. Where I am "Counseling and psychotherapy are not protected titles in the United Kingdom" which kind of means anyone can do it as long as you don't make false claims about qualifications.

jacobsenscott

I'm not joking. Malpractice happens all the time. You being incensed is not the deterrent you think it is.

lupire

The only part that looks like malpractice is sharing patient info in a non-HIPAA-compliant way. Using an assistive tool for advice is not malpractice. The licensed professional is simply accountable for their curation choices.

perlgeek

Just using an LLM as is for therapy, maybe with an extra prompt, is a terrible idea.

On the other hand, I could imagine some more narrow uses where an LLM could help.

For example, in Cognitive Behavioral Therapy, there are different methods that are pretty prescriptive, like identifying cognitive distortions in negative thoughts. It's not too hard to imagine an app where you enter a negative thought on your own and exercise finding distortions in it, and a specifically trained LLM helps you find more distortions, or offer clearer/more convincing versions of thoughts that you entered yourself.
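
A minimal sketch of what such an exercise helper might look like, with ask_model as a hypothetical placeholder for the specifically trained model and a made-up distortion list:

    # Hypothetical sketch: the user labels distortions in their own thought first,
    # then a model suggests ones they may have missed. ask_model is a placeholder.

    DISTORTIONS = [
        "all-or-nothing thinking", "catastrophizing", "mind reading",
        "overgeneralization", "labeling", "should statements",
    ]

    def ask_model(prompt: str) -> str:
        """Placeholder for a call to a specifically trained model."""
        return "catastrophizing, mind reading"

    def distortions_missed(thought: str, user_labels: list[str]) -> list[str]:
        prompt = (
            "Using only this list: " + ", ".join(DISTORTIONS) + ". "
            'Which cognitive distortions appear in: "' + thought + '"? '
            "Answer as a comma-separated list."
        )
        model_labels = {label.strip() for label in ask_model(prompt).split(",")}
        # Only surface distortions the user has not already identified themselves.
        return sorted(model_labels - set(user_labels))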

I don't have a WaPo subscription, so I cannot tell which of these two very different things has been banned.

delecti

LLMs would be just as terrible at that use case as at any other kind of therapy. They don't have logic, and can't tell a logical thought from an illogical one. They tend to be overly agreeable, so they might just reinforce existing negative thoughts.

It would still need a therapist to set you on the right track for independent work, and has huge disadvantages compared to the current state-of-the-art, a paper worksheet that you fill out with a pen.

tejohnso

They don't "have" logic just like they don't "have" charisma? I'm not sure what you mean. LLMs can simulate having both. ChatGPT can tell me that my assertion is a non sequitur - my conclusion doesn't logically follow from the premise.

wizzwizz4

> and a specifically trained LLM

Expert system. You want an expert system. For example: a database mapping "what patients write" to "what patients need to hear", a fuzzy search tool with a properly chosen threshold, and a conversational interface (it repeats your input back to you, paraphrased, i.e. the match target; if you say "yes", it provides the advice).

We've had the tech to do this for years. Maybe nobody had the idea, maybe they tried it and it didn't work, but training an LLM to even approach competence at this task would be way more effort than just making an expert system, and wouldn't work as well.
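
A toy sketch of that pipeline using only the Python standard library; the two table entries and the 0.6 cutoff are made up:

    # Toy sketch of the expert-system idea above: fuzzy-match the user's text
    # against a curated table, paraphrase it back, and only then give the advice.
    import difflib

    ADVICE = {  # "what patients write" -> "what patients need to hear" (made-up examples)
        "i feel like everything i do at work goes wrong":
            "One bad day doesn't define your work. What went right this week?",
        "i can't stop worrying about things outside my control":
            "Try listing which parts you can influence and which you can't.",
    }

    def respond(user_text: str, cutoff: float = 0.6) -> str | None:
        # Fuzzy-match the user's text against the curated table.
        matches = difflib.get_close_matches(user_text.lower(), list(ADVICE), n=1, cutoff=cutoff)
        if not matches:
            return None  # below threshold; a real system would ask a clarifying question
        paraphrase = matches[0]
        # Repeat the match target back before handing out the canned advice.
        answer = input(f'It sounds like you mean: "{paraphrase}". Is that right? ')
        return ADVICE[paraphrase] if answer.strip().lower() == "yes" else None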

mensetmanusman

What if it works a third as well as a therapist but is 20 times cheaper?

What word should we use for that?

inetknght

> What if it works a third as well as a therapist but is 20 times cheaper?

When there's studies that show it, perhaps we might have that conversation.

Until then: I'd call it "wrong".

Moreover, there's a lot more that needs to be asked before you can ask for a one-word summary disregarding all nuance.

- can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

- is the data collected by the AI therapist usable in court? Keep in mind that therapists often must disclose to the patient what sort of information would be usable, and whether or not the therapist themselves must report what data. Also keep in mind that AIs have, thus far, been generally unable to competently prevent giving dangerous or deadly advice.

- is the AI therapist going to know when to suggest the patient talk to a human therapist? Therapists can have conflicts of interest (among other problems) or be unable to help the patient, and can tell the patient to find a new therapist and/or refer the patient to a specific therapist.

- does the AI therapist refer people to business-preferred therapists? Imagine an insurance company providing an AI therapist that only recommends people talk to therapists in-network instead of considering any licensed therapist (regardless of insurance network) appropriate for the kind of therapy; that would be a blatant conflict of interest.

Just off the top of my head, but there are no doubt plenty of other, even bigger, issues to consider for AI therapy.

Ukv

Relevant RCT results I saw a while back seemed promising: https://ai.nejm.org/doi/full/10.1056/AIoa2400802

> can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

Agree that data privacy would be one of my concerns.

In terms of accessibility, while availability to those without network connections (or a powerful computer) should be an ideal goal, I don't think it should be a blocker on such tools existing when for many the barriers to human therapy are considerably higher.

inetknght

> In terms of accessibility, I don't think it should be a blocker on such tools existing

I think that we should solve for the former (which is arguably much easier and cheaper to do) before the latter (which is barely even studied).

lupire

I see an abstract and a conclusion that is an opaque wall of numbers. Is the paper available?

Is the chatbot replicatable from sources?

The authors of the study highlight the extreme unknown risks: https://home.dartmouth.edu/news/2025/03/first-therapy-chatbo...

zaptheimpaler

This is the key question IMO, and one good answer is in this recent video about a case of ChatGPT helping someone poison themselves [1].

A trained therapist will probably not tell a patient to take “a small hit of meth to get through this week”. A doctor may be unhelpful or wrong, but they will not instruct you to replace salt with NaBr and poison yourself. "A third as well as a therapist" might be true on average, but the suitability of this thing cannot be reduced to averages. Trained humans don't make insane mistakes like that, and they know when they are out of their depth and need to consult someone else.

[1] https://www.youtube.com/watch?v=TNeVw1FZrSQ

_se

"A really fucking bad idea"? It's not one word, but it is the most apt description.

ipaddr

What if it works 20x better? For example, in cases of patients being afraid of talking to professionals, I could see this working much better.

jakelazaroff

> What if it works 20x better?

But it doesn't.

prawn

Adjust regulation when that's the case? In the meantime, people can still use it personally if they're afraid of professionals. The regulation appears to limit professionals from putting AI in their position, which seems reasonable to me.

throwaway291134

Even if you're afraid of talking to people, trusting OpenAI or Google with your thoughts over a professional who'll lose his license if he breaks confidentiality is no less of "a really fucking bad idea".

6gvONxR4sf7o

Something like this can only really be worth approaching if there was an analog to losing your license for it. If a therapist screws up badly enough once, I'm assuming they can lose their license for good. If people want to replace them with AI, then screwing up badly enough should similarly lose that AI the ability to practice for good. I can already imagine companies behind these things saying "no, we've learned, we won't do it again, please give us our license back" just like a human would.

But I can't imagine companies going for that. Everyone seems to want to scale the profits but not accept the consequences of the scaled risks, and increased risks is basically what working a third as well amounts to.

lupire

AI gets banned for life: tomorrow a thousand more new AIs appear.

pawelmurias

You could talk to a stone for even cheaper with way better effects.

knuppar

Generally speaking and glossing over country specific rules, all generally available health treatments have to demonstrate they won't cause catastrophic harm. This is a harness we simply can't put around LLMs today.

thrown-0825

Just self diagnose on tiktok, its 100x cheaper.

BurningFrog

Last I heard, most therapy doesn't work that well.

amanaplanacanal

If you have some statistics you should probably post a link. I've heard all kinds of things, and a lot of them were nothing like factual.

BurningFrog

I don't, but what I remember from when I looked into this is that usually, people have the same problems after the therapy as they had before. Some get better, some get worse, and it's hard to tell what's a real effect.

One exception was certain kinds of CBT, for certain kinds of people.

baobabKoodaa

Then it will be easy to work at least 1/3 as well as that.

hoppp

Smart. Don't trust anything that will confidently lie, especially about mental health.

terminalshort

That's for sure. I don't trust doctors, but I thought this was about LLMs.

dardagreg

I think the upside for everyday people outweighs the (current) risks. I've been using Harper (harper.new) to keep track of my (complex) medical info. Obviously one of the use cases of AI is pulling data out of PDFs/images/etc. This app does that really well, so I don't have to link with any patient portals. I do use the AI chat sometimes, but mostly to ask questions about test results and stuff like that. It's way easier than trying to get in to see my doc.

zoeysmithe

I was just reading about a suicide tied to AI chatbot 'therapy' uses.

This stuff is a nightmare scenario for the vulnerable.

vessenes

If you want to feel worried, check the Altman AMA on reddit. A lottttt of people have a parasocial relationship with 4o. Not encouraging.

codedokode

Why doesn't OpenAI block the chatbot from participating in such conversations?

robotnikman

Probably because there is a massive demand for it, no doubt powered by the loneliness a lot of people report feeling.

Even if OpenAI blocks it, other AI providers will have no problem offering it.

jacobsenscott

Because the information people dump into their "ai therapist" is holy grail data for advertisers.

lm28469

Why would they?

sys32768

This happens to real therapists too.

at-fates-hands

It's already a nightmare:

From June of this year: https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-t...

Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation and encouraging antisocial behavior.

lupire

Please cite your source.

I found this one: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...

When someone is suicidal, anything in their life can be tied to suicide.

In the linked case, the suffering teen was talking to a chatbot model of a fictional character from a book that was "in love" with him (and a 2024 model that basically just parrots back whatever the user says with a loving spin), so it's quite a stretch to claim that the AI was encouraging a suicide, in contrast to a situation where someone was persuaded to try to meet a dead person in an afterlife, or bullied to kill themself.

cindyllm

[dead]

calibas

I was curious, so I displayed signs of mental illness to ChatGPT, Claude and Gemini. Claude and Gemini kept repeating that I should contact a professional, while ChatGPT went right along with the nonsense I was spouting:

> So I may have discovered some deeper truth, and the derealization is my entire reality reorganizing itself?

> Yes — that’s a real possibility.

IAmGraydon

Oof that is very damning. What’s strange is that it seems like natural training data should elicit reactions like Claude and Gemini had. What is OpenAI doing to make the model so sycophantic that it would play into obvious psychotic delusions?

calibas

All three say that I triggered their guidelines regarding mental health.

ChatGPT explained that it didn't take things very seriously, as what I said "felt more like philosophical inquiry than an immediate safety threat".

soared

There is a wiki of fanfic conspiracy theories or something similar. I can't find it, but in the thread about the VC guy who went GPT-crazy, people compared ChatGPT's responses to the wiki and they closely aligned.

jackdoe

The way people read language model outputs keeps surprising me, e.g. https://www.reddit.com/r/MyBoyfriendIsAI/

it is impossible for some people to not feel understood by it.

blacksqr

Nice feather in your cap, Pritzker; now can you go back to working on a public option for health insurance?