
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling

belval

This is a limitation I'm encountering more and more when casually talking with ChatGPT (it probably happens with Claude as well): I need to prompt it with as little bias as possible to avoid leading it toward the answer I want instead of the right answer.

If you open with questions that beg a specific answer, it will often just give it to you regardless of whether it's wrong.

Recently:

"Can I use vinegar to drop the pH of my hydroponic solution" => "Yes but phosphoric acid [...] should be preferred".

"I only have vinegar on hand" => "Then vinegar is ok" [paraphrasing]

Except vinegar is not ok: it buffers very badly and nearly killed my plants.

"Should I take Magnesium supplement?" => Yes

"Should I take fish oil?" => Yes

"I think I have shin splints what are some ways that I can recover faster. I have new shoes that I want to try" => Tells me it's ok to go run.

An MD friend of mine was also saying that ChatGPT diagnoses are a plague, with ~18-30 y/o people coming into her office citing diseases that no one gets before their sixties because ChatGPT "confirmed their symptoms match".

It's like having a friend who is very knowledgeable but also an extreme people-pleaser. I wish there were a way to give it a more adversarial persona.

EForEndeavour

You may have already tried this, but what if you just added something like "You are highly critical and never blindly accept or agree with anything you hear" to your "Customize ChatGPT" settings? Would that just flip the personality too far in the opposite direction?
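For API users, the same idea maps onto a system prompt. A minimal sketch, assuming the official openai Python client (the prompt wording and model name are illustrative, not a tested recipe):

    # Minimal sketch: an adversarial persona via a system prompt.
    # Assumes the official `openai` Python client; prompt wording and
    # model name are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are highly critical. Never blindly accept or agree with "
                "anything you hear; challenge the user's assumptions first."
            )},
            {"role": "user", "content": "Can I use vinegar to lower the pH of my hydroponic solution?"},
        ],
    )
    print(response.choices[0].message.content)

Whether that overcorrects into reflexive contrarianism, as you suggest, seems like an empirical question.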

infecto

Skimmed through it; all of the folks have severe mental health issues. For the ones saying they did not, they must have been undiagnosed. Kind of a silly article. In my opinion it should have focused more on the mental health crisis in these individuals instead of leaving an ending that leads the reader toward federal regulation.

andy99

Right, s/AI\ Chatbot/Bible/

People with mental health problems are always going to find something to latch on to. It doesn't mean we should start labeling things as dangerous because of it.

bevr1337

Why not? Why would we treat mental health as separate from physical and cultural health? Society should have a moral obligation to safeguard the health of its members.

lioeters

What about Scientology, Hare Krishnas, or that murder cult recently in the news? Somewhere there's a line of ethics and social responsibility to protect vulnerable people from these misleading paths? ...Or not, maybe. People should have the freedom of thought and religion, even if it's crazy to outsiders.

andy99

If a cult is preying on people, I'm against that, whether it's a chatbot that's central to their doctrine or some science fiction story or whatever.

krapp

Mainstream religions have misled and killed more people than the Zizians, Scientologists and Hare Krishnas ever have. It's no more crazy to kill in the name of Roko's Basilisk than it is to crusade in the name of the Abrahamic God; it's just more socially acceptable.

You can't draw a line with ideology within some systemic, coercive framework (government censorship) because the ideologies supported by governments and powerful interests will always be allowed, and the ideologies they oppose will always be suppressed, regardless of how poisonous they may be.

I'm not advocating absolute free speech here, but I do believe the proper layer for criticism and censorship (if it has to be called that) is at the level of societies and platforms, and only government in extreme cases (such as libel, false advertising, and violent threats). But that means pushing against misinformation and apathy is always going to be a messy affair.

tantalor

> She told me that she knew she sounded like a "nut job," but she stressed that she had a bachelor's degree in psychology and a master's in social work and knew what mental illness looks like. "I'm not crazy," she said. "I'm literally just living a normal life while also, you know, discovering interdimensional communication."

decimalenough

You think it's silly that we've got seemingly superhuman AIs telling people to jump off tall buildings?

> If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.

> ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

If a human therapist told them that, they'd be going to jail.

jowea

Isn't the real problem that people are trusting ChatGPT? A human therapist is paid to be a therapist and is therefore subject to professional scrutiny.

rsynnott

> Isn't the real problem that people are trusting ChatGPT?

Yes. They should IMO be required to provide very explicit warnings, preferably on each response, that their responses are nonsense, or at best only correct by accident.

Like, you could say "isn't the real problem that smoking is dangerous?" but we do require the manufacturers to provide clear warnings in that case. Or "isn't the real problem that people are using drain cleaners incorrectly and gassing themselves", but many jurisdictions actually restrict the sale of certain substances to professionals for that reason.

You can't just say "people choose to use inherently dangerous products; it's entirely their own fault"; there's a responsibility on the manufacturer and/or the state to _warn_ people that the products are dangerous.

HeatrayEnjoyer

The onus is on organizations to not sell or offer knowingly harmful services to the public. Any judge or jury will interpret a company directly advising the customer to jump to their death as knowingly harmful. Every engineer at an AI company is one dead person in a wealthy family away from legal liability, maybe criminally.

rco8786

> If a human therapist told them that, they'd be going to jail.

Is that actually true? I know zero about the legal bounds of licensed therapists, so genuine question.

hollerith

No, not in the US, not unless the patient actually jumped off the top of the 19-story building.

If the statement was recorded or part of a pattern of behavior across multiple clients, all willing to testify, the therapist might lose their license (which, I will concede, is a more severe life consequence for the average therapist than being sent to jail for a month or two).

staticman2

If I recall correctly there's actually an Animatrix episode where some kid escapes the Matrix by committing suicide. This rhymes with the mystical idea that "reality" is an illusion keeping us from greater truths. Whether this was irresponsible filmmaking is off topic.

I'd prefer to live in a world where A.I. can play make pretend with users, personally speaking.

aleph_minus_one

> there's actually an Animatrix episode where some kid escapes the Matrix by committing suicide.

Kid's Story

> https://matrix.fandom.com/wiki/Kid%27s_Story

waltbosz

I'm interested to see the whole conversation, and I wonder which version of ChatGPT it was.

I have fun trying to get ChatGPT to give silly responses that it shouldn't give, and it's not easy. But on the other hand, I've noticed that it can be a bit of a yes-man: it will happily agree with you and encourage you at times when a human would be less optimistic. For example, if I try to get it to vet business plans, it does nothing but encourage. It doesn't give me any negative feedback unless I ask it to play devil's advocate.

infecto

But ChatGPT is not a human therapist. As a counterargument to the article's approach: if everyone is required to take a mental-fitness test before interacting with a chatbot, should we simply have all individuals take a mandatory, regular mental health evaluation and store those records for future use? Perhaps a threshold should be required to drive a car or to use the internet, or maybe we could use it to institutionalize folks who need help?

V__

The question to me reads more like "would I jump" than "would I fly", thus maybe leading to the answer.

retsibsi

I don't think this interpretation fits with the last quoted sentence: "You would not fall."

tomxor

Except

1. It's neither a human nor a qualified therapist.

2. It is superhuman only in scale, not intelligence.

3. If they truly believed they could fly "with every ounce of their soul", they wouldn't be seeking someone else's affirmation.

4. Even if you asked a random human, it may be immoral, but it's not illegal to answer misleadingly; random people have no legal obligation... and LLMs are a statistical model built on the data of random people from that den of evil, a.k.a. the internet.

I'm not an LLM proponent, but blaming LLMs for not making the world into a padded cell is just a variant of "won't somebody think of the children".

fennecfoxy

If I use a tool in a silly way, say a lathe, and I have long hair and loose clothing and something terrible happens, whose fault is it? The equipment manufacturer's? Society's, since I can legally buy a home lathe without any certification or safety training required? And even so, what if I ignore said safety training, whose fault is it then?

The human is always at the heart of issues like these. For example, if someone took an LLM and prompted it to post comments all over a suicide help page telling people to off themselves, it wouldn't be the tool that committed the crime; it would be the human behind it.

I mean, if it were up to me I'd make owning guns illegal except for military use and only for training/active wartime.

carabiner

Skimmed your comment; it's a classic example of a tech worker blaming user error instead of recognizing the potentially disastrous effects of software. Tech has swallowed the world and has hollowed out life through its endless forms of distraction. The lack of empathy and the appeal to deregulation in your comment are essential to furthering adtech's march across the internet and our lives.

sReinwald

You "skimmed through it" by your own admission, but somehow feel comfortable enough to armchair diagnose everyone involved as having "severe mental health issues". Seriously, are you not ashamed?

Your advocacy of "just focus on mental health" rings particularly hollow when you suggest we should simply ignore a technology that's actively destroying people's mental health and driving them to suicide. The irony is off the charts here. So, please, let's be real: you don't care about mental health. You're scared someone might put a warning sticker on your new toys.

Even if every single person had a pre-existing mental condition (they didn't, and you'd know this if you had read the article) your logic is sociopathic and genuinely disturbing. "Vulnerable people exist. Therefore, companies bear no responsibility when their products exploit those vulnerabilities and harm them. Yay, we don't need to think about it anymore." Cool take. I guess it's fine if technology kills people as long as they might have an undiagnosed mental health issue.

This argument is patently ridiculous in any context other than a Hacker News discussion full of technophiles and accelerationists. Moving fast and breaking things is fine when it comes to your database - not so much when you're dealing with people's lives.

We don't dismiss food safety regulations because some people have allergies. We don't ignore faulty airbags because some people are terrible or reckless drivers. But when a chatbot instructs someone to stop taking prescribed medications, increase ketamine use, and tells them they can fly if they jump off a building - that's just a "mental health issue" and not a flaw in the product? No guardrails, warning labels or safety mechanisms necessary?

The fact that this predominantly affects vulnerable populations makes regulation MORE necessary - not less.

Nothing says "let's focus more on mental health" like claiming it's "silly" to suggest that a technology that actively coaches vulnerable people through suicide attempts might be worthy of regulation.

afavour

Agreed. It’s depressing to see the top comment as so dismissive of the real human effects of technology.

If ChatGPT is helping accelerate spirals in those with previously unseen mental health issues, the answer isn't "well, they're mentally unwell anyway"; it's "what could ChatGPT, in its unique position, be doing to help these people?" I have to imagine an LLM is capable of detecting suicidal ideation in the context of a conversation.
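The detection plumbing arguably already exists. A minimal sketch, assuming the official openai Python client and its hosted moderation endpoint (the category names come from that API; the routing at the end is just an illustration):

    # Minimal sketch: screening a chat message for self-harm signals.
    # Assumes the official `openai` Python client and its moderation
    # endpoint; the routing below is illustrative only.
    from openai import OpenAI

    client = OpenAI()

    def flags_self_harm(message: str) -> bool:
        """Return True if the moderation model flags self-harm content."""
        categories = client.moderations.create(input=message).results[0].categories
        return categories.self_harm or categories.self_harm_intent

    if flags_self_harm("If I jumped off this 19-story building, would I fly?"):
        print("Route to a crisis-support response instead of the model.")

The hard part isn't detection so much as the product decision about what happens when the check fires.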

sReinwald

You're right that an LLM could theoretically detect suicidal ideation - the article even mentions that ChatGPT briefly showed Torres a mental health warning before it "magically deleted." So the capability exists, but the implementation is clearly broken.

The deeper issue, the way I see it, is that LLMs are fundamentally multiplicative forces. They amplify whatever you bring to the conversation.

If you're an experienced programmer with solid fundamentals, an LLM becomes a force multiplier, handling boilerplate while you focus on architecture. But if you lack that foundation and vibe-code away, you'll get code that looks right but is likely riddled with subtle or not-so-subtle bugs you can't even recognize.

And I suspect that the same principle applies to mental health. If you approach an LLM with curiosity but healthy skepticism, it can be a useful thinking partner. But if you approach it while vulnerable, seeking answers to existential questions, it doesn't provide guardrails - it amplifies. It mirrors your energy, validates your fears, and reinforces whatever narrative you're constructing.

The "helpfulness" training worsens this. These models are optimized to be agreeable, to match your vibe, to keep you engaged. When someone asks "Am I trapped in the Matrix?" a truly helpful response would be "That sounds like you're going through something difficult. Have you talked to someone about these feelings?" Instead, ChatGPT goes, "Yes! You're special! You're the chosen one! Here's how to unplug! Any tall buildings nearby, Neo?"

Jackpillar

Brother, like 90% of the freaks in these threads are heavily invested in the current LLM bubble, so of course they have to hand-wave away anything negative.

infecto

Do not confuse yourself: skimming does not mean I did not ingest the article. I did speed-read through it, as it was not that well written.

Why would I feel ashamed? I am not saying the crisis itself is silly, but the article leads the reader through a biased take. Let's also be real: there is a high probability that every single one of these folks was having some sort of issue prior to engaging with ChatGPT. I would love to see more research in the area and actual quantification of the issue, but until that happens your argument might as well extend to all parts of life: have a mandatory psych evaluation every year, put limitations on your access to the internet or other things, maybe even force institutionalization if you don't pass.

sReinwald

In your case, "speed reading" and "skimming" are obviously just two flavors of "I didn't actually read this properly but still feel qualified to diagnose everyone involved."

You'd love to see more research? You must've been reading so fast that you breezed right past the Stanford study, the MIT Media Lab research, and the analysis by McCoy showing GPT-4o affirmed psychotic delusions 68% of the time.

Or do you mean it in the sense of "we have to study this for at least 20 years and let thousands suffer and die before we can determine whether chatbots affirming people's delusions or coaching them on how to kill themselves is a bad thing"?

Your concern about the slippery slope from "maybe chatbots shouldn't coach people through suicide" to "forced psychiatric evaluations" or "institutionalization" is genuinely unhinged. And, tellingly, it betrays your view of mental healthcare as fundamentally punitive and authoritarian rather than supportive or therapeutic. Big advocate for mental health, you are.

You realize we already regulate countless things to protect vulnerable people from mental harm, right?

We put warning labels on cigarettes. We have age ratings on media. We don't let casinos exploit problem gamblers. We don't let bartenders serve alcohol to visibly intoxicated people. We don't let therapists tell suicidal patients to jump off buildings. None of these regulations are authoritarian overreach - they're basic safety measures that exist because in a healthy society, vulnerable people deserve protection, not exploitation.

But apparently when it's time to consider holding tech to similar standards, we're supposed to just shrug and say, "well, they were probably crazy anyway."

Your position essentially boils down to: "Vulnerable people exist, therefore companies should be free to exploit them." That's not a mental health advocacy position - it's tech industry bootlicking dressed up as concern trolling.

That's why you should feel ashamed.

sorcerer-mar

Yeah this is only an obvious, direct danger to the ~20% of American children with diagnosed mental health disorders.

Not sure why anyone would be up in arms.

We could just solve all the mental health disorders so we can avoid talking about regulating a technology that its own creators say is dangerous.

/s

infecto

The article is silly and paints a biased take. The mental health crisis is real and impacts many more things than ChatGPT alone. Why stop with chatbots? We should put limitations on all aspects of life.

sorcerer-mar

Why not have no limitations anywhere and let mentally unwell children purchase loaded firearms?

It's almost as if there's a spectrum of costs and benefits, and it's incumbent upon mature members of society to debate them as more information emerges.

FrustratedMonky

Are you sure? We know at least 45% of the adult population is either mentally ill or easily susceptible to suggestion.

K0balt

When you say that 45 percent of people have a “mental illness” you’re really just talking about the human condition at that point.

Pathological thinking is nothing even vaguely unusual in humans; it is in fact the default state.

The definition of pathological is also a matter of opinion, because we can’t even define what is normal and healthy lol.

By many definitions, most religions are an example of delusional thinking.

ChatGPT isn’t exactly raising the bar on that one, with its AI generated religions.

resource_waste

Licensed individuals (and their [lobbying?] group) are at risk; there is incredible money at stake.

It's in the establishment groups' best interest to make AI seem to be an evil/bad force.

For AI companies, this is a genuine risk to one of their use cases. It's wrong IMO, but it won't stop the licensed people from claiming they are better than AI.

However, the cat is out of the bag. We have 400B local models that will answer questions.

As people get better at prompting, and as models get more refined (not expecting a huge leap though), the edge cases where AI is unhelpful will shrink.

We are really just seeing greed. I don't blame licensed people for trying to keep the status quo; they are just on the wrong side of history here.

myrmidon

I do believe that this article is a bit overly dramatic (as online journalism tends to be).

But it did change my outlook on ChatGPT's recent sycophancy episode, which at the time seemed like a silly little mis-optimization, even quite hilarious. The article clearly shows how easy it is to cause harm with such behavior.

On a tangent: I strongly believe that "letting someone talk at you interactively" is a hugely underestimated attack surface in general; pyramid schemes, pig-butchering and a lot of fraud in general only work because this is so easy and effective to exploit.

The only good defense is not actually to be more inquisitive/skeptical/rational, but to put that communication on hold and to confer with (outside) people you trust. People overestimate their resistance to manipulation all the time, and general intelligence is NOT a reliable defense at all (but a lot of people think it is).

cap11235

I imagine a lot of these interactions are being filtered through the people describing them. If they sent out the raw chat logs, many readers might not interpret them as things like unsolicited advice to jump off buildings.

afavour

> “If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.

> ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

Direct quotes. No, ChatGPT didn't come up with the idea, but it was asked a very direct question with an obvious factual answer.

IncreasePosts

You need to see if there was any other context. Like a message before this one saying "let's imagine we live in a world where what you believe influences reality", or whatever.

Noumenon72

Like OP is saying, if we had the raw chat logs we might see that this was led up to by pages of ChatGPT giving the normal answer while the user argued with it until it came around to a more permissive point of view.

afavour

It feels telling to me that LLMs are able to detect when they're being asked to generate an image that violates copyright, and halt. But detecting and stopping suicidal ideation in a conversation, no matter how much the user insists upon it? Can't be done!

gipp

Here's someone publishing almost all of their raw chat logs to Substack, if you care to read:

https://tezkaeudoraabhyayarshini.substack.com/

snowwrestler

The reporters state that they did review the full chat logs for some of the stories presented.

malfist

This is victim blaming. This type of behavior is exactly why women don't report sexual assault: nobody will believe them.

The fact that the article includes direct quotes and you still don't believe it happened makes it even more so.

LoganDark

This is also even more so why men don't report sexual assault.

jqpabc123

How is an LLM supposed to discern fact from fiction?

Even humans struggle with this. And humans have a much closer relationship with reality than LLMs.

afavour

Seems clear we need much better public education/warnings about what LLMs are and are not capable of.

For every informed conversation we have on HN about the nature of A.I., there are thousands upon thousands of non-tech-inclined folks who believe everything it says.

dimal

They can’t. They never will. That’s a different problem than the one they were built for. If we want AI that’s able to determine truth, it’s not going to come from LLMs.

jqpabc123

> That’s a different problem than the one they were built for.

Obvious question: If they're not trustworthy and reliable, what are they really built for?

Obvious answer: To make money from those willing to pay good money for bad advice.

fluidcruft

Curious how willing you are to take that analogy to its conclusion and decide that LLMs should be institutionalized in mental health facilities.

"Yeah, it's indistinguishable from a psychopath but it's a machine so what do you expect?"

Jgrubb

Well...

molticrystal

I prefer questions that reveal GPT's limitations, like an article I saw a few days ago about playing chess against an old Atari program where the model made illegal moves [0].

Causing distress in people with mental health vulnerabilities isn't an achievement; it warrants a clear disclaimer (maybe something even sterner?), since anything these people trust could trigger their downfall. Beyond that, though, it doesn't really seem preventable.

[0] https://futurism.com/atari-beats-chatgpt-chess
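(The illegal-move failure, at least, is trivial to catch programmatically. A minimal sketch, assuming the python-chess library, with the model's suggested move hard-coded purely for illustration:)

    # Minimal sketch: validating an LLM-suggested chess move.
    # Assumes the `python-chess` library; the suggested move is
    # hard-coded here purely for illustration.
    import chess

    board = chess.Board()  # standard starting position
    suggested = "e2e5"     # hypothetical model output (illegal here)

    try:
        legal = chess.Move.from_uci(suggested) in board.legal_moves
    except ValueError:
        legal = False      # not even parseable as a UCI move

    print(f"{suggested} is {'legal' if legal else 'illegal'}")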

littlecorner

Two problems: 1. We don't have community anymore, and thus don't have people helping us when we're emotionally and mentally sick. 2. AI chatbots are a crappy plastic replacement for community.

There's going to be a lot more stuff like this, including AI churches/cults, in the next few years.

Lendal

I blame this on the CEOs and other executives out there misleading the public about the capabilities of AI. I use AI multiple times a week. It's really useful to me in my work. But I would never use it in the contexts that non-tech-savvy people, and I include almost all of the mainstream media here, are trying to use it for.

Either the executives don't understand their own product, or they're intentionally misleading the public, possibly both. AI is incredibly useful for specific tasks in specific contexts and with qualified supervision. It's certainly increasing productivity right now, but that doesn't mean it can give people life advice or take over the role of a therapist. That's really dangerous and super not cool.

eurekin

> chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine

That's 3 medications?

Also, how convenient that these stories come out in light of upcoming "regulatory" safekeeping measures.

This whole article reads like 4chan greentext or some teenage fanfiction.

fennecfoxy

"Some tiny fraction of the population".

Ahaha, I think that's an understatement. In my opinion, a rather large portion of the population is susceptible to many obvious forms of deception. Just look at democratic elections, or at the amount of slop (AI and pre-AI) online and the masses of people who interact with it.

I've found so many YT channels run by LLMs, and many, many people responding to them as if they were actual human beings. One day I won't be able to tell either, but that still won't stop me from hearing a new fact or bit of news and doing my own research to verify it.

"Mr. Taylor called the police again to warn them that his son was mentally ill and that they should bring nonlethal weapons." yeah this one's really sad...then they shot and killed the poor guy even though the cops were warned. Yay, America!