
OpenAI ends legal and medical advice on ChatGPT

o11c

When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.

entropicdrifter

Can't wait for AI nutritionists to kill people on crash diets

mirabilis

An AI-related bromide poisoning incident earlier this year: “Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet. For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning… However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”

https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260

tacker2000

Aka software engineers…

Lionga

maybe that is why they opened the system to porn, as everything else will soon be gone.

miki123211

> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

Is this an actual technical change, or just legal CYA?

bearhall

I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor, of course) for the last week. Last night it refused to give an opinion, saying “While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.

I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.

doctoboggan

I suspect this is an area where a bit of clever prompting will now prove fruitful. The system commands in the prompt will probably be "leaked" soon, which should give you good avenues to explore.

Zr01

The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.

fluidcruft

I think one of the challenges is attribution. For example, if you use Google search to create a fraudulent legal filing, there aren't any of Google's fingerprints on the document; it gets reported as malpractice. Whereas with these things, the reporting holds OpenAI or whatever AI responsible. So even from the perspective of protecting a brand, it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes, and the reporting were that Nike shoes were used to rob the Louvre, and all anyone talked about was Nike and how people need to be careful about what they do wearing Nike shoes.

It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.

benrapscallion

This (attribution) is exactly the issue that was mentioned by LexisNexis CEO in a recent The Verge interview.

https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...

segmondy

Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? One loss in a major lawsuit is horrible, and there's already the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.

miltonlost

> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.

You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!" Search engines don't present their results as the truth; LLM hypers do.

Zr01

I appreciate how the newer versions provide more links and references. It makes the task of verifying the output (or at least where it got its results from) that much easier. What you're describing seems more like an advertisement problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to drive nails is hard enough to bruise your fingers.

watwut

If they do that, they will be subject to regulations on medical devices. As they should be, and it means the end result will be less likely to promote complete crap than it does now.

scarmig

And then users balk at the hefty fee and start getting their medical information from utopiacancercenter.com and the like.

cpfohl

Tricky… my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed, I walked an older version of ChatGPT through our experience, and it suggested my son’s issue as a possibility, along with the correct diagnostic tool, in just one back and forth.

I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.

rafaelmn

>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.

Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong!). It doesn't have to be obvious leading, just framing the question in terms of mentioning all the symptoms you now know to be relevant, in the order that's diagnosable, etc.

Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.

schiffern

> checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history

So... exactly the same behavior as we observe in human doctors?

tencentshill

We are all obligated to hoard as many offline AI models as possible if the larger ones are legally restricted like this.

throwaway290

this is fresh news right? a friend just used chatgpt for medical advice last week (stuffed his wound with antibiotics after motorbike crash). are you saying you completely treated the congenital issue in this timeframe?

cj

He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.

Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.

If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the ChatGPT account now knows the son has a specific condition).

The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble even more so than with Google.

ninininino

You can just use the wipe-memory feature, or if you don't trust that, start a new account (new login creds). If you don't trust that, then get a new device, cell provider/wifi, credit card, IP, login creds, etc.

PragmaCode

It's not stopping giving legal/medical advice to the user; rather, it's now forbidden to use ChatGPT to pose as an advisor giving advice to others: https://www.tomsguide.com/ai/chatgpt/chatgpt-will-still-offe...

trollbridge

One wonders how exactly this will be enforced.

fainpul

It's not about enforcing this, it's about OpenAI having their asses covered. The blame is now clearly on the user's side.

OutOfHere

It was already enforced by hiding all custom GPTs that offered medical advice.

bstsb

i don't think it's stopped providing said information, it's just now outlined in their usage policies that medical and legal advice is a "disallowed" use of ChatGPT

uslic001

As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.

learnplaceai

Sad times - I used ChatGPT to solve a long-term issue!

randycupertino

Sounds like it is still giving out medical and legal information just adding CYA disclaimers.

mikkupikku

It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.

(Turns out I would need permits :-( )

doctoboggan

It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.

emaccumber

good thing that guy was able to negotiate his hospital bills before this went into effect.