The A.I. Radiologist Will Not Be with You Soon
94 comments · May 14, 2025 · Workaccount2
heyitsguay
There are many, many papers and projects out there about tuning foundation models on various types of medical imaging data, and many organized efforts to produce large medical imaging datasets to feed that training. This stuff is well-known in the trenches and can improve on the older, smaller CNNs in some ways, but not in a way that's produced any step change in automated capabilities. People are certainly working on it!
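For anyone curious what that tuning typically looks like, here's a minimal sketch, assuming PyTorch/torchvision; the two-class head and the dataset are hypothetical stand-ins, not any particular project's setup:

    # Minimal sketch: fine-tuning a pretrained backbone on medical images.
    # Assumes torchvision; the task (normal vs. abnormal) is illustrative.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():
        p.requires_grad = False                    # freeze the pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)  # new head for the imaging task

    optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # images: (N, 3, 224, 224) tensors; grayscale studies would need their
        # single channel replicated to three to match the ImageNet backbone.
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()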
seesthruya
Agreed
seesthruya
Working radiologist here, 20 years experience.
This article is surprisingly accurate. I fully expect to finish my career without being 'replaced' by AI.
Happy to debate/answer questions :-)
TuringNYC
>> Happy to debate/answer questions :-)
Curious -- do you think that is because
1. the technology isn't there, or
2. because it isn't a competitive market (basically, the American Board of Radiologists controls standards of practice and can slow down technologies seen as competitive to human doctors)?
3. or perhaps 1 doesn't happen because outsiders know the market is guarded by the market itself?
belly_joe
I would say, generally speaking, that people who assume AI will replace somebody else's job believe those jobs are merely mechanical, with no high-level reasoning involved that would basically require AGI (and when that comes about, nobody is safe). So the model of the AI radiologist assumes the only job of a radiologist is to classify images, which is pretty vulnerable to near-future disruption.
I imagine, given the training involved, the job involves more than just looking at pictures? This is what I would like to see explained.
The analogy would be the "95% of code is written by AI" stat that gets trotted out, with image evaluation in place of code. Yes, AI will write the code, but someone has to tell the AI what to write, which is the tricky part.
tbrownaw
We already have AI taxis (in specific limited areas, but still). Driving isn't something I'd usually call "merely mechanical".
tintor
Driving (in the US) is considered unskilled labor.
seesthruya
100%
Al-Khwarizmi
If (as acknowledged in the article) AI automates at least part of the work of radiologists (e.g. a tool that "saves her 15 to 30 minutes each time she examines a kidney image"), don't you fear that the demand for radiologists will decline? Even if some are still needed, surely if a hospital needs X reports per day and Y radiologists can now provide them rather than the current Z (Y < Z), that is something people considering your career should take into account?
On the other hand, how much of your confidence in not being replaced stems from AI not being able to do the work, and how much from legal/societal issues (a human needing to be legally responsible for the diagnoses)? Honestly the description in the article of what a radiologist does "Radiologists do far more than study images. They advise other doctors and surgeons, talk to patients, write reports and analyze medical records. After identifying a suspect cluster of tissue in an organ, they interpret what it might mean for an individual patient with a particular medical history, tapping years of experience" doesn't strike me as anything impossible for AI within a few years, now that models are multimodal and they can work with increasing amounts of text (e.g. medical histories).
postexitus
No. There is no area of medicine where a boost in productivity will cause doctors to have idle time. The wait times may decrease, throughput may increase, diagnosis accuracy may improve, even costs may decrease (press X to doubt), but there will never be a case where we need fewer radiologists.
whynotminot
Which may take us to a sort of “Jevons paradox” kind of place, but for medical care.
Like there are times already where I’ve put off or not sought medical care because of the hassle involved.
If I could just waltz into the office and get an appointment and have an issue seen to same day I would probably do it more often.
seesthruya
There is a national shortage of radiologists in the US, with many hospital systems facing a backlog of unread cases measuring in the thousands. And, the baby boomers are starting to retire, it's only going to get worse. We aren't training enough new radiologists, which is a different discussion.
As to your question on where my confidence stems from, there are both legal reasons and 'not being able to do the work' reasons.
Legal is easy, the most powerful lobby in most states are trial attorneys. They simply won't allow a situation where liability cannot be attached to medical practice. Somebody is getting sued.
As to what I do day to day, I don't think I'm just matching patterns. I believe what I do takes general intelligence. Therefore, when AI can do my job, it can do everyone else's job as well.
mullingitover
> We aren't training enough new radiologists, which is a different discussion.
About that, I think the AMA is ultimately going to be a victim of its own success. It achieved its goal of creating a shortage of medical professionals and enriching the existing ones. I don't think any of their careers are in danger.
However, long term, I think magic (in the form of sufficiently advanced technology) is going to become cost effective at the prices that the health care market is operating at. First the medical professionals will become wholly dependent on it, then everyone will ask why we need to pay these people eye-watering sums of money to ask the computers questions when we can do that ourselves, for free.
reissbaker
The trial lawyer angle doesn't seem accurate. Did trial lawyers prevent pregnancy tests from rolling out? COVID tests? Or any other automatic diagnostic, as long as it was reasonably accurate?
Not as far as I know. Once an automated diagnostic is reasonably accurate, it replaces humans doing the work manually. The same would be true of anything else that can be automatically detected.
No comment on whether radiology is close to that yet, although I don't think a few-million-param neural network would tell us much one way or another.
6stringmerc
A big wrinkle in AI evangelism is that proponents don’t understand the concept of human judgment as a “learned” skill - it takes practice and AI models / systems do not suffer consequences the way humans do. They have no emotions and therefore can not “understand” the implications of their decisions.
For context, generative AI music is basically unlistenable. I’ve yet to come across a convincing song, let alone 30 seconds worth of viable material. The AI tools can help musicians in their workflow, but they have no concept of human emotion or expression and it shows. Interpreting a radiology problem is more like an art form than a jigsaw puzzle, otherwise it would’ve been automated long ago (like a simple blood test). Like you note, the legal system in the US prides itself on “accountability” (said tongue in cheek) and AI suffers no consequences.
Just look how well AI worked in the United Healthcare deployment involving medical care and money. Hint: stock is still falling.
dingnuts
If the cost of preventative scans goes down, demand will rise. Medical demand is incredibly constrained by price; people skip all kinds of tests they need because they can't afford them. The radiologists will have more work to do, not less.
bparsons
There is a perpetual shortage of these types of technicians, so it is unlikely that demand for those jobs will drop.
tarunkotia
I worked on an autocontouring model, but we could not get high enough accuracy for it to be adopted commercially. The algorithm would work for some organs but would totally freak out on others. And if the anatomy was out of the norm, it would not work at all. This was 5 years ago; I see Siemens [0] has a similar tool. I remember shadowing a dosimetrist contouring all the Organs-At-Risk (OAR), and it took about 3-4 hours to contour one CT image of the thoracic region. Do you know how much better the autocontouring tools have become?
[0] https://www.siemens-healthineers.com/en-us/radiotherapy/soft...
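For a sense of how these autocontouring models are typically structured, here's a minimal sketch assuming the open-source MONAI library; the organ list and network sizes are illustrative, not taken from any shipping product:

    # Minimal sketch of an organ-at-risk autocontouring setup, assuming MONAI.
    import torch
    from monai.networks.nets import UNet

    ORGANS = ["esophagus", "heart", "lungs", "spinal_cord"]  # hypothetical OAR set

    model = UNet(
        spatial_dims=3,                   # volumetric CT input
        in_channels=1,
        out_channels=len(ORGANS) + 1,     # +1 for background
        channels=(16, 32, 64, 128),
        strides=(2, 2, 2),
    )

    ct_volume = torch.randn(1, 1, 96, 96, 96)  # stand-in for a thoracic CT
    with torch.no_grad():
        logits = model(ct_volume)
        contours = logits.argmax(dim=1)         # per-voxel organ labels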
jjtheblunt
(great username: a radiologist with "seesthruya")
HelloMcFly
As my wife says: "Until it's as easy to sue AI as it is doctors, we probably won't see AI replacing doctors."
newyankee
Maybe in the West. However, more practical countries like China, with a huge population and a clear incentive to provide healthcare at reduced cost, will have incentives to balance accuracy and usefulness in a better way.
My personal opinion is that a lot of medical professionals are simply gatekeeping at this point and using legal definitions to keep moving the goalposts.
However, this is a theme that will keep repeating in all domains, and I do feel that gradual change is better than sudden, disruptive change.
laborcontract
> Maybe in the West. However, more practical countries like China, with a huge population and a clear incentive to provide healthcare at reduced cost, will have incentives to balance accuracy and usefulness in a better way.
This is a really interesting point that I haven't considered. Namely, regulatory arbitrage is going to yield enormous benefits in the medical AI space. The sheer amount of data needed to train the models requires a degree of data centralization that the West has no desire to move toward. But if China does crack the nut, it seems like it will necessarily create an upheaval in the West, whether we like it or not.
candiddevmike
AI in healthcare is going to add so many layers of indirection for malpractice lawsuits. You'll spend years and lots of $$$ trying to figure out who the defendant would ultimately be, only for it to end up being a LLC that unfortunately just filed for bankruptcy.
educasean
The worry isn't that you'll find an AI sitting in the chair a radiologist used to sit in. It's that the entire field of radiology gets reduced to a button click in a piece of software.
The other doctors will still be there for you to sue.
ogogmad
What if people just bought the equipment and did the scans at home?
mikestew
So the question is, “what if people bought an x-ray machine (affordably available on Amazon) and started using it without training on radiological safety”?
ceejayoz
Have you priced out a CT scanner and MRI?
Will you be able to source a radioactive source for your x-rays?
M95D
Respectfully, it doesn't matter what you expect or think. What matters is this:
- If the law allows AI to replace you.
- If the hospital/company thinks [AI cost + AI-caused lawsuits] will be less expensive than [your salary + lawsuits caused by you].
I'm almost in the same situation as you are. I have 22 years left until retirement, and I'm thinking I should change my career before I'm too old to do it.
dang
> it doesn't matter what you expect or think
Can you please edit out swipes like that from your HN posts? (Prepending "respectfully" doesn't help much.) This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
The rest of your comment is just fine of course.
seesthruya
I agree with you fully.
And, I didn't say I would never be replaced. I said I would finish my career, which is approximately 10 more years at this point.
candiddevmike
What career would you change to that would be safe, given the conditions you provided and your time horizon?
jerf
The original author of the paper about the technological singularity [1] defines it as simply the point where predictions break down.
If AI gets to the point where it is truly replacing radiologists and programmers wholesale, it is difficult to tell anyone what to do about it today, because that's essentially on the other side of the singularity from here. Who knows what the answer will be?
(Ironically, the author of that paper, who is also a science fiction author, is responsible for morphing the singularity into "the rapture for nerds" in his own sci-fi writing. But I find the original paper's definition to have more utility in the current world.)
[1]: https://accelerating.org/articles/comingtechsingularity
pc86
Crime?
janice1999
Are AI models able to detect abnormalities that even an experienced radiologist can't see? I.e., something that would look normal to a human eye, but the AI correctly flags it for investigation? Or are all AI detections 'obvious' to human eyes and simply a confirmation? I suspect the latter, since the models were trained on human-annotated images.
seesthruya
Depends on what you mean by 'see'.
For example, let's say I'm looking at a chest x-ray. There is a pneumonia at the left lung base and I am clever enough to notice it. 'Aha', I think, congratulating myself at making the diagnosis and figuring out why the patient is short of breath.
But, in this example, I stop looking closely at the X-ray after noticing the pneumonia, so I miss a pneumothorax at the right lung apex.
I have made a mistake radiologists call 'satisfaction of search'.
My 'search' for the patient's problem was 'satisfied' by finding the pneumonia, and because I am human and therefore fundamentally flawed, I stopped looking for a second clinically relevant diagnosis.
An AI module that detects a pneumothorax is not prone to this type of error. So it sees something I did not. But it doesn't see something that I can't see. I just didn't look.
ceejayoz
This is definitely a thing.
https://www.npr.org/sections/health-shots/2013/02/11/1714096...
I'm skeptical of the claim that AI isn't prone to this sort of error, though. AI loves the easy answer.
alabastervlog
> I have made a mistake radiologists call 'satisfaction of search'.
Ah, now I have a name for it.
When I've chased a bug and fixed a problem I found that would cause the observed problem behavior, but haven't yet proven the behavior is corrected, I'm always careful to specify that "I fixed a problem, but I don't know if I fixed the problem". Seems similar: found and fixed a bug that could explain the issue, but that doesn't mean there's not another one that, independently, would also cause the same observed problem.
bilbo0s
I've been going to RSNA for over 25 years. In all that time, the best I've seen from any model presented to me was the smack-the-radiologist-on-the-head, "you dummy, you should have seen that!" model.
That is, the models spot pathologies that 99.9999% of rads would spot anyway if not overworked, tired, or in a hurry. But, addressing the implication of your question, the value is actually in spotting a pathology that 99.9999% of rads would never spot. In all my years developing medical imaging startups and software, I've never seen it happen.
I don't expect to see it in my lifetime.
SketchySeaBeast
I'm sure it's a matter of training data, but I don't know if it's a surmountable problem. How do you get enough training data for the machine to learn and reliably catch those exceptions?
seesthruya
I have a fairly strong background in tech, and I've been programming computers since 1979 when my dad bought me a TRS-80. Tape drives FTW!
I agree with almost everything you've said here.
Except 'not in my lifetime', because I plan on living for a very long time, and who knows what those computer nerds will come up with eventually ;-)
d_burfoot
The key to the power of the LLM is that the training process can learn effectively from vast corpora of unlabelled text. Unfortunately, there is no comparably vast database of medical images.
In order to "crack" radiology, the AI companies would need to launch an enormous data collection program involving thousands of hospitals across the world. Every time you got an MRI or X-Ray, you would sign some disclosure form that allowed your images to be anonymously submitted to the central data repository. This kind of project is very easy to describe, but very difficult to execute.
seesthruya
I agree with you, but here is where things get tricky:
Every day I see something on a scan that I've never seen before. And, possibly, no one has ever seen before. There is tremendous variation in human anatomy and pathology.
So what do I do? I use general intelligence. I talk to the patient. I talk to the referring doctor. I compare with other studies, across modalities and time.
I reason. I synthesize. I think.
So my point is, basically, radiology takes AGI.
physicsguy
They’ll have better luck in countries like the U.K., where medical data is at least somewhat more organised by virtue of being under the NHS umbrella.
justlikereddit
>Unfortunately, there is no comparably vast database of medical images.
Even a tiny hospital with radiology services will produce many thousands of images with accompanying descriptions every year. And you are allowed to anonymize and do research on these things in many places as neither image nor accompanying description is a personal identifier.
So this is yet another Hinton-ish prediction: any day now, radiologists are going the way of the dodo. This time, LLMs will crack the nut that image recognition has failed to crack for 20 years.
Where LLMs have succeeded is in producing hot takes that miss the mark; they should be really good at cornering the "prematurely predicting the demise of radiologists" market.
EcommerceFlow
So are datasets/currently available data the limitation here?
Let's say a major healthcare leak occurred, involving millions of images and associated doctor notes, diagnostics, etc... would this help advance the field or is it some algorithmic issue?
pj_mukh
"The staff has grown 55 percent since Dr. Hinton’s forecast of doom, to more than 400 radiologists."
Wonder what other forecasts of doom he is wrong about :|.
bobowzki
The thing I find most interesting about ML in radiology is that a computer can observe the entire dynamic range of the sensor at once. A human can only look at one window, or a compressed view, at a time.
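To make that concrete, here's a minimal sketch of the window/level compression a human reader works through, assuming NumPy; the presets are conventional CT Hounsfield-unit values, not from any specific viewer:

    import numpy as np

    def apply_window(hu, center, width):
        # Map a Hounsfield-unit image into an 8-bit display range.
        lo, hi = center - width / 2, center + width / 2
        return ((np.clip(hu, lo, hi) - lo) / (hi - lo) * 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    ct_slice = rng.integers(-1024, 3000, size=(512, 512))  # stand-in for real HU data

    # A radiologist flips between presets like these; a model can ingest raw HU values.
    lung_view = apply_window(ct_slice, center=-600, width=1500)
    bone_view = apply_window(ct_slice, center=300, width=1500)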
husarcik
This is a very key point. Perhaps that discrepancy can be leveraged in image generation to save time.
bilbo0s
It already is, which is why rads input window/level settings.
sulam
“Radiologists do far more than study images. They advise other doctors and surgeons, talk to patients, write reports and analyze medical records. After identifying a suspect cluster of tissue in an organ, they interpret what it might mean for an individual patient with a particular medical history, tapping years of experience.”
Now think about how much of software development is typing out the code vs talking to people, getting a clear definition of the problem, debugging, etc. (I would love an LLM that could debug problems in production — but all they can do is tell me stuff I already know). Then layer on that there are far more ideas for what should be built than you have time to actually build in every organization I’ve ever worked in.
I’m not worried about my job. I’m more worried my coworkers won’t realize what a great tool this is and my company will be left in the dust.
I just want to point out that the term "A.I." gets used pretty loosely in these articles, as if A.I. is a monolithic commodity that you plug in to your software to make it do ChatGPT.
The example in the article is an in house developed "A.I." to help radiologists assess images. Digging a bit deeper it seems they are using mostly old CNN type architectures with a few million parameters.[1]
It still remains to be seen what a 1T+ parameter transformer trained specifically for radiology will do. I think anyone would be confident that a locally run CNN will not hold a candle to it.
[1] https://mayo-radiology-informatics-lab.github.io/MIDeL/index...
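For scale, here's a sketch of what "a few million parameters" means in practice, assuming PyTorch; the architecture is illustrative, not the model described in the article:

    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(512, 2),
    )

    n_params = sum(p.numel() for p in cnn.parameters())
    print(f"{n_params:,}")  # ~1.5 million, versus ~1e12 for a 1T-parameter transformer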