
Mayo Clinic's secret weapon against AI hallucinations: Reverse RAG in action

natnat

Can someone link to a real source for this? Like, a paper or something? This seems very interesting and important, and I'd prefer to look at something less sketchy than venturebeat.com

beebaween

Curious if anyone has attempted this in an open-source context? Would be incredibly interested to see an example in the wild that can point back to the pages of a PDF, etc.

theodorewiles

If I had to guess, it sounds like they are using CURE to cluster the source documents, then mapping each generated fact back to the best-matching cluster, and finally testing whether that cluster actually supports the fact?
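Something roughly like the sketch below, with TF-IDF vectors and scikit-learn's agglomerative clustering standing in for a real embedding model and CURE (which has no sklearn implementation), and an arbitrary similarity threshold standing in for a real support test. The data and threshold are illustrative assumptions, not anything from the article:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of source-document chunks and facts pulled from a generated summary.
source_chunks = [
    "Patient was diagnosed with type 2 diabetes in 2001.",
    "Metformin 500 mg was prescribed at the 2001 visit.",
    "A follow-up MRI in 2003 showed no abnormalities.",
]
generated_facts = [
    "The patient received a diabetes diagnosis in 2001.",
    "The patient underwent surgery in 2002.",  # unsupported fact
]

# TF-IDF as a stand-in for a learned encoder.
vec = TfidfVectorizer().fit(source_chunks + generated_facts)
chunk_emb = vec.transform(source_chunks).toarray()
fact_emb = vec.transform(generated_facts).toarray()

# Cluster the source chunks (stand-in for CURE).
labels = AgglomerativeClustering(n_clusters=2).fit_predict(chunk_emb)
centroids = np.vstack(
    [chunk_emb[labels == c].mean(axis=0) for c in np.unique(labels)]
)

for fact, emb in zip(generated_facts, fact_emb):
    # Map each generated fact to its best-matching cluster...
    best_cluster = int(np.argmax(cosine_similarity([emb], centroids)))
    members = chunk_emb[labels == best_cluster]
    # ...then test whether any chunk in that cluster actually supports it.
    support = float(cosine_similarity([emb], members).max())
    verdict = "supported" if support > 0.3 else "unsupported"  # arbitrary cutoff
    print(f"{verdict} (score={support:.2f}): {fact}")
```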

pheeney

I'd be curious too. It sounds like standard RAG, just run in the opposite direction: Summary > Facts > Vector DB > Facts + Source Documents to an LLM, which scores them to confirm the facts. The source documents would need to be in natural language to work well with vector search, right? I'm not sure how they would handle that part, i.e. ensuring something like "Patient X was diagnosed with X in 2001" exists for the vector search to confirm, without using LLMs that could hallucinate at that step.
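Concretely, I'm picturing something like this sketch: a toy word-overlap retriever in place of the vector DB, and a stubbed scoring function in place of the verification LLM call. All of the names (SourceChunk, score_with_llm, the chart IDs) are hypothetical, not Mayo's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class SourceChunk:
    doc_id: str
    page: int
    text: str

def retrieve(fact: str, chunks: list[SourceChunk], k: int = 2) -> list[SourceChunk]:
    """Toy retriever: rank chunks by word overlap with the fact (stand-in for a vector DB)."""
    fact_words = set(fact.lower().split())
    return sorted(chunks, key=lambda c: -len(fact_words & set(c.text.lower().split())))[:k]

def score_with_llm(fact: str, evidence: list[SourceChunk]) -> float:
    """Stub for the verification step: ask a model whether the evidence entails
    the fact and return a score in [0, 1]. Here: crude word containment."""
    combined = " ".join(c.text.lower() for c in evidence)
    return 1.0 if all(w in combined for w in fact.lower().split()) else 0.0

chunks = [
    SourceChunk("chart_123", 4, "Patient was diagnosed with hypertension in 2001."),
    SourceChunk("chart_123", 9, "Lisinopril started in 2002; dose adjusted in 2003."),
]

summary_facts = [
    "patient was diagnosed with hypertension in 2001",
    "patient had surgery in 2005",
]

for fact in summary_facts:
    evidence = retrieve(fact, chunks)
    score = score_with_llm(fact, evidence)
    cites = [f"{c.doc_id} p.{c.page}" for c in evidence]
    print(f"score={score:.1f} fact={fact!r} evidence={cites}")
```

Keeping the doc ID and page on each chunk is also what would let the output point back to specific pages of a PDF, as asked above.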

unstatusthequo

This already exists in legal AI. Merlin.tech is one product that provides citations for query results to validate the LLM output.

eightysixfour

Plenty of tools provide citations, but I don't think that's exactly what Mayo is describing here. It looks like they also, after generation, look up the responses, extract the facts, and score how well they matched.
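Roughly this kind of post-generation loop, where extract_facts and is_supported are stand-ins for LLM calls and the source lookup, and the sample record/answer are made up for illustration:

```python
import re

def extract_facts(answer: str) -> list[str]:
    """Stand-in for an LLM call that decomposes an answer into atomic facts;
    here we just split on sentence boundaries."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]

def is_supported(fact: str, record: str) -> bool:
    """Stand-in for the lookup-and-verify step against the source documents."""
    return all(w in record.lower() for w in re.findall(r"[a-z0-9]+", fact.lower()))

record = "patient diagnosed with asthma in 1998 and prescribed albuterol"
answer = "Patient diagnosed with asthma in 1998. Patient prescribed prednisone."

facts = extract_facts(answer)
supported = [f for f in facts if is_supported(f, record)]
print(f"faithfulness: {len(supported)}/{len(facts)}")
for f in facts:
    print(("OK   " if f in supported else "MISS ") + f)
```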

hn_throwaway_99

Can someone more versed in the field comment on whether this is just an ad, or actually something unique or novel?

What they're describing as "reverse RAG" sounds a lot to me like "RAG with citations", which is a common technique. Am I misunderstanding?

htrp

at that point it becomes a search problem?