Legalyze.ai: Review Medical Records with AI
32 comments
· January 24, 2025

DeadFred
chrisford
Thanks for your comment! Medical chronologies are already very common across personal injury law and other legal practice types. The problem is that paralegals are spending days, if not weeks, combing through thousands of medical records and entering them into an Excel spreadsheet. This automates most of that task.
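For a sense of the shape of the output (a toy sketch, not our actual pipeline; the regex is a stand-in for the extraction model):

    import csv
    import re

    DATE_RE = re.compile(r"(\d{1,2}/\d{1,2}/\d{4})\s+(.*)")

    def build_chronology(record_texts, out_path="chronology.csv"):
        # record_texts: list of (source_filename, extracted_text) pairs
        rows = []
        for source, text in record_texts:
            for line in text.splitlines():
                m = DATE_RE.match(line.strip())
                if m:
                    date, event = m.groups()
                    month, day, year = map(int, date.split("/"))
                    rows.append(((year, month, day), date, source, event))
        rows.sort()  # chronological order across all source records
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Date", "Source", "Event"])
            writer.writerows(row[1:] for row in rows)

The hard part, and what the product actually sells, is making that extraction step hold up across thousands of messy, scanned, inconsistently formatted records.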
ttyprintk
Actually, Siemens Healthineers could do three things:
1. Style transfer from one standard to another. Would a particular history have resulted in this FHIR R5 data when it was written under FHIR R4? (Rough sketch after this list.)
2. The ability to incorporate feedback for de-identification. In court, that must be explained as an accidental parallel construction rather than an oversight flouting GDPR; or worse, a hallucination substituting in training data. Siemens could build a product that proves as much.
3. Automatically cache pre-filled chronologies. This is easier than the other two, and it’s what I’d expect to land on someone’s desk. By pre-forming the (usually expensive) paralegal material, a doctor or administrator can preview the legal case they’re up against. Alternatively, a plaintiff can claim that a hospital or doctor was aware of the risk of a pre-existing condition. Siemens mostly speaks in risk.
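For idea 1, one shape this could take is a mechanical round-trip check; a minimal sketch, assuming dict-shaped resources, where the mapping table is invented for illustration and is not a real R4-to-R5 field diff:

    # Could this R5 data have been produced from a history written
    # under R4? Backport it through a version map and diff against
    # the original R4 history; anything left over needs human review.
    R5_TO_R4 = {
        ("Encounter", "classCode"): "class",  # hypothetical rename
    }

    def backport(r5_resource):
        kind = r5_resource.get("resourceType")
        return {R5_TO_R4.get((kind, key), key): value
                for key, value in r5_resource.items()}

    def unexplained_fields(r5_resource, r4_history):
        r4_view = backport(r5_resource)
        return {k: v for k, v in r4_view.items()
                if r4_history.get(k) != v}

An empty result says the R5 record is a plausible translation of the R4 history; a non-empty one is exactly the residue a reviewer would want flagged.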
getwiththeprog
And the companies that can afford to hire experienced humans keep winning lawsuits. Until the judiciary is automated.
ttyprintk
These cases often bring in medical records experts when one side disputes the scope or completeness of their access. That’s where I’d expect AI to debut.
the_sleaze_
Almost zero chance this technology doesn't break HIPAA law somehow.
> our data is stored securely and encrypted on servers under Legalyze.ai's control.
If you found out a law firm is sending your medical records to a third party without any prior BAA or consent....
alwa
Then again, if it's your law firm, the one representing you, and doing so lets them build your case at half the cost in research hours... or lets them figure out whether they can take your case where it might not have been worth their time before? I don't know much about how these things work, but I could see people I know consenting to such a thing.
ttyprintk
Today, both sides already have some level of access to medical records; the extent of that access could be a judge’s decision.
And when it comes to medical records for people unrelated to the lawsuit, using de-identified cases is not a violation of HIPAA. The question is: can we use AI on the full cases and de-identify afterward? Or must we use AI on already de-identified cases, even though the de-identification process can mess with the chronology?
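The usual answer to the chronology problem is date shifting rather than date removal: one consistent random offset per case keeps the intervals between events intact while destroying the absolute dates. A minimal sketch (illustrative only, not compliance advice; whether shifted dates satisfy Safe Harbor versus requiring expert determination is exactly the open question):

    import random
    import re
    from datetime import datetime, timedelta

    DATE_RE = re.compile(r"\d{1,2}/\d{1,2}/\d{4}")

    def shift_dates(case_text, case_id):
        # One consistent offset per case: relative timing survives,
        # absolute dates (an identifier) do not.
        offset = timedelta(days=random.Random(case_id).randint(-365, 365))
        def shift(match):
            d = datetime.strptime(match.group(), "%m/%d/%Y") + offset
            return d.strftime("%m/%d/%Y")
        return DATE_RE.sub(shift, case_text)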
agieocean
And HIPAA law actually matters so I don't think this would fly under the radar
marcinzm
Even if it were covered, it would definitely fly under the radar for the next four years at least.
tiahura
Only “covered entities” and their “business associates” fall under HIPAA. Most firms don’t meet the definition.
chrisford
This is correct. Also, when a client goes under contract with a law firm, they often sign HIPAA-compliant medical-release documentation.
Some software providers in the AI medical space are conservative here and have customers sign a BAA directly on signup.
alwa
I guess the profit is where the hazards lie! Skimming your landing page, it sounds like you're making meaningful efforts to compensate for the aspects of LLMs that horrify us around here: you're primarily providing direct references to the "needles" in the case-record haystack rather than synthesized "insights", and serving the legal professionals themselves with a specific mundane research task rather than playing lawyer for consumers or purporting to produce polished results.
Better you than me, but the very best of luck to you, and congratulations on your launch.
chrisford
Yes, we are not using the dreaded RAG to accomplish this task, although we do have it available if a customer wants to chat with their case documents.
ttyprintk
I get why you’d want to distinguish yourself from competition that relies heavily on RAG, but “chatting” is putting it mildly.
I’d use RAG with prompts like “what was billed but did not produce a record”. Or rehydrating the context of, for example, the hospital’s own model for predicted drug interactions. I could see it being lucrative if that model produced those results without traceability.
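That billed-but-no-record query is really just a join once extraction is done; no model is needed for the comparison itself (field names made up for illustration):

    def billed_without_record(billing_rows, record_index):
        # billing_rows: [{"date": "04/02/2024", "cpt": "99213"}, ...]
        # record_index: set of (date, cpt) pairs extracted from records
        return [b for b in billing_rows
                if (b["date"], b["cpt"]) not in record_index]

The RAG step earns its keep in building record_index out of unstructured records, not in answering the question.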
toasteros
Good lord. Two areas we really do not want AI in its current form touching somehow put together in one convenient little package.
chrisford
It's a productivity AI agent meant for workers who were already dealing with medical records and creating medical chronologies manually, primarily in the legal space.
Yoric
Do I understand correctly that this is an AI that will be used to sue or otherwise take law-related decisions? What can go wrong?
chrisford
No, everything is human-monitored. There is a startup working on AI arbitration though - https://judgeai.space/
threecheese
Politely, stick it.
chrisford
Why? We are just another AI agent trying to help humans automate tedious work.
threecheese
Specifically, because you will require vast human health records to train your model, and that model will interact with my health records, and I trust you just about as far as I can throw you as a steward of my or the public’s data. You will intentionally or accidentally expose me to risk, with no meaningful punishment.
Now as a person, of course I’m sure you are kind and responsible and we’d have a lovely lunch (and I mean that). It sounds like a fascinating problem to solve. As a group though, acting within a regulatory regime that doesn’t value privacy at all - excepting that one law from 1996 with more holes than my socks - you just can’t be trusted.
Would you claim personal responsibility for any downside risks your product introduces, in a “like for like” manner with respect to the actual damage caused? Like if a doctor relying on your product caused a death?
giantfrog
Looks like an efficient way to lose a lawsuit.
chrisford
It finds details that are hard for humans to see. It won't get fatigued when looking through thousands of records, and it stays consistent.
tiahura
What can this do that a quality prompt w/ ChatGPT or Claude can’t do?
chrisford
ChatGPT or Claude can't review thousands of medical records at once.
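A single prompt can't hold them, so under the hood this ends up map-reduce-shaped: extract per-batch facts, then merge. A toy sketch, not our actual pipeline; llm_extract is a placeholder for whatever model call does the extraction:

    def review_records(records, llm_extract, batch_size=20):
        facts = []
        for i in range(0, len(records), batch_size):
            # one bounded call per batch keeps each request inside
            # the model's context window
            facts.extend(llm_extract(records[i:i + batch_size]))
        # assumes extracted facts carry ISO dates, so lexicographic
        # sort is chronological
        facts.sort(key=lambda f: f["date"])
        return facts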
doctorpangloss
Ha ha, the problem is that perps have no insurance, and victims are driving around with 30k/15k caps and not much documentation.
I remember working in the operating room space. I noticed some sites had cameras built into their lights. I asked if they wanted to be able to capture images timestamped to entries in the case record. Everyone was adamant they did not want that as it would open them up to too much subjective liability.
We have a hard time capturing enough information to be useful and to understand things. I didn't realize it before now, but AI is going to make that much, much worse. In aerospace we had yearly training from our insurance company's lawyers on how to record information into our systems, format emails, etc. It was interesting getting to interface with high-powered Lloyd's lawyers, but also surreal/kinda wrong.
You probably mean well. Something something road something something good intentions.
Meta (not the company) AI product idea from this: AI to skim your reports and make sure there is nothing that a 'legal AI' could reconstruct into a lawsuit narrative. It scans all medical records before saving and recommends verbiage adjustments. (You're welcome, dev at Oracle reading this who needs to respond to a 'Larry needs medical software value propositions for AI' email.) Begun the AI wars have. Why stop there with my above examples? Siemens needs a legal AI for PLM at the least, since manufacturing-engineer defect notes could definitely be interpreted in legally sketchy ways and so should be auto-scanned/rejected/massaged. One weird note on a part that went through MRB/NCR and ended up in a crashed plane could be trouble.
Meta (not the company) AI product idea from this. AI to skim your reports and make sure there is nothing that a 'legal AI' could reconstruct into a lawsuit narrative. It scans all medical records before saving and recommends verbiage adjustments. (You're welcome dev at Oracle reading this needing to respond to an 'Larry needs medical software value propositions for AI' email). Begun the AI wars have. Why stop there with my above examples. Siemens needs a legal AI for PLM at the least as manufacturing engineer defect notes could definitely be interpreted in legally sketchy ways so should be auto scanned/rejected/massaged. On weird note on one part that went through MRB/NCR that ended up in a crashed plane could be trouble.