
Insurance for AI: Easier Said Than Done

baobun

> "can you really trust accounting software not to make mistakes? Won’t there be edge-cases in mortgage underwriting that software might miss, but an experienced underwriter would catch?" The proof is in the pudding: the world runs on software now.

Indeed.

https://en.wikipedia.org/wiki/Post_office_scandal

janice1999

See also the Australia Robodebt scheme, which also claimed many lives: https://en.wikipedia.org/wiki/Robodebt_scheme

jlarocco

Wow, those two articles are as Kafka-esque as it gets. What a nightmare it must have been for the people caught up in them.

doctorpangloss

Why did they defend Horizon though? The Wiki article offers a POV from someone who was functionally powerless:

> We have to be careful, that we are not creating a cottage industry that damages the brand and makes clients like the DWP and the DVLA think twice. The DWP would not have re-awarded the Post Office card account contract, which pays out £18 billion a year, in the last month if they thought for a minute that this computer system was not reliable

I know that's something someone said, but is it true? So what if a lot of people say it? Nobody really knows what does or doesn't drive sales. If sales were all that mattered, they wouldn't have done the IT upgrade at all.

People use shitty software all the time.

> The new Horizon project became the largest non-military IT contract in Europe.

Also... really doubt that is true.

The Horizon IT report's first volume "will focus on redress (compensation) and the human impact of the Horizon scandal." Okay. But why did people feel so strongly about the technology in the first place? Who gives a fuck about bugs?

BrenBarn

The fact that the tech E&O insurance market is small just says to me that we're not doing enough to police and punish harmful behavior by tech companies. The risk for companies is low because we're not willing to make things more dangerous for them.

bcoates

I don't think the adverse selection problem is anywhere near as serious as the author seems to think. Insurance companies aren't selling superior knowledge of their clients' situation; they're selling against their clients' massively different risk appetite.

I expect home insurance to cost more than it pays out (both in median and mean terms) but I take the negative-value deal to protect against rare financially ruinous outcomes.

Quality underwriting and minimizing adverse selection give an insurance company a massive advantage over competing insurers, but they don't make or break the market on their own.

I'm also not sold on model-provider diversity as the measure of risk diversity; surely most of the risk comes from application errors, not from failures of "safety" tuning in the models (which is mostly about preventing LLMs from saying things you wouldn't want in the newspaper, and I assume AI E&O isn't interested in insuring reputation risk).

harrall

I also think the author is looking at it incorrectly.

E&O insurance exists because the client is expecting accuracy, but AI products do not bring any material expectation of accuracy (yet). If there is an error, that is currently part of the product.

There are, of course, cases of material damage, e.g. an AI in a self-driving vehicle hitting someone or something, that would be insurable, but that's more about insuring that specific industry than about E&O.

0xWTF

Gave a Waymo engineer a ride in my FSD Tesla around the Bay recently. The conversation was very illuminating. It seems to me there should be pretty clear corollaries in more traditional engineering markets. Does Black and Veatch take out E&O insurance? Did John McShain take out E&O for the Pentagon? Did Tishman take out E&O on the World Trade Center? Did the University of California take out E&O on the Manhattan Project? Does Boeing or Lockheed take out E&O on aircraft?


AndrewKemendo

Ultimately this will be like everything else in law (insurance is law by other means) and will be a product of court rulings which means it’s a whack-a-mole game for lenient jurisdictions (like money laundering).

Whichever jurisdiction hears the case, in whatever "justice" system, will set the precedent for all others to reference, based on their alignment with the jurisdiction that uses state power to enforce a resolution.

I expect people will host, or make remotely available, systems that fall outside the acceptable limits of whatever regional jurisdiction's laws apply.

As usual, pirates and the powerful will steer around those.

blitzar

Real-time bidding a la online ad impressions.

{Task, model, coverage} --> bid.

It can be circular: AI all the way down, with an insurer-side AI doing the evaluation and bidding.
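A minimal sketch of what that {task, model, coverage} → bid mapping might look like on the insurer side. Everything here is an assumption for illustration: the task names, the toy per-task error-rate priors, and the simple expected-loss-plus-margin pricing are all hypothetical, not a real underwriting model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoverageRequest:
    task: str               # e.g. "contract-review" (hypothetical task label)
    model: str              # the underlying AI model being insured
    coverage_limit: float   # maximum payout in dollars

# Illustrative per-task error-rate priors (pure assumption).
ERROR_RATE = {
    "invoice-reconciliation": 0.002,
    "contract-review": 0.01,
}

def bid(req: CoverageRequest, load_factor: float = 1.5) -> Optional[float]:
    """Return a premium bid in dollars, or None to decline the risk."""
    p_loss = ERROR_RATE.get(req.task)
    if p_loss is None:
        return None  # unknown task: decline rather than misprice it
    expected_loss = p_loss * req.coverage_limit
    return expected_loss * load_factor  # expected loss plus a margin

quote = bid(CoverageRequest("contract-review", "some-model", 1_000_000))
# 0.01 * 1_000_000 * 1.5 = 15000.0
```

The interesting design question, as the reply below notes, is not the bid function itself but where those loss priors come from and how long the insurer must hold reserves before learning whether the price was right.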

WJW

It wouldn't be too hard to set up a marketplace like that, but how do you know your bot is good enough to not accidentally bankrupt the business it's bidding on behalf of? Ads have this wonderful property that individual ads are pretty cheap. If you mis-bid to show an ad to someone not interested it won't cost you very much. Don't do it too often of course, but a few mistakes here and there are fine. In addition, they are bid on and delivered within a second or so at most.

Insurance is very different. Nobody is looking to insure the unit test they vibe coded late Friday afternoon, rather it would be the multi million dollar "we replaced all our accountants with a chatGPT based system" decisions. Getting one of those decisions wrong will absolutely be a problem for your AI-insurance company. In addition, in most cases you won't even know if you were right or wrong until many years later so you have to keep reserves locked up for much much longer.

financetechbro

This would add exponential complexity to the already difficult challenge of insuring AI

doctorpangloss

I don't know. Lots of words to say, "Tort reform for everyone is the right answer, but it hurts my bottom line, so I can't say the intellectually honest and obvious thing."