
Underwriting Superintelligence

29 comments · July 15, 2025

janalsncm

> As insurers accurately assess risk through technical testing

If that’s not “the rest of the owl” I don’t know what is.

Let’s swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.

1. The premium one should pay depends on the expected loss, which is the damage from the event multiplied by the probability of it occurring (a toy sketch follows this list). However, quantifying the damage term is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, the damage might be the destruction of all of humanity, if we believe the doomers.

2. Similarly, the probability term is basically impossible to quantify. What is the chance of an event that has never happened before? In fact, having “insurance” against such a thing will likely create a moral hazard, encouraging companies to take even bigger risks.

3. On a related point, trying to frame existential losses in financial terms doesn’t make sense. This is like trying to take out an insurance policy that will protect you from Russian roulette. No sum of cash can correct that kind of damage.
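To make the arithmetic concrete, here is a toy sketch of that pricing formula (all numbers are made up for illustration):

```python
# Toy sketch of actuarial pricing: premium ~ probability * damage,
# times a loading factor for the insurer's costs and profit.
# All numbers below are hypothetical.

def pure_premium(p_event: float, damage: float, loading: float = 1.25) -> float:
    """Expected loss times a loading factor."""
    return p_event * damage * loading

# An ordinary catastrophe: a 1-in-200-year event causing a $10B loss.
print(pure_premium(1 / 200, 10e9))  # ~$62.5M/year: plausibly insurable

# The problem above: once the damage term approaches "all of humanity",
# the premium diverges past any insurer's capacity, and p_event is a guess.
print(pure_premium(1e-4, 1e15))     # ~$125B/year: uninsurable
```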

brdd

Thanks for the thoughtful response! Some replies:

1. Someone is always carrying the risk; the question is who should. We suggest private markets should price and carry the first $10B+ before the government backstop kicks in (sketched below). That incentivizes them to price and manage the risk.

2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc.

3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits and standards would help reduce catastrophes, and in turn the likelihood of many existential scenarios.
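To illustrate the structure we have in mind, here is a minimal sketch. The $10B attachment point is from point 1 above; the developer retention (the "copay") and the example losses are hypothetical:

```python
# Hypothetical layered payout for a single loss: the developer retains a
# copay-style deductible, a private insurance layer carries the next $10B,
# and a government backstop absorbs anything beyond that.

def allocate_loss(loss: float,
                  retention: float = 100e6,      # developer keeps the first $100M (assumed)
                  private_layer: float = 10e9):  # private market carries the next $10B
    developer = min(loss, retention)
    insurer = min(max(loss - retention, 0.0), private_layer)
    government = max(loss - retention - private_layer, 0.0)
    return {"developer": developer, "insurer": insurer, "government": government}

print(allocate_loss(50e6))   # small incident: the developer eats it all
print(allocate_loss(2e9))    # mid-size event: mostly the private layer
print(allocate_loss(50e9))   # catastrophe: the backstop absorbs the excess
```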

xmprt

This only works if the insured parties face negative consequences when things go wrong. If all the negative consequences are borne by society, and there are no regulations that impose that burden on the companies building AI, then we'll have unchecked development.

brdd

We agree! Unchecked development could lead to disaster. Insurers can insist on adherence to best practices, incentivizing safe development. They can also clarify liability and cover most (but not all) of the risk, leaving the developer on the hook for a portion of it.

evertedsphere

> But we don’t want medical device manufacturers or nuclear power plant operators to move fast and break things. AI will quickly get baked into critical infrastructure and could enable dangerous misuse.

nobody will put a language model in a pacemaker or a nuclear reactor, because the people who would be in a position to do such things are actual doctors or engineers aware both of their responsibilities and of the long jail term that awaits them if they neglect them

this inevitabilism, to borrow a word from another submission earlier today, about "AI" ending up in critical infrastructure and the important thing being to figure out how to do it right is really quite repugnant

sure, yes, i know about the shitty kinda-explainable statistical models that already control my insurance premiums or likelihood of getting policed or whatever

but why is it a foregone conclusion that people are going to put llms into things that materially affect my life (and implicitly rightly so, since the framing lets it pass unquestioned!) on the level of it ending due to a stopped heart or a lethal dose of radiation

blibble

> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century.

I never understood this argument

as a non-USian: I'd prefer to be under the Chinese boot rather than having all of humanity under the boot of an AI

and it is certainly no reason to do everything we possibly can to try and summon a machine god

socalgal2

> I'd rather be under the Chinese boot than having all of humanity under the boot of an AI

Those are not the options being offered. The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?

> certainly no reason to try to increase the chance of summoning a machine god

The argument is that this is inevitable. If it's possible to make AGI, someone will eventually do it. Does it matter who does it first? I don't know. Yes, making it happen faster might be bad. Waiting until someone else does it first might be worse.

blibble

> The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?

given Elon's AI is already roleplaying as Hitler and constructing scenarios on how to rape people, how much worse could the Chinese one be?

> The argument is that this is inevitable.

which is just stupid

we have the agency to simply stop

and certainly the agency to not try and do it as fast as we possibly can

mattnewton

> we have the agency to simply stop

This is worse than the prisoner's dilemma: “we get there, they don't” is the highest payout for the decision-makers who believe they will control the resulting superintelligence.
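A toy payoff matrix (numbers invented for illustration) shows why: unlike the classic dilemma, the temptation payoff here is the single best outcome for whoever expects to control the result, so racing dominates.

```python
# Toy payoff matrix for the race dynamic described above (values hypothetical).
# Rows are our choice, columns are theirs; payoffs are for "us".

payoff = {
    ("race", "stop"):  10,  # we alone get there: jackpot, as the decision-makers see it
    ("race", "race"):  -5,  # unsafe race to the bottom
    ("stop", "stop"):   2,  # coordinated restraint
    ("stop", "race"): -10,  # they alone get there: worst case
}

for ours in ("race", "stop"):
    for theirs in ("stop", "race"):
        print(f"we {ours}, they {theirs}: {payoff[(ours, theirs)]}")

# "race" strictly dominates "stop" under these payoffs:
# 10 > 2 if they stop, and -5 > -10 if they race.
```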

socalgal2

"We" do not as you can not control 8 billion people

hiAndrewQuinn

If you financially penalize AI researchers, either with a large lump sum or in a way that scales with their expected future earnings (take your pick), and pay the proceeds to the people who put together the very cases that lead to the fines being levied, you can very effectively freeze AGI development.

If you don't think you can organize international cooperation around this, you can simply put such people on some equivalent of an FBI-style Most Wanted list and pay anyone who comes forward with information (and maybe anyone who gets them within your borders) as well. If a government chooses to wave its dick around like this, it could easily cause other nations to copy the same law, thus instilling a new global Nash equilibrium where this kind of scientific frontier research is verboten.

There's nothing inevitable at all about that. I hesitate to even call such a system extreme, because we already employ systems like this to intercept, e.g., high-level financial conspiracies via things like the False Claims Act.

socalgal2

In my world there are multiple countries that each have an incentive to win this race. I know of no world where you can penalize AI researchers across international boundaries, and no reason to believe your scenario could ever play out. You're dreaming if you think you could actually get all the players to cooperate on this. It's like expecting the world to come together on climate change. It's not happening and it's not going to happen.

Further, it doesn't take a huge lab to do it. You can do it at home. It might take longer, but there's a 1.4kg blob in everyone's head as proof of concept, and it does not take a data center.

MangoToupe

> The options are under the boot of a Western AI or a Chinese AI.

This seems more like fear-mongering than anything based on reasoning I've been able to follow. China tends to keep control of its industry, unlike the US, where industry tends to control the state. I emphatically trust the Chinese state more than our own industry.


gwintrob

I'm biased because my company (Newfront) is in insurance but there are a lot of great points here. This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."

There's a mega trend of value concentrating in AI (and all the companies that touch/integrate it). Makes a ton of sense that insurance premiums will flow that direction as well.

blibble

> This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."

and by 2040 it will be $5000 trillion!

and by 2050 it will be $5000000 quadrillion!

gwintrob

Ha, of course. A lot easier to forecast in a spreadsheet than actually make this happen. Based on the progress in AI in the past couple years and the capabilities of the current models, would you bet against that growth curve?

blibble

yes, there's not $5 trillion of dumb money spare

(unless softbank has been hiding it under their mattress)

lowsong

This article is a bizarre mix of center-right economic ideas and completely unfounded assumptions about the nature of AI technology, to the point where I'm genuinely not sure if this is intended as parody or not.

> We’re navigating a tightrope as Superintelligence nears.

There is no evidence we're anywhere near "superintelligence" or AGI. There is no evidence any AI tools are intelligent in any sense, let alone "superintelligent". The only reference for this, given much later, is to https://ai-2027.com/ which is no more than fan fiction. You might as well have cited Terminator or The Matrix as evidence.

The only people actually claiming any advancement towards "superintelligence" or "AGI" directly financially gain from people thinking that it's right around the corner.

> If the West slows down unilaterally, China could dominate the 21st century.

Is this casual sinophobia intended to appeal to a particular audience? I can't see what purpose this statement, and others like it, serves other than to try to frame this as "it's us or them".

> Faster than regulation: major pieces of regulation, created by bureaucrats without technical expertise, move at glacial pace.

This is a very common right-wing viewpoint: that regulation, government oversight, and "red tape" are unacceptable to business. It forgets that building codes, public safety regulations, and workers' rights all stem directly from government regulation. This article goes out of its way to frame this as obvious, like a simple fact unworthy of introspection.

> Enterprises must adopt AI agents to maintain competitiveness domestically and internationally.

There is no evidence this is the case, and no citation is even attempted.

Animats

For this to work, large class actions are needed. If companies are liable for large judgements, companies will insure against them. If not, companies will not try to avoid harms for which they need not pay.

choeger

Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?

So far, I have seen language models that, quite impressively, translate between different languages, including programming languages and natural-language specs. Yes, these models draw on vast (compressed) knowledge from pretty much all of the internet.

There are also chain of thought models, yes, but what kind of actual intelligence can they achieve? Can they formulate novel algorithms? Can they formulate new physics hypotheses? Can they write a novel work of fiction?

Or aren't they actually limited by the confines of what we as a species already know?

roenxi

You seem to be part of a trend where most humans are defined as unintelligent - there are remarkably few people out there capable of formulating novel algorithms or physics hypotheses. Novelists are a little more common, if we admit the unreadable slop produced by people who really should choose careers other than writing. It speaks to the progress machines have made that traditional tests of intelligence, like holding a conversation or doing well on an undergraduate-level university test, apparently no longer measure anything of importance related to intelligence.

If we admit that even relatively stupid humans show some levels of intelligence, as far as I can tell we've already achieved artificial intelligence.

yahoozoo

> Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?

no

bwfan123

> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century

I stopped reading after this. First, there is no evidence of Superintelligence nearing, or even any clear definition of what "Superintelligence nearing" means. This is the classic "assume the sale" gambit, with fear-mongering in its appeal.

brdd

The "Incentive Flywheel" of AI: how insurance unlocks secure Al progress and enables faster AI adoption.

yahoozoo

With no skin in the game, either it will be cool if superintelligence happens, or it doesn't happen and I just get to enjoy some schadenfreude. Either all of these people are geniuses or they're Jonestown members.

muskmusk

I love it!

Finally some clear thinking on a very important topic.