When negative events affect people's lives, we reasonably expect insurance markets to do a good job of compensating individuals. As artificial intelligence (AI) models develop new and more varied capabilities, and concerns about their catastrophic risks increase, some may point to insurance as a potential method to help make AI safe. In this blog post, we explain why traditional insurance models fail to address catastrophic risks from AI.
Most decisions in life involve potential risks and rewards. In some cases, we turn to insurance to seek protection against misfortune: we protect our cars and homes from accidents, our holidays from unexpected cancellations, and our health from illness. There are cases, however, when risks affect not just individuals but society as a whole: natural disasters, financial crises, pandemics, climate catastrophes, and so on. The risks posed by advanced technologies such as AI also fall into this category of societal-scale risks.
Modern economics tells us that markets are efficient at setting prices for goods and services that are bought and sold regularly. The same is true of risks: markets price risks more efficiently when those risks occur predictably and frequently enough to generate reliable quantitative estimates. However, there are cases in which markets struggle to price risks accurately: when negative outcomes arise from rare events, have widespread impacts, or result from externalities.
Rare events pose challenges for insurance companies when pricing policies because there is limited information with which to assess their potential costs. For instance, a pandemic might have a very large or a relatively small impact on GDP, depending in part on how authorities respond to it, so its costs cannot be accurately estimated in advance.
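To make this concrete, here is a minimal, purely illustrative sketch. The probabilities and loss figures are made up for the example, not estimates of any real event; the point is only how far apart two equally defensible guesses about a rare event can leave the "fair" premium.

```python
# Illustrative only: made-up numbers showing how expected-cost pricing
# breaks down when a rare event's frequency and severity are uncertain.

def fair_annual_premium(annual_probability: float, loss_if_it_happens: float) -> float:
    """Expected annual loss = probability * severity (the actuarially 'fair' premium)."""
    return annual_probability * loss_if_it_happens

# Two equally defensible guesses about a rare, pandemic-scale event:
low_estimate = fair_annual_premium(annual_probability=1 / 500, loss_if_it_happens=2e11)   # $200bn loss
high_estimate = fair_annual_premium(annual_probability=1 / 50, loss_if_it_happens=5e12)   # $5tn loss

print(f"Low-end fair premium:  ${low_estimate:,.0f} per year")
print(f"High-end fair premium: ${high_estimate:,.0f} per year")
print(f"Ratio between the two: {high_estimate / low_estimate:.0f}x")
```

With frequent, well-understood risks such as car accidents, the two guesses would converge on the data. For rare events, they can differ by a factor of hundreds, which is why no meaningful market price emerges.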
Similarly, if risks materialize for nearly everyone at once, it is often not possible for insurance companies to account for that possibility and honor every single claim. Take the devastation brought about by war in a given country as an example. In this case, insurance cannot simply make the costs of war go away.
So-called 'externalities' also make market prices less helpful for society. By 'externalities,' we refer to the consequences of individual or group behaviors that affect other parties and are not reflected in market prices. Consider, for instance, a factory whose pollutants cause residents in the area to pay for asthma treatments and farmers to experience crop damage from acid rain. When markets do not penalize companies or individuals for behaviors that can negatively affect others, people make decisions without fully considering all potential implications of their actions.
Catastrophic risks arising from the development of AI systems fall under two categories of risks that insurance markets struggle to price: they arise from externalities, and they materialize for nearly everyone simultaneously.
Firstly, let us examine the problem of externalities. Although the development of advanced AI models contributes to aggregate risk, markets on their own impose no penalty for that contribution. This, in turn, encourages companies to pursue much riskier development.
It would be tempting to argue that this externality could be adequately mitigated by forcing AI developers to take out appropriate insurance against catastrophic outcomes. In doing so, companies would be forced to confront the costs of their potential contribution to aggregate catastrophic risk. However, catastrophic events occur too infrequently for us to be able to price their expected costs. Without this information, insurers cannot properly assess how much risk each person or company should bear to promote appropriate risk-taking.
Forcing a market to price the risk of these events would, in fact, create a strong incentive for companies to sell insurance products that they could have no reasonable expectation of fulfilling. In other words, it would create the illusion that risks are being mitigated, but the insurance price would not reflect the actual underlying risk, leading to a false sense of security.
Secondly, large catastrophic risks are equivalent to everyone making an insurance claim simultaneously. When everyone experiences the negative impact at once, it becomes impossible for the insurance company to pay out in full. This means large aggregate risks are uninsurable in practice: insurers would either offer very limited coverage or underprice risk, mistakenly assuming they could handle the stream of insurance claims.
One example of this phenomenon was the near-failure of AIG, a large American insurer, during the financial crisis of 2007/08. AIG had offered narrow protection for specific financial products, believing that the frequency and costs of failures in these products were well understood. As the banking crisis unfolded, an excessively large number of claims arrived simultaneously because the underlying events were correlated under especially adverse market conditions. The company could not fulfill its obligations in full and had to be rescued by the US government. This case demonstrates how even limited insurance coverage can collapse when it is linked to a broader risk of system-wide failure.
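A rough sketch of why correlation, rather than the average claim rate, is the problem: the simulation below uses arbitrary, made-up figures (10,000 policyholders, a 1% annual claim probability, reserves at 1.5 times the expected annual payout) purely to illustrate the mechanism, and it assumes NumPy is available.

```python
# Illustrative Monte Carlo sketch (made-up numbers): the same average claim
# rate is easy to insure when claims are independent, and ruinous when they
# arrive together in a single system-wide event.
import numpy as np

rng = np.random.default_rng(0)

POLICYHOLDERS = 10_000
CLAIM_PROB = 0.01        # average annual claim probability per policyholder
PAYOUT = 100.0           # payout per claim
RESERVES = 15_000.0      # capital, 1.5x the expected annual payout of 10_000
YEARS = 100_000          # simulated years

# Independent claims: each policyholder claims on their own.
independent_claims = rng.binomial(POLICYHOLDERS, CLAIM_PROB, size=YEARS)

# Perfectly correlated claims: with the same 1% probability, everyone claims at once.
correlated_claims = np.where(rng.random(YEARS) < CLAIM_PROB, POLICYHOLDERS, 0)

def ruin_rate(claims_per_year: np.ndarray) -> float:
    """Fraction of simulated years in which total payouts exceed reserves."""
    return float(np.mean(claims_per_year * PAYOUT > RESERVES))

print(f"Independent claims: insurer ruined in {ruin_rate(independent_claims):.2%} of years")
print(f"Correlated claims:  insurer ruined in {ruin_rate(correlated_claims):.2%} of years")
```

In both cases the expected annual payout is identical; the only difference is that the correlated losses arrive all at once, which is exactly the structure of a banking crisis or a system-wide AI failure.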
Attempting to insure against the risks of AI systems could lead to a similar scenario. Suppose a foundational AI model is adopted across many industries, from healthcare diagnostics to autonomous vehicles and supply chain management. While the model generally produces reliable outputs, it occasionally fabricates data that feeds into business-critical analyses. This eventually leads to large-scale failures across multiple industries.
A specialized insurer could then face an unsustainably large number of simultaneous claims and eventually fail. This scenario is relatively benign compared to the full potential risks of AI technology, yet even here insurers could be completely wiped out and claims left unhonored.
In conclusion, traditional insurance models lack the capacity to handle catastrophic risks from AI. Forcing markets to price these risks would only create an incentive for companies to sell insurance products that they could have no reasonable expectation of fulfilling. This is because the financial resources required to cover such large-scale losses are beyond what any single insurance company — or even a collective of them — can provide.
Instead, handling catastrophic risks from AI largely depends on government officials deciding which risky activities should be legally permitted and under what circumstances. Regulators can and often do impose restrictions on how a given activity is developed, and they carefully restrict its deployment to the wider public until certain safety thresholds have been met and safeguards are in place.