Artificial intelligence has become a contentious issue for global insurers as its use spreads across sectors. With so many businesses integrating AI tools, players in the insurance industry are reassessing their exposure to potential liabilities. As the technology evolves, novel and unforeseen risks keep emerging, and some of the largest names in insurance are now hesitant to cover AI-related losses.
Historically, the insurance sector has adapted to technological advances by adjusting coverage options and pricing premiums in line with perceived risk. AI, however, introduces unfamiliar complexity: there is little historical loss data on AI failures, and assigning blame when an AI system errs is often ambiguous. The industry's usual approach of revising policy terms may not translate neatly to AI, given its unique implications and the variety of ways these systems are deployed.
What Actions Are Insurers Taking?
AIG, WR Berkley, and Great American are seeking approval from U.S. regulators for exclusions that would limit their liability for risks tied to AI applications such as virtual agents and chatbots. WR Berkley aims to bar claims arising from any actual or alleged use of AI, including the errors known as “hallucinations.” Those errors, combined with AI’s expanding range of applications, widen the scope for potential claims, notes Dennis Bertram of Mosaic, who says the unpredictable nature of AI output makes it difficult to insure.
Why Is AI Considered Risky by Insurers?
AI’s decision-making is opaque and hard to predict, which is why insurers often describe it as a “black box.” Rajiv Dattani of the Artificial Intelligence Underwriting Company highlights the accountability problem: when an AI system makes an error, it is often unclear who is liable. That uncertainty is prompting caution among insurers, particularly around advanced language models such as OpenAI’s ChatGPT.
In a statement, AIG described generative AI as a “wide-ranging technology” and said the possibility of events triggering future claims will “likely increase over time.”
Even so, AIG has not committed to applying the exclusions immediately, a tentative stance that reflects how insurers are still feeling their way through this fast-moving landscape.
Meanwhile, both the companies deploying AI and their insurers are wrestling with accountability. AI errors, such as Virgin Money’s chatbot incident and Air Canada’s fabricated discount case, expose businesses to real liability. Kelwin Fernandes of NILG.AI frames the problem as one of human accountability amid growing dependence on AI systems.
“If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?” asks Fernandes.
His question echoes a broader industry concern: without robust frameworks and clear lines of accountability, AI risk remains difficult to price and to cover.
Insurers’ reassessment of AI-related policies underscores the challenges posed by this rapidly advancing technology. As AI reaches deeper into corporate operations, the insurance landscape will keep adjusting: clearer guidelines on responsibility may emerge, and existing underwriting approaches may need to be reworked for AI-specific scenarios.
