Artificial Intelligence (AI) is fast becoming an integral asset across industries, offering insights, streamlining processes, and boosting efficiency. However, its widespread adoption brings significant challenges for businesses, insurers and regulators.

The evolving AI risk landscape

The increasing reliance on AI introduces new liability concerns. As AI systems become more autonomous, determining responsibility for damages or harm caused by these systems becomes increasingly complex. Traditional liability models may not be equipped to address the unique risks associated with AI.

For example, the introduction of AI in autonomous vehicles could create liability concerns for certain businesses, mainly due to potential accidents caused by AI malfunctions or errors. Proper risk assessment, transparency, contractual agreements, and adequate insurance coverage are crucial for businesses to navigate this emerging landscape and address liability issues effectively. Even with these mitigation strategies in place, the applicable legal framework determines liability, and some exposure remains at all times, whether primary, residual or contingent.

If not carefully designed, AI algorithms can also perpetuate and amplify existing biases, leading to discriminatory outcomes. This presents financial, legal and reputational risks for organisations embracing AI technology. The use of vast amounts of data also raises significant concerns about data privacy and security, and the potential compromise or hacking of AI systems could lead to severe financial and reputational damage for organisations.

Personalised advertising is one example of where data privacy concerns arise. Many businesses use AI algorithms to analyse vast amounts of user data for targeted advertising. While effective, this raises concerns about data handling, security, and consent. Businesses must prioritise data privacy, implement robust security measures, and ensure compliance with privacy regulations while providing transparent privacy policies and obtaining user consent.

Moreover, the complex algorithms and models that underpin AI systems raise intellectual property challenges. Determining ownership and protecting intellectual property rights pose significant hurdles, especially in cases involving multiple stakeholders.

Insurance considerations

The integration of AI into business operations presents a multitude of insurance challenges that require careful consideration. Cyber threats are a peril in their own right, and traditional risk mitigation strategies often fall short.

As businesses increasingly rely on AI, it becomes imperative to develop robust policies and strategies that address the complexities of cyber risk management. One significant aspect is the potential impact on business continuity, particularly regarding income interruption, infrastructure attacks, and the overall effect on cyber resilience, reputation, and Environmental, Social, and Governance (ESG) credentials.

Another crucial aspect is the proximate cause of any risk exposure. AI has the potential to disrupt business operations without causing any physical damage, making it essential to review risk exposures carefully against policy wording. Businesses must map out and plan coverage responses to ensure they are adequately prepared for AI-related risks.

In terms of parts and maintenance, businesses need to evaluate the compatibility and suitability of AI systems for both current and future needs. It is important to consider the speed and feasibility of replacement, as well as the response to any AI downtime. Businesses should also assess the impact on their operations and whether insurance coverage extends to all relevant parties, including the business itself, its suppliers and its customers.

The residual risk of AI failure—particularly in the context of power dependency—cannot be overlooked. Geopolitical events or other disruptions can lead to power outages, which can incapacitate AI systems. Businesses need to evaluate how they have mitigated these risks and whether their operational strategies can recoup associated costs.

Additionally, the ability to adapt to alternative energy sources is a key consideration. Businesses should ensure they are equipped to maintain operations in the face of power disruptions and have contingency plans in place to address such scenarios.

Lastly, the impact of AI on reputation and brand, particularly concerning ESG credentials, is a significant factor. As AI takes over more functions, maintaining a positive reputation and strong ESG standing becomes increasingly challenging. Businesses need to develop strategies that not only address operational risks but also support the preservation of their brand and reputation in an AI-driven landscape.

Need for regulatory alignment

Regulatory considerations will play a critical role in the adoption of AI. Automation is essential for achieving optimal productivity, quality, and competitiveness in modern industries. It enhances safety, reduces costs, speeds up production cycles, and offers scalability and flexibility. However, automation also demands rigorous quality assurance, error prevention, and predictive maintenance.

Effective data collection and analysis are imperative to meet regulatory standards, necessitating careful planning, implementation, and ongoing maintenance. Ensuring that AI systems are optimally integrated with existing frameworks is vital for their long-term compatibility and performance. Businesses must prioritise these factors to ensure regulatory compliance and operational continuity.

In response to these challenges, regulators are grappling with the task of developing adaptable and proportionate regulations to govern the use of AI. While the EU has taken a centralised approach through its Artificial Intelligence Act, the UK government is proposing a more decentralised system, allowing different regulators to tailor their approach according to the specific use of AI in various settings.1

Core principles for AI developers and users underpin these regulatory efforts. These include ensuring the safe and transparent use of AI and determining legal responsibility and pathways for redress or contestability.

However, the decentralised approach to AI regulation poses the risk of creating uncertainty across sectors, as different regulators may interpret and implement the principles in varying ways. This dynamic regulatory landscape is likely to see increased activity over the next year, with regulators expected to publish annual AI strategic plans charting the course for AI regulation.

In addition, there is a pressing need for international alignment in AI regulation. Despite the recognition of the necessity for regulatory control, there is currently no unified international framework for governing AI, raising key questions about global competitiveness and harmonisation.

Collaboration is key

As AI use continues to expand, insurers will need to grapple with these complex challenges and adapt their products and services to mitigate the associated risks effectively. Likewise, regulators will need to work towards a harmonised and adaptive framework for overseeing AI, ensuring that it is used responsibly and ethically across diverse industries.

In conclusion, adequately addressing AI's distinctive challenges requires collaboration between insurers, regulators, and industry stakeholders.


Sources

1. "A pro-innovation approach to AI regulation: government response," GOV.UK, 6 February 2024.


Disclaimer

The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.