Author: John Farley


The Gallagher Cyber practice remains focused on emerging technology and the potential for increased risks as organizations begin to use it. Throughout 2025, we're concentrating on evolving compliance requirements for the use of artificial intelligence (AI). Recent AI-specific regulatory proposals in the state, federal and international arenas — including industry-specific regulations — bear watching.

US state AI legislation

At the state level, nearly half of US states have proposed or adopted AI governance legislation. Notably, Colorado has passed the first omnibus-style AI regulation, the Colorado Artificial Intelligence Act.1 It takes effect on February 1, 2026, and is likely to set the tone for other states considering similar regulation.

This Act regulates high-risk AI systems and aims to prevent algorithmic discrimination, assigning distinct responsibilities to developers and to deployers of AI systems. Its scope covers AI systems that inform consequential decisions with a material impact in areas such as employment, financing, health services, housing or insurance.

AI developers will be required to maintain accountability for known or foreseeable risks in their AI systems. They must notify the Colorado attorney general and known deployers within 90 days of discovering, or being made aware of, algorithmic discrimination.

AI deployers will be required to exercise reasonable care when using AI systems. They must implement risk management programs, conduct periodic risk assessments, provide individuals with mechanisms to contest decisions, and meet certain reporting requirements to both the public and the Colorado attorney general.

The other state-sponsored bills generally cover four areas:2

  • Consumer protections when AI is used for profiling and automated decisions
  • Use of AI for hiring and in employment contexts
  • Deceptive media or "deepfakes," which are further sub-categorized by specific types of individuals (for example, public figures or minors) as well as activities (for example, election related or sexually explicit)
  • Forming AI task forces or groups devoted to understanding AI impacts

Ultimately, we expect the trajectory of AI regulation to mirror the evolution of recent data privacy laws across the US.

US federal and industry sector regulation

Recent executive orders have shifted the federal focus from ensuring the safety and privacy of AI to removing barriers to innovation in private sector development.

As of this writing, members of Congress have introduced more than 100 bills related to AI use. Most focus on transparency and accountability aimed at consumer protection, while others target specific industries, including marketing, healthcare and education.

The Federal Trade Commission has issued guidelines on AI transparency and accountability, emphasizing the need for clear documentation and consumer consent.3

The National Institute of Standards and Technology (NIST) continues to play a pivotal role in developing AI governance and technical standards, including guidance that promotes privacy-enhancing technologies.4

Industry-specific regulations

The US has focused on AI guidance across a broad spectrum of industries:

Healthcare: The Health Insurance Portability and Accountability Act (HIPAA) has been updated to include AI-specific guidelines, ensuring that AI applications in healthcare maintain patient confidentiality and data security.5

Finance: The Financial Industry Regulatory Authority (FINRA) has introduced AI compliance standards that require financial institutions to implement robust risk management frameworks to prevent fraud and ensure algorithmic fairness.6

Other industries: The International Organization for Standardization (ISO) has developed standards for AI applicable to multiple industries, including nonprofits and municipalities.7 The standards focus on safety, quality control and the ethical use of AI in automated processes.

International AI regulation

Internationally, the EU AI Act8 stands as one of the few comprehensive AI laws and has set the global benchmark for preventing AI risks and harms. It classifies AI systems by level of risk and imposes specific obligations accordingly.

However, its influence hasn't been as widespread as the EU's General Data Protection Regulation (GDPR), with many countries opting to amend existing laws or adopt frameworks for AI governance. These amendments address a broad spectrum of issues, including consumer protections, cybersecurity, national defense and healthcare. Task forces and working groups are actively defining national strategies and ethics to guide future legislation.

Separately, the Organization for Economic Co-operation and Development (OECD) has established international principles for AI,9 focusing on transparency, accountability and human rights. These principles serve as a framework for member countries to develop their own regulations.

Cyber insurance coverage issues

As AI regulations evolve, organizations may face challenges in securing Cyber insurance coverage for AI-related regulatory claims. Key issues include:

Scope of coverage

Insurers may need to redefine coverage parameters to address AI-specific risks, such as algorithmic discrimination and high-risk system failures. A wide variety of losses can manifest from AI systems. These losses could expand insurance coverage discussions from cyber and technology Errors and Omissions (E&O) policies to employment practices liability, product liability, medical malpractice, Directors and Officers (D&O) policies and others.

AI regulatory claims

Heightened regulatory risk has spurred some Cyber insurers to limit their exposure to cascading losses from regulatory actions involving the use of technology, and AI will only sharpen that focus. Some carriers have already modified Cyber insurance policy language to restrict or even exclude coverage for certain incidents that give rise to costs for regulatory investigations, lawsuits, settlements and fines.

AI liability coverage and accountability

Determining liability between AI system developers and deployers could impact coverage terms and claims adjudication.

Compliance costs

Most Cyber insurance policies provide free or discounted risk consulting services. These policies may adapt to cover some costs associated with complying with new AI regulations, including AI risk assessments and reporting requirements.

Risk management strategies for AI compliance

Organizations can adopt several risk management strategies as they strive to comply with new and emerging AI regulations, such as the following.

Establish governance structures

Implement robust governance frameworks to oversee AI programs, focused on accountability and transparency. Ensure that AI systems are transparent and that their decision-making processes can be explained to stakeholders.

Conduct risk assessments

Regularly perform risk or impact assessments pre-deployment and periodically thereafter to identify and mitigate potential AI-related risks. Conduct regular audits to identify and mitigate biases in AI systems, ensuring fairness and equity.

Enhance reporting mechanisms

Develop mechanisms for reporting adverse impacts of AI systems to relevant authorities and stakeholders.

Educate and train staff

Train employees on AI ethics, compliance requirements and best practices to foster a culture of responsible AI use.

Engage stakeholders

Coordinate these efforts with additional stakeholders, including customers, business partners and regulators, to build trust and ensure alignment with regulatory expectations.

Continuously monitor and evaluate

Establish mechanisms for continuous monitoring and evaluation of AI systems to support compliance with evolving regulations.

By proactively addressing these areas, organizations will be better positioned to navigate the complex landscape of AI regulations and help mitigate potential risks associated with AI deployment.

Sources

1"Consumer Protections for Artificial Intelligence," Colorado General Assembly, accessed 8 Apr 2025.

2Steild, Ryan. "Back to the Future: How Data Privacy Laws Can Teach Us What to Expect With AI Regulation," Constangy, Brooks, Smith & Prophete, LLP, 7 Apr 2025.

3"Artificial Intelligence Compliance Plan," Federal Trade Commission, accessed 8 Apr 2025.

4"AI Risk Management Framework," NIST, accessed 8 Apr 2025.

5"The Evolving Landscape of Human Research with AI — Putting Ethics to Practice," US Department of Health and Human Services, 9 Jul 2024.

6"FINRA Reminds Members of Regulatory Obligations When Using Generative Artificial Intelligence and Large Language Models," FINRA, 27 Jun 2024.

7"ISO/IEC 42001:2023," ISO, 2023. Gated.

8"The EU Artificial Intelligence Act," EU Artificial Intelligence Act, accessed 8 Apr 2025.

9"OECD Updates AI Principles," ANSI, 9 May 2024.