Author: John Farley

The Gallagher Cyber practice remains focused on emerging technology and the potential for increased risks as organizations begin to use it. Throughout 2025, we're concentrating on evolving compliance requirements for the use of artificial intelligence (AI). Recent AI-specific regulatory proposals in the state, federal and international arenas — including industry-specific regulations — bear watching.
US state AI legislation
Nearly half of US states have proposed or adopted AI governance legislation. Notably, Colorado passed the first omnibus-style AI law, the Colorado Artificial Intelligence Act.1 It takes effect on February 1, 2026, and is likely to set the tone for other states considering similar regulation.
The Act regulates high-risk AI systems and aims to prevent algorithmic discrimination, assigning distinct responsibilities to developers and to deployers. It targets AI systems that inform consequential decisions, meaning those with a material impact on employment, financing, health services, housing or insurance.
AI developers will be required to maintain accountability for known or foreseeable risks within their AI systems. They must notify the Colorado attorney general and known deployers within 90 days of discovering, or being made aware of, algorithmic discrimination.
AI deployers will be required to exercise reasonable care when using AI systems. They must implement risk management programs, conduct periodic risk assessments, provide individuals mechanisms to contest decisions and have certain reporting requirements to both the public and to the Colorado attorney general.
The general trends in the other state-sponsored bills cover four areas:2
- Consumer protections when AI is used for profiling and automated decisions
- Use of AI for hiring and in employment contexts
- Deceptive media or "deepfakes," which are further sub-categorized by specific types of individuals (for example, public figures or minors) as well as activities (for example, election related or sexually explicit)
- Forming AI task forces or groups devoted to understanding AI impacts
Ultimately, we expect the trajectory of AI regulation to mirror the evolution of recent data privacy laws across the US.
US federal and industry sector regulation
Recent executive orders have shifted federal priorities, moving from an emphasis on ensuring the safety and privacy of AI toward removing barriers to innovation in private sector development.
As of this writing, members of Congress have introduced more than 100 bills related to AI use. Most focus on transparency and accountability aimed at consumer protection, while others target specific industries, including marketing, healthcare and education.
The Federal Trade Commission has issued guidelines on AI transparency and accountability, emphasizing the need for clear documentation and consumer consent.3
The National Institute of Standards and Technology (NIST) continues to play a pivotal role in AI governance, publishing both governance and technical guidelines, including standards that promote privacy-enhancing technologies.4
Industry-specific regulations
The US has focused on AI guidance across a broad spectrum of industries:
Healthcare: The Health Insurance Portability and Accountability Act (HIPAA) has been updated to include AI-specific guidelines, ensuring that AI applications in healthcare maintain patient confidentiality and data security.5
Finance: The Financial Industry Regulatory Authority (FINRA) has introduced AI compliance standards that require financial institutions to implement robust risk management frameworks to prevent fraud and ensure algorithmic fairness.6
Other industries: The International Organization for Standardization (ISO) has developed standards for AI applicable to multiple industries, including nonprofits and municipalities.7 The standards focus on safety, quality control and the ethical use of AI in automated processes.
International AI regulation
Internationally, the EU AI Act8 stands as one of the few comprehensive AI laws and has set the global benchmark for preventing AI risks and harms. It classifies AI systems by level of risk and imposes specific obligations accordingly.
However, its influence hasn't been as widespread as that of the EU's General Data Protection Regulation (GDPR); many countries are instead amending existing laws or adopting frameworks for AI governance. These amendments address a broad spectrum of issues, including consumer protections, cybersecurity, national defense and healthcare. Task forces and working groups are actively defining national strategies and ethical principles to guide future legislation.
Separately, the Organization for Economic Co-operation and Development (OECD) has established international principles for AI,9 focusing on transparency, accountability and human rights. These principles serve as a framework for member countries to develop their own regulations.
Cyber insurance coverage issues
As AI regulations take effect, organizations should review their cyber insurance programs with several issues in mind:
- Scope of coverage
- AI regulatory claims
- AI liability coverage and accountability
- Compliance costs
Risk management strategies for AI compliance
Organizations can adopt several risk management strategies as they strive to comply with new and emerging AI regulations, such as the following.
Establish governance structures
Implement robust governance frameworks to oversee AI programs, with a focus on accountability and transparency. Ensure that AI systems are transparent and that their decision-making processes can be explained to stakeholders.
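To make explainability concrete, the sketch below shows one way a deployer might log each AI-assisted decision together with the factors behind it. The record structure, field names and `log_decision` helper are illustrative assumptions, not requirements drawn from any statute.

```python
# A minimal sketch of an audit-trail record for AI-assisted decisions.
# All field names are illustrative assumptions, not regulatory requirements.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    model_id: str         # which model/version produced the decision
    decision: str         # outcome communicated to the individual
    top_factors: list     # human-readable factors behind the decision
    contest_contact: str  # where the individual can contest the decision
    timestamp: str

def log_decision(model_id: str, decision: str, top_factors: list,
                 contest_contact: str) -> str:
    """Serialize a decision record so it can be produced on request."""
    record = AIDecisionRecord(
        model_id=model_id,
        decision=decision,
        top_factors=top_factors,
        contest_contact=contest_contact,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Hypothetical credit decision logged with its stated factors.
print(log_decision("credit-model-v3", "declined",
                   ["debt-to-income ratio", "short credit history"],
                   "appeals@example.com"))
```

Keeping records like this supports both the explainability expectations above and the contest mechanisms that laws such as the Colorado Act require of deployers.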
Conduct risk assessments
Perform risk or impact assessments before deployment and periodically thereafter to identify and mitigate potential AI-related risks. Conduct regular audits to identify and mitigate biases in AI systems, ensuring fairness and equity.
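As one concrete form of bias audit, the sketch below computes a disparate impact ratio: each group's favorable-outcome rate divided by the rate of the most favored group. The sample data and the 0.8 flag threshold (the widely cited "four-fifths rule") are illustrative assumptions; applicable law may impose different tests.

```python
# A minimal sketch of one common bias check: the disparate impact ratio.
# The 0.8 threshold follows the widely cited "four-fifths rule"; regulators
# or internal policy may set different criteria.

def favorable_rate(outcomes):
    """Share of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(groups):
    """Map each group to its rate relative to the best-treated group."""
    rates = {g: favorable_rate(o) for g, o in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical hiring-screen outcomes per group (1 = advanced to interview).
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

for group, ratio in disparate_impact(outcomes_by_group).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Ratios well below 1.0 don't prove discrimination on their own, but they flag where a deeper assessment is warranted.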
Enhance reporting mechanisms
Develop mechanisms for reporting adverse impacts of AI systems to relevant authorities and stakeholders.
Educate and train staff
Train employees on AI ethics, compliance requirements and best practices to foster a culture of responsible AI use.
Engage stakeholders
Coordinate these efforts with additional stakeholders, including customers, business partners and regulators, to build trust and ensure alignment with regulatory expectations.
Continuously monitor and evaluate
Establish mechanisms for continuous monitoring and evaluation of AI systems to support compliance with evolving regulations.
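One lightweight monitoring technique is checking whether live inputs have drifted from the data a model was validated on, since drift can silently invalidate earlier risk assessments. The sketch below uses the population stability index (PSI) on a single numeric feature; the sample data and the 0.2 alert threshold are illustrative assumptions, the latter a common rule of thumb rather than a regulatory figure.

```python
# A minimal sketch of drift monitoring via the population stability index
# (PSI). Higher PSI means the live data has moved away from the baseline.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # A small floor avoids division by zero for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical example: applicant ages at validation time vs. in production.
baseline = [25, 31, 38, 44, 52, 29, 35, 47, 41, 33]
live = [58, 61, 49, 55, 63, 60, 52, 57, 66, 59]

score = psi(baseline, live)
if score > 0.2:  # common rule-of-thumb alert level, not a legal standard
    print(f"PSI = {score:.3f}: significant drift, re-run the risk assessment")
else:
    print(f"PSI = {score:.3f}: distribution stable")
```

A drift alert like this would feed the risk assessment and reporting steps above rather than replace them.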
By proactively addressing these areas, organizations will be better positioned to navigate the complex landscape of AI regulations and help mitigate potential risks associated with AI deployment.