Author: John Farley
Gallagher's Cyber practice remains laser-focused on emerging technology and the potential for increased risk as organizations adopt it. Throughout 2024, we're concentrating on evolving compliance requirements for the use of artificial intelligence (AI). A flurry of recent AI-specific regulatory proposals at the state, federal and international levels merits watching.
State AI regulation
The New York Department of Financial Services (NYDFS) has released proposed guidance on the use of AI and external consumer data in insurance underwriting and pricing.1 The proposal is similar to regulations in Colorado and the model bulletin from the National Association of Insurance Commissioners (NAIC). The NYDFS proposal includes sections on unfair discrimination and testing.
Several states — including Oklahoma, Vermont, Virginia and Washington — are considering legislation targeting algorithmic discrimination resulting from AI used in consequential decision-making.2 These decisions have the potential to impact a consumer's access to credit, criminal justice, education, employment, healthcare, housing or insurance. New York is also considering a bill based on Colorado's SB 21-169, which addresses unfair discrimination resulting from insurers' use of external consumer data.
Federal and industry sector AI regulation
The Federal Trade Commission (FTC) has made a recent announcement aimed at companies that develop and host AI models. The announcement warns that these companies have an obligation to protect users' data, even as they constantly ingest additional data. Failure to honor privacy commitments can result in FTC enforcement action.
The US Department of Health and Human Services (HHS) has submitted a draft rule addressing clinical algorithms to the Office of Management and Budget (OMB).4 The rule clarifies that health insurers participating in federally funded programs must not discriminate against individuals on protected bases through the use of clinical algorithms. The rule is expected to be released in the first quarter.
The Financial Industry Regulatory Authority (FINRA) has identified AI as an emerging risk in its annual regulatory oversight report. FINRA highlighted concerns about accuracy, privacy, bias and intellectual property related to generative AI tools.5 Member firms are advised to consider these risks and potential regulatory changes.
The House Financial Services Committee has formed a bipartisan working group on AI to examine AI's impact on the financial services and housing sectors.6 The group will assess existing laws and regulations related to AI and consider the potential benefits and risks associated with AI.
Global AI regulation
Progress is being made on the EU AI Act, with final discussions on the technical details and drafting underway as of this writing.7 The aim is to complete the legal text in Q1 2024, although further delays may occur. The UK government plans to publish key tests that must be passed before new AI laws can be enacted.
Risk management strategies
Any organization that might be affected by these new AI compliance requirements should take steps to communicate their impact to key stakeholders across the enterprise.
Organizations should also be aware of rapidly evolving Cyber insurance products that may affect the scope of coverage for AI-related losses in 2024. Heightened regulatory risk has spurred some Cyber insurers to take steps to limit their exposure to regulatory losses arising from the use of technology. Sub-limits and coinsurance are often imposed for certain cyber losses. In addition, some carriers have modified Cyber insurance policy language to restrict or even exclude coverage for costs incurred in regulatory investigations, lawsuits, settlements and fines.
Note that many Cyber insurance policies provide some form of cyber risk services, including regulatory compliance guidance. These services can be useful in navigating the complex and evolving AI rules and regulations.
In summary, today's regulation of AI cuts across multiple industry sectors — including financial services, healthcare, technology, education, real estate and municipalities — and will undoubtedly spread to others in short order. Any organization embracing generative AI tools should consider embedding a formal risk management plan for AI usage into its overall enterprise risk management program. A cross-divisional effort among several key stakeholders will be required. Risk managers will need to coordinate efforts between Legal, Compliance, Human Resources, Operations, IT, Marketing and others, while closely monitoring emerging risks as AI systems become more widely used.