Author: Laura Zaroski
The launch of open artificial intelligence (AI) tools has enabled users to search vast amounts of data and rapidly create sophisticated responses. However, the evolving abilities of AI and how it will be used at law firms have yet to be fully explored. As the use of this technology becomes more mainstream, underwriters are asking law firms about their use of AI platforms and how firms are controlling the risks associated with this new technology.
The use of research tools is nothing new to the practice of law. Firms have always used search engines, consumer reporting agencies and other legal tools to collect information for opinion letters, briefs and arguments to the court. However, this next-level technology can significantly assist lawyers with drafting contracts, contract review, regulatory compliance, due diligence, case law searches and even briefs and reports. This is good news for law firms that want to improve their efficiency and the work product they deliver to clients. However, those results and efficiencies don't come without dangers.
One of the current dangers of AI is the possible inaccuracy of generated responses. AI pulls from a vast amount of data collected over years. However, that data may not be completely current and therefore might be misleading or incorrect. For example, one text-chat AI platform currently pulls data only through the end of 2021, so its results may be neither current nor accurate. AI has also been known to cite cases or statutes that don't exist and to provide fictional analysis of non-existent case law. Accordingly, relying on "caselaw" or legal advice that AI produces may expose lawyers to claims for malpractice or dereliction of their duties to their clients.
Another significant concern about AI is the potential disclosure of client confidences or proprietary information. AI relies on the data fed into the tool to create intelligent responses to queries. Feeding client information or secrets into an AI platform places that information onto a third-party platform over which the lawyer has no control. The AI platform can use this information in other search responses, and subsequent AI users might gain access to this privileged or confidential information simply by asking the right questions. Such access would run afoul of a lawyer's duty of confidentiality, as well as a lawyer's duty to safeguard client secrets. Further, firms must be mindful that client contracts and engagement letters may govern how a law firm is permitted to use clients' confidential and proprietary information.
Law firms must be aware that their legal malpractice carriers are concerned about how firms have begun to use these new AI tools, and how they will allow or approve their practitioners to use them. Kim Noble, J.D., vice president of Lawyers and Accountants Liability at Applied Financial Lines, explains, "Especially in light of recent events, underwriters want to understand how law firms are using this technology and what safeguards have been put around its use. While some firms will adopt a wait-and-see approach, others are embracing the new technology. The importance of demonstrating to the insurance community an understanding of the technology and a comprehensive set of guidelines for when and for what purpose it can be used safely in the practice of law cannot be overstated."
When preparing for renewal, we recommend that law firms prepare to address questions such as:
- How is the firm using AI tools?
- In what capacity and for what purposes is the firm using AI?
- Does the firm have a policy or controls for AI use?
- Are clients advised of the use of this tool?
With the growing concern over the future of AI and certain dangers of its use, we recommend that law firms address these concerns prior to their Lawyers' Professional Liability (LPL) renewals. To ensure that lawyers and law firm employees use AI appropriately, firms should consider drafting and implementing a policy that addresses the following:
- For what type of research is using AI appropriate?
- What safeguards are used to verify that results are accurate and reliable?
- If used for client work, has the client been informed of the firm's use of AI and given prior approval where necessary?
- Can results be given proper attribution or cited to appropriate authorities/authors?
- How does the law firm confirm that no confidential information, client names or proprietary information will be entered into an open AI platform?
Takeaway: AI and its use will continue to evolve, and so should a firm's AI usage policy. Law firm management should be aware of the dangers of AI use and be diligent in addressing these concerns by implementing an appropriate firm-wide AI policy.
If you have any questions as to the above or the implementation of an appropriate firm policy, please don't hesitate to contact any member of the Gallagher Law Firms insurance and risk management team.