April 2024 Cover Story - Navigating the Legal Risk Landscape of Generative AI

by Colin S. Levy

The legal world is no stranger to innovation, but the rapid rise of Generative AI (GenAI) presents a unique set of challenges and opportunities. While AI tools hold immense potential for streamlining workflows and enhancing legal analysis, their integration into established practices demands careful consideration of the associated risks. This article explores that spectrum of risk and offers a framework for responsible implementation and ethical decision-making.

The Spectrum of Risk
GenAI tools are not built with legal nuances or contexts in mind. Their outputs, while often impressive, require rigorous human oversight and critical evaluation. To navigate this complex landscape, consider using a context-driven risk framework that categorizes tasks by the harm a flawed output could cause: frame each activity in terms of its potential detrimental impact on you or others. To illustrate, here are some examples and their relative risk levels.
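
For readers who prefer to see the framework in concrete terms, here is a minimal sketch of how such a categorization might be expressed in code. The task names, risk tiers, and review prompts are illustrative assumptions drawn from the examples that follow, not a prescribed taxonomy.

from enum import Enum

class RiskLevel(Enum):
    """Illustrative risk tiers; the cut-offs are assumptions, not a standard."""
    LOW_TO_MODERATE = 1    # e.g., corporate communications, educational content
    MODERATE_TO_HIGH = 2   # e.g., legal research
    HIGH = 3               # e.g., legal advice, contract drafting, litigation strategy

# Hypothetical mapping of tasks to tiers, mirroring the examples in this article.
TASK_RISK = {
    "corporate_communications": RiskLevel.LOW_TO_MODERATE,
    "educational_content": RiskLevel.LOW_TO_MODERATE,
    "legal_research": RiskLevel.MODERATE_TO_HIGH,
    "legal_advice": RiskLevel.HIGH,
    "contract_drafting": RiskLevel.HIGH,
    "litigation_strategy": RiskLevel.HIGH,
}

def required_oversight(task: str) -> str:
    """Map a task to the level of human review it warrants."""
    # Unknown tasks default to the highest level of caution.
    level = TASK_RISK.get(task, RiskLevel.HIGH)
    if level is RiskLevel.LOW_TO_MODERATE:
        return "Proofread and review before publication."
    if level is RiskLevel.MODERATE_TO_HIGH:
        return "Verify every citation and factual claim against primary sources."
    return "Treat output as a first draft only; full attorney review and sign-off required."

if __name__ == "__main__":
    print(required_oversight("legal_research"))

The point of the sketch is simply that the review obligation scales with the tier, and that anything uncategorized defaults to the most cautious treatment.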

First, corporate communications and educational content creation would likely fall at a low to moderate risk level. AI solutions can generate text and help improve its flow and organization. As with any work product, however, careful human review and proofreading are necessary. These activities, while important, carry relatively little risk: they are communications, not legal advice or legal research to be relied upon in a brief or other court filing.

Second, legal research would fall at a moderate to high risk level. While AI can expedite research by scouring vast datasets, data privacy and algorithmic bias remain critical considerations, and sole reliance on AI outputs can propagate “hallucinations,” as demonstrated by lawyers fined for citing fabricated case law.[1] For example, lawyers Steven Schwartz and Peter LoDuca of the firm Levidow, Levidow & Oberman “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” U.S. District Judge Kevin Castel wrote in an order.[2] The lawyers, Castel wrote, “advocated for the fake cases and legal arguments” even “after being informed by their adversary’s submission that their citations were non-existent and could not be found.” Human review and validation are therefore paramount to ensuring accuracy and avoiding misleading conclusions.

Third, activities that carry a high level of risk include providing outright legal advice, drafting a contract, reviewing and analyzing a contract, or crafting a litigation strategy. This is not just because of the high stakes involved, including clients’ reliance on the accuracy and truthfulness of such documents and analysis, but also because of the security of the data used to drive such work. For example, consider Google’s 2023 update to its privacy policy to explicitly state that the “company reserves the right to scrape just about everything you post online to build its AI tools.”[3]

Responsible AI and the Future of Law
In the legal profession, the judicious use of AI tools, particularly Large Language Models (LLMs) like GPT-4, can be transformative, but it requires a balanced approach that acknowledges both their potential and inherent risks. While the framework described earlier can be useful on a case-by-case basis, perhaps the most effective standard to employ when using Generative AI solutions is what Ethan Mollick has termed the “Best Available Human” (BAH) standard:[4] use AI for a task when it would outperform the best human actually available for that task. For instance, AI can assist in drafting and reviewing documents, conducting legal research, and even predicting litigation outcomes based on historical data. This approach not only streamlines workflow but also potentially enhances the quality of legal services, especially in scenarios where access to specialized human expertise is limited or cost-prohibitive.

However, it’s crucial for legal professionals to recognize the limitations and risks associated with AI. AI systems are prone to generating plausible but inaccurate information, replicating biases present in their training data, and behaving unpredictably in complex legal scenarios. This necessitates a careful, supervised approach to using AI. Lawyers should not rely on AI as a standalone tool for critical decision-making or complex legal analysis. Instead, it should be viewed as a complement to human expertise. Legal professionals must remain vigilant and critically evaluate AI-generated outputs, ensuring that they conform to legal standards and ethical considerations. In sensitive areas like client confidentiality and advice, the role of AI should be cautiously evaluated, as the repercussions of missteps are significant in the legal domain. More specifically, several AI researchers have found that information generated by ChatGPT is liable to contain factual errors,[5] including convincing yet fabricated abstracts for research articles that do not exist.[6] It remains essential for lawyers to double-check their work, especially work created with the assistance of a generative AI solution.

The integration of AI into legal practice should be guided by a pragmatic, ethical, and client-focused approach. AI offers a significant opportunity to enhance the efficiency and accessibility of legal services, but it must be tempered with a deep understanding of its capabilities and fallibilities. By adopting a collaborative model where AI supports human expertise rather than replacing it, legal professionals can leverage these powerful tools while safeguarding the integrity and trust inherent in the legal profession. For a more detailed overview of evaluating risk in the context of using Generative AI, PwC has put together a helpful report[7] that further details types of activities and their relative levels of risk given generative AI’s current capabilities.

As for which specific Generative AI solution is best for you: as researchers including the aforementioned Ethan Mollick have noted,[8] the go-to for most individuals should be OpenAI’s GPT-4. Specific contexts may demand a different solution, however, as each has distinct nuances that may make it better suited to one task than another. For example, Claude might be a better choice for document-heavy work, whereas Bard might be useful for more basic tasks, though it is a less powerful model than GPT-4. As you consider which of the available solutions is best for you, weigh the risks, the intended use case, and the beneficiaries of the work you intend to perform.

ENDNOTES

  1. Jon Brodkin, Lawyers have real bad day in court after citing fake cases made up by ChatGPT, Ars Technica (June 2023), https://arstechnica.com/tech-policy/2023/06/lawyers-have-real-bad-day-in-court-after-citing-fake-cases-made-up-by-chatgpt/.
  2. Mata v. Avianca, Inc., No. 1:22-cv-01461 (PKC) (S.D.N.Y. June 22, 2023), ECF No. 54.
  3. Thomas Germain, Google Says It’ll Scrape Everything You Post Online for AI, Gizmodo (July 3, 2023), https://gizmodo.com/google-says-itll-scrape-everything-you-post-online-for-1850601486.
  4. Ethan Mollick, The Best Available Human Standard, One Useful Thing (Oct. 23, 2023), https://www.oneusefulthing.org/p/the-best-available-human-standard.
  5. David Baidoo-Anu and Leticia Owusu Ansah, Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning (Jan. 25, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337484.
  6. Holly Else, Abstracts written by ChatGPT fool scientists, Nature (Jan. 12, 2023), https://www.nature.com/articles/d41586-023-00056-7.
  7. PwC, Managing the risks of generative AI: A playbook for risk executives – beginning with governance (May 2023), https://explore.pwc.com/generativeai.
  8. Ethan Mollick, An Opinionated Guide to Which AI to Use: ChatGPT Anniversary Edition, One Useful Thing (Dec. 7, 2023), https://www.oneusefulthing.org/p/an-opinionated-guide-to-which-ai.

Colin S. Levy is a seasoned corporate lawyer, currently serving as Director of Legal and Evangelist for Malbek, a contract lifecycle management company, and the author of the recent book The Legal Tech Ecosystem. Colin is also a noted legal tech advocate and speaker. His work bridges the gap between the tech world and the legal world.

Disclaimer: This article is for informational purposes only and does not constitute legal advice. Please consult with a duly qualified lawyer or legal professional for any specific legal questions or concerns.