Ethically Speaking (March 2023)
The Ethics of Artificial Intelligence in Legal Practice

By Jason Moberly Caruso

If you are worried about artificial intelligence (AI) invading the practice of law someday, fear no more: our robot friends are already a core part of virtually every practice, and their role will only grow in the coming years. AI will by no means replace lawyers entirely, but it is surprising how few law firms recognize the extent to which AI has already arrived: in one survey, more than 90% of respondents answered “no” or “do not know” when asked whether their law firms had provided information about their use of AI. Thompson Hine, The Innovation Gap Persists (Dec. 2020), https://admin.thompsonhine.com/wp-content/uploads/1135/doc/TH_InnovationGapPersists_2020.pdf.

AI comes in many shapes and sizes, from the splashy (an AI “lawyer” remotely coaching clients in the courtroom via earpiece), to the drudgery-reducing (document review platforms that can automatically sort documents based on certain criteria and mark documents as privileged), to the mundane (the predictive text feature on your smartphone that suggests the next word in your sentence). Mission-critical or not, attorneys should understand and consider the ethics of their use of AI.

AI Trial Lawyer?
The legal technology company DoNotPay made headlines recently with its plans to send a volunteer defendant wearing an earpiece into a California courtroom to contest a traffic citation. DoNotPay’s AI “lawyer” would monitor the proceedings and purportedly suggest responses to the defendant in real time. The company smartly did not identify the defendant, the court, or the county in which the appearance would take place.

DoNotPay’s plan set off a firestorm of think-pieces and raised many questions regarding the legality of the arrangement. At the most basic level, since the system would need to broadcast the proceedings in order to analyze them and then beam back responses, wouldn’t that violate the rule against recording judicial proceedings? Cal. Rules of Court, rule 1.150(c). DoNotPay also did not even pretend to comply with restrictions on the for-profit corporate practice of law. Cal. Corp. Code § 13041, subd. (b); Cal. Bus. & Prof. Code §§ 6160 (registration of corporation with State Bar), 6165 (all directors, shareholders, and officers must be licensed to practice law). Accordingly, had DoNotPay actually performed its experiment, it arguably would have committed a misdemeanor. Cal. Bus. & Prof. Code §§ 6125, 6126.

Fortunately or unfortunately, DoNotPay’s experiment was called off: its CEO reported that “State Bar prosecutors” threatened him with imprisonment if the company followed through with its plan. DoNotPay chose to delay the experiment, leaving unanswered the questions of how an AI lawyer would have performed in a courtroom and what the actual consequences of that performance would have been.

AI Document Review
There has been a veritable explosion in the use of AI in litigation discovery—particularly in the review of large electronically stored information (ESI) databases for potential production. Review processes that once required a phalanx of lawyers may now properly be delegated to technology assisted review (TAR) systems. Rather than having attorneys physically review each document, counsel essentially instruct the TAR system in which types of documents are responsive, and the system decides whether each document should be produced. Such processes must be validated in order to pass muster with the court, which requires a thorough understanding of their operation, as well as transparency and cooperation among counsel. See In re Diisocyanates Antitrust Litig., No. MC 18-1001, 2021 WL 4295729, at *6-*7 (W.D. Pa. Aug. 23, 2021).
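To make the mechanics concrete, the following is a minimal sketch in Python (assuming the scikit-learn library) of how a TAR-style responsiveness classifier might operate. The seed documents, labels, and confidence cutoffs are hypothetical, and real TAR platforms are far more sophisticated.

    # A toy sketch of TAR-style predictive coding, assuming scikit-learn.
    # The seed set, labels, and thresholds below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Attorneys hand-code a small "seed set": 1 = responsive, 0 = not.
    seed_docs = [
        "Q3 pricing agreement with distributor",
        "Holiday party catering menu",
        "Draft amendment to the supply contract",
        "Fantasy football league standings",
    ]
    seed_labels = [1, 0, 1, 0]

    # Train a simple text classifier on the coded seed set.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(seed_docs, seed_labels)

    # Score the unreviewed corpus; borderline documents go to human review.
    corpus = ["Revised pricing terms for 2021", "Office printer is jammed"]
    for doc, p in zip(corpus, model.predict_proba(corpus)[:, 1]):
        if p >= 0.7:
            decision = "produce (after privilege screen)"
        elif p <= 0.3:
            decision = "withhold as non-responsive"
        else:
            decision = "escalate to attorney review"
        print(f"{p:.2f}  {decision}  {doc!r}")

The validation courts look for commonly involves manually reviewing random samples of the system’s output, such as documents it marked non-responsive, to estimate how much responsive material was missed.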

Attorneys managing these processes must take reasonable steps to ensure that they do not result in the inadvertent production of privileged material. Given the scale of document production in some contemporary litigation, some erroneous production of privileged documents is inevitable. See U.S. Commodity Futures Trading Comm’n v. Parnon Energy Inc., No. 11–CV–3543, 2014 WL 2116147, at *4 (S.D.N.Y. May 14, 2014). However, parties must take reasonable steps to prevent such disclosures, in order to avoid waiver. Fed. R. Evid. 502(b); Regents of Univ. of Cal. v. Superior Court, 165 Cal. App. 4th 672, 681-82 (2008).

AI Is Only as Intelligent as It Is Designed to Be
Proponents of AI herald it as providing dispassionate, data-driven assessments, free from human frailties such as emotion, prejudice, and deception. However, one must remember that algorithms are not intrinsically fair or benevolent: at its core, AI is designed by people, and it can carry the implicit (or explicit) biases of its designers or of the datasets upon which it relies. Examples abound, from Google’s Translate software converting “she said” in the source language to “he said” in the target language, to facial recognition software being unable to distinguish Asian subjects. See James Zou & Londa Schiebinger, Comment, Nature 559, p. 324 (2018), https://www.nature.com/articles/d41586-018-05707-8.

Systems used for high-stakes legal decisions are not immune from this phenomenon. In 2016, it was reported that a recidivism assessment program used in courts across the United States, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), identified Black defendants as likely to reoffend at twice the rate of similarly situated white defendants. ProPublica, Machine Bias (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. The disparity could not be explained by other factors: statistical testing to isolate the effect of race from criminal history, age, and gender showed “[B]lack defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.” Id.
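For readers curious about the mechanics, the toy Python sketch below (assuming the NumPy, pandas, and statsmodels libraries) shows the general form of a regression that isolates race from criminal history, age, and gender. The data is randomly generated, not ProPublica’s, so it should show no real effect; on real data, an exponentiated race coefficient of roughly 1.77 is what the “77 percent more likely” figure quoted above describes.

    # A toy illustration of isolating one factor via logistic regression.
    # The data is randomly generated, so no real effect should appear.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "high_risk": rng.integers(0, 2, n),  # 1 = scored "high risk"
        "black": rng.integers(0, 2, n),      # 1 = Black defendant
        "priors": rng.poisson(2, n),         # prior offense count
        "age": rng.integers(18, 70, n),
        "male": rng.integers(0, 2, n),       # 1 = male
    })

    # Does race predict a high-risk score once criminal history,
    # age, and gender are held constant?
    fit = smf.logit("high_risk ~ black + priors + age + male", data=df).fit()

    # exp(coefficient) is an odds ratio; a value of 1.77 would mean
    # "77 percent more likely," holding the other factors constant.
    print("odds ratio for race:", np.exp(fit.params["black"]))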

AI in other law enforcement settings can also founder on the same shoals. Statistical analysis showed that PredPol, a program used by police departments to predict future crime “hotspots” so that resources can be deployed to those areas in advance, had a tendency to get stuck in a feedback loop of over-policing minority neighborhoods. Maurice Chammah, Policing the Future, The Marshall Project (Feb. 3, 2016), https://www.themarshallproject.org/2016/02/03/policing-the-future. This apparently was not the product of explicit racial profiling, but rather of the system “learning” from a purportedly race-neutral dataset: historical crime reports. The insidiousness of such bias is that it is implicit and may be as invisible to an AI system as it is to its human creator. People v. Bryant, 40 Cal. App. 5th 525, 545 (2019).
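The feedback loop is easy to reproduce in miniature. The short Python simulation below is not PredPol’s actual model, and every number in it is invented: two neighborhoods have identical true crime rates, but one starts with more recorded incidents, and patrols are allocated according to the record.

    # A toy simulation of a predictive-policing feedback loop.
    # Not PredPol's actual model; all numbers are invented.
    true_rate = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
    recorded = {"A": 100.0, "B": 50.0}   # but A starts over-represented

    for year in range(1, 6):
        total = sum(recorded.values())
        # The "predictive" system allocates 100 patrols by past recorded crime.
        patrols = {hood: 100 * count / total for hood, count in recorded.items()}
        # More patrols observe and record more incidents, even though the
        # true rates are identical, so the initial disparity never corrects.
        for hood in recorded:
            recorded[hood] += patrols[hood] * true_rate[hood] * 10
        share_a = recorded["A"] / sum(recorded.values())
        print(f"Year {year}: neighborhood A draws {share_a:.0%} of recorded crime")

Because the system’s only input is its own past output, the over-policed neighborhood keeps generating the reports that justify over-policing it, and the equal underlying rates never surface in the data.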

Direct regulation of the legal profession has begun to address this issue. Beginning January 1, 2022, rule 2.72 of the Rules of the State Bar of California requires that attorneys receive implicit bias training every three years. Also effective January 1, 2022, Code of Civil Procedure section 231.7 requires that the court consider conscious and unconscious bias (including implicit bias) in evaluating the use of peremptory challenges in criminal cases. The California Rules of Professional Conduct expressly prohibit unlawful discrimination against any person in the course of representing a client and in the operation of a law firm. Cal. Rules of Prof’l Conduct, rule 8.4.1(a), (b). Accordingly, it is incumbent on attorneys, as both advocates and operators of law firms, to consider the extent to which facially neutral technologies and processes encode implicit bias and prejudice.

Conclusion
Per Stephen Hawking, “AI is likely to be either the best or worst thing to happen to humanity.” While powerful, AI is but a tool, an extension of its creators and users. How that tool is built and used dictates whether the result is for good or ill.

Jason Moberly Caruso is a partner with Newmeyer Dillion in Newport Beach, California, where he specializes in complex environmental and land use matters. He has been certified as a specialist in Appellate Law by the California Board of Legal Specialization. Mr. Caruso is a member and the secretary of the OCBA’s Professionalism and Ethics Committee, and can be reached at jason.caruso@ndlf.com. The views expressed herein are his own. He would like to thank Newmeyer Dillion associate Madison Rolapp who conducted research for and contributed to this article.