October 2017 Ethically Speaking - Artificial Intelligence and Its Not-So-Artificial Legal Ethics Implications

by Scott B. Garner

In the Terminator movie franchise, Linda Hamilton’s character, Sarah Connor, must fight to save the world from a future (that future being 2029, only twelve years from now) in which intelligent computers—including a Series 800 Model 101 Infiltrator played by Arnold Schwarzenegger—have taken over the world. The Terminator is just one of many post-apocalyptic movies and books in which humanity’s demise is brought about by the rise of computers that we humans created.

Although not as dramatic, the real-world application of artificial intelligence, or AI, in various industries and sectors has given rise to existential debates among supporters and detractors of AI alike. And while we may not be too close to a Series 800 Model 101 Infiltrator running amok through the streets of Los Angeles, AI has become a known player in many aspects of our lives—from driverless cars to unmanned drones. Not surprisingly, the legal industry, too, has entered the debate.

What exactly is artificial intelligence? One definition is “a computerized system that exhibits behavior that is commonly thought of as requiring intelligence.” See Nat’l Sci. & Tech. Council Comm. on Tech., Exec. Office of the President, Preparing for the Future of Artificial Intelligence, 6 (2016). Another definition is “a system capable of rationally solving complex problems or taking appropriate actions to achieve its goals in whatever real world circumstances it encounters.” Id. Yet another is that AI is “the ability of a machine to perform what normally can be done by the human mind . . . [using] automated computer-based means to process and analyze large amounts of data and reach rational conclusions—the same way the human mind does.” See Wendy Wen Yun Chang, What Are the Ethical Implications of Artificial Intelligence Use in Legal Practice?, 33 Law. Man. Prof. Conduct 284 (Bloomberg BNA May 2017).

In the legal profession, AI includes programs that have been around for decades and can assist lawyers with certain tasks, including legal research (e.g., Lexis and Westlaw) and, more recently, document review (through the use of analytics and algorithms, including predictive coding, to cull large volumes of documents; a simplified sketch of that technique appears below). There also has been a growing number of document-generating programs, like LegalZoom, that some argue replace the need for lawyers to be part of certain legal processes. It is this group that has generated the most controversy, particularly from a legal ethics perspective. And, although the basic form-document software programs offered by companies like LegalZoom may not be AI, as those programs become more sophisticated and offer more than mere document generation and completion, they may drift into AI territory and, more importantly, into the provision of legal services. For that reason, the plethora of legal opinions analyzing LegalZoom can easily be read to apply to AI programs as well, providing a glimpse of how courts will apply current law to AI legal services.
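To make the predictive coding concept concrete, the following is a minimal, purely illustrative sketch in Python of how such a tool might work: a lawyer codes a small seed set of documents, a statistical model learns from that coding, and the model then scores the unreviewed corpus so that clearly non-responsive documents can be culled. The sample documents, labels, relevance threshold, and use of the scikit-learn library are all assumptions for illustration; this is not a description of Relativity, Westlaw, or any other product mentioned in this article.

```python
# Illustrative sketch of predictive coding for document review.
# All documents, labels, and the cutoff below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a lawyer has already reviewed and coded.
seed_docs = [
    "Board minutes discussing the merger negotiation.",
    "Email scheduling the quarterly sales meeting.",
    "Draft indemnification clause for the merger agreement.",
    "Cafeteria menu for the week of June 5.",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

# Convert text to TF-IDF features and fit a simple classifier
# to the lawyer's coding decisions.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed corpus; cull documents the model deems
# clearly non-responsive, and queue the rest for human review.
corpus = [
    "Revised merger term sheet attached for your review.",
    "Reminder: parking lot repaving starts Monday.",
]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
THRESHOLD = 0.2  # hypothetical cutoff; in practice validated by counsel
for doc, score in zip(corpus, scores):
    status = "queue for lawyer review" if score >= THRESHOLD else "cull"
    print(f"{score:.2f}  {status}: {doc}")
```

Note that even in this simplified workflow the lawyer supplies the training judgments and validates the cutoff—a point consistent with the supervision theme discussed later in this article.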

Proponents of the growing use of AI legal service providers point to the widening justice gap in the United States, where so many individuals simply cannot afford to hire counsel. According to a World Justice Project report, the United States ranks 47th out of 100 nations in access to civil justice—one of the lowest scores among developed countries. See E. Walters & J. Asjes, URLs or UPL? Using Software to Close the Access to Justice Gap, in Strategic Intelligence for L. Firms 77-100 (Laura Slater ed., Ark Group 2016). Companies like LegalZoom seek to fill this gap by offering otherwise unrepresented individuals services and products, such as wills, bankruptcy petitions, family law forms, and basic contracts, for a fixed fee. Without these offerings, many individuals would have no access to legal services at all. Of course, as we have learned through scandals involving, for example, notaries1 and loan modification operations,2 no legal service at all often is better than a bad legal service provided by someone not authorized to provide it.

Legal services provided by AI programs, without direct input from a lawyer, raise a number of ethical concerns, but primary among them is the risk of unauthorized practice of law (“UPL”). The State Bar Act provides, “No person shall practice law in California unless the person is an active member of the State Bar.” Cal. Bus. & Prof. Code § 6125 (2017). It further provides that any person engaged in the unlawful practice of law is guilty of a misdemeanor punishable by a fine and/or imprisonment. Id. § 6126. Moreover, Rule of Professional Conduct 1-300 provides that a lawyer “shall not aid any person or entity in the unauthorized practice of law.” So are the owners and operators of companies that offer AI legal services engaged in the unauthorized practice of law or assisting others in the unauthorized practice of law?

In a decision still cited today, written long before AI was a reality, the California Supreme Court defined the practice of law to include “legal advice and counsel and the preparation of legal instruments and contracts.” Baron v. City of Los Angeles, 2 Cal. 3d 535, 542 (1970). The Supreme Court further stated that doubts generally will be resolved in favor of a finding that an activity constitutes the practice of law. Id. at 543. Thus, where a company advertises that it will prepare a will or a contract for a customer, Baron would seem to indicate that the service constitutes the unlawful and unauthorized practice of law.

In 2007, the Ninth Circuit applied California law to determine that a software company advertising itself as a bankruptcy petition preparer was engaged in the unauthorized practice of law. In re Reynoso, 477 F.3d 1117 (9th Cir. 2007). As described by the court, the company held itself out as offering legal expertise, advertising on its website that it offered its customers extensive advice on how to take advantage of loopholes in the Bankruptcy Code. It also promised services comparable to those of a “top-notch bankruptcy lawyer,” and described its software as an “expert system” that “knows the law” and would do more than function as a “customized word processor[].” Id. at 1125. The company’s own description of its services thus made clear that it purported to provide services normally provided by a lawyer and, as a result, was engaged in the practice of law.

Other jurisdictions have had similar experiences addressing non-lawyer-based legal services, and courts in those jurisdictions have relied on many of the same factors as the In re Reynoso court. Interestingly, court rulings on these issues have not always been the end of the analysis, as a number of states have looked for political or legislative solutions to salvage what they could of these services.

One of the first significant forms-software UPL cases arose in Texas in Unauthorized Practice of L. Comm. v. Parsons Tech., Inc., 179 F.3d 956 (5th Cir. 1999). The Texas statute at issue provided that the practice of law included “the giving of advice or the rendering of any service requiring the use of legal skill or knowledge, such as preparing a will, contract or other instrument, the legal effect of which under the facts and conclusions involved must be carefully determined.” Tex. Gov’t Code Ann. § 81.101(a) (2017). The district court found that Parsons Technology’s “Quicken Family Lawyer” software program constituted the unauthorized practice of law. Following that decision, the Texas Legislature amended the statute to state that the practice of law does not include the “design, creation, publication, distribution, display, or sale . . . [of] computer software, or similar products if the products clearly and conspicuously state that the products are not a substitute for the advice of an attorney,” id. § 81.101(c), and the Fifth Circuit vacated the judgment in light of the amendment. Under that rationale, an AI company in Texas seemingly could practice law all it wants as long as it expressly says that it is not a substitute for an actual lawyer practicing law.

LegalZoom—perhaps the most well-known of the forms software providers—has been a party to a number of cases addressing whether its services constitute the unauthorized practice of law. In North Carolina, for example, LegalZoom and the North Carolina State Bar entered into a consent judgment agreeing that the definition of “practice of law” does not encompass the operation of a website that offers consumers access to interactive software that generates a legal document based on the consumer’s answers to questions presented by the software.

In South Carolina, LegalZoom agreed to a settlement that included the following terms: (1) the forms offered by LegalZoom would be either the same as those self-help forms promulgated by South Carolina state and local government agencies or courts, or reviewed and approved by a licensed attorney before being offered for sale; (2) customers’ answers to questionnaires would be entered verbatim in the self-help form template; and (3) LegalZoom would include a statement on the website to the effect that “LegalZoom is not a law firm or a substitute for an attorney or law firm.” Medlock v. LegalZoom.com, Inc., 2013 S.C. LEXIS 362, *7-*8 (Oct. 25, 2013).

In Missouri, LegalZoom did not fare so well. There, a district court found that LegalZoom’s products constituted the unauthorized practice of law. Janson v. LegalZoom.com, Inc., 802 F. Supp. 2d 1053 (W.D. Mo. 2011). Specifically, the court found that LegalZoom’s product was more than just the sale of blank forms and do-it-yourself kits to facilitate the consumer’s own preparation of legal documents. Rather, the website boasted, “Just answer a few simple online questions and LegalZoom takes over.” Id. at 1055.

The takeaway from these cases as applied to AI is that a non-lawyer-based legal services provider probably could escape an adverse UPL finding if (1) the consumer inputs all information herself, and that information is not changed by the provider; (2) the provider includes a statement on its website to the effect that it is not a substitute for a lawyer; and (3) the forms themselves are reviewed by a lawyer before being sold to consumers. Services that likely will run afoul of UPL statutes are those in which a consumer is asked some basic questions, after which the provider takes over and analyzes the information the consumer provided—all without the input of a lawyer.

Despite the many restrictions on companies that seek to provide, through automation, something akin to legal services, advances in AI technology can create opportunities for lawyers. Lawyers, for their part, can continue to provide legal services while utilizing the ever-increasing array of AI and other technology tools to improve their service and efficiency. As long as the lawyer is in charge and is not merely allowing a software program to interface with the client without lawyer input or supervision, he or she should be safe from the threat of UPL. For example, in the e-discovery context, a document review program like Relativity helps formulate and execute searches, but the law still holds the lawyer responsible for the review. Or think of a legal research tool like Westlaw, where the program runs intelligent searches but still is at the beck and call of the attorney running it. While there may come a time when robot lawyers can provide full legal services in place of human lawyers (imagine the horror film James Cameron could direct with that as its theme!), we are not there yet. And, absent significant and fundamental changes in UPL law, that scenario likely will not occur even if the technology otherwise could get us there. Thus, for the foreseeable future, lawyers remain necessary and relevant.

UPL, of course, is not the only issue at stake for AI providers and the lawyers who use AI services. When a lawyer uses a tool like Relativity or Westlaw, or any of the many software or web-based platforms available, she may be safe from a claim that she aided the unauthorized practice of law, but she still will have other ethical obligations, including the obligation to oversee the search or investigation and, ultimately, responsibility for the results. Even a tool like the one the Missouri court found unlawful in Janson, where the computer “analyzes” the answers to a consumer’s questions, likely would be acceptable if a lawyer were reviewing the analysis provided by the computer program and ultimately taking ownership of that analysis. Indeed, it would not be different in kind from a lawyer hiring a law student to perform legal research. Although that law student may not be licensed to practice law, the lawyer overseeing his work is (or better be), and, thus, the lawyer ultimately is responsible for the product delivered to the client. In that sense, the use of AI is conceptually no different from the use of unlicensed humans. As long as a lawyer is supervising and taking responsibility for the work, there should be no UPL problem. See, e.g., ABA Formal Opn. 08-451 (“A lawyer may outsource legal or nonlegal support services provided the lawyer remains ultimately responsible for rendering competent legal services to the client under Model Rule 1.1.”); Orange County Bar Ass’n Formal Opn. No. 2014-1 (citing Winterrowd v. Am. Gen’l Annuity Ins., 556 F.3d 815 (9th Cir. 2009), and noting that a lawyer not admitted in California is not engaged in UPL if he is supervised by a California lawyer).3

Of course, to the extent AI legal service providers, or forms software providers like LegalZoom, are restricted to providing only the most basic functions and are not performing as a substitute for a lawyer, that may not be an effective or satisfying solution to the legal services gap that leaves so many without access to a lawyer. Similarly, to the extent a consumer must pay for a lawyer to analyze and supervise a computer program’s output, that added cost may be prohibitively expensive and, thus, may defeat the purpose of the AI solution in the first place. But the reality is that state legislatures and state bars must walk a fine line between expanding access to legal services and protecting the public from obtaining legal advice from humans – and non-humans – that may not be qualified to provide it.

ENDNOTES

  1. See, e.g., ABA Comm’n on Immigr., Fight Notario Fraud (updated June 2017), https://www.americanbar.org/groups/public_services/immigration/projects_initiatives/fightnotariofraud.html (discussing problems of fraud by “unscrupulous notaries” or “immigration consultants”).
  2. See, e.g., Comm. on Prof. Resp. & Conduct, Legal Services to Distressed Homeowners and Foreclosure Consultants on Loan Modifications, Ethics Hotliner, Spring 2009, at 1, https://www.calbar.ca.gov/Portals/0/documents/ethics/Publications/EthicsHotliner/Ethics_Hotliner-Loan_Modifications-Spring_09.pdf (discussing risks to homeowners of using “foreclosure consultants”).


Scott B. Garner is a partner at Umberg/Zipser LLP in Irvine, California. His practice focuses on complex business litigation, with a particular emphasis on legal malpractice defense and legal ethics. He currently serves as an officer (Secretary) of the OCBA, as Co-Chair of the OCBA Professionalism & Ethics Committee, and as Co-Chair of the OCBA Civility Task Force. From 2010-2016, Scott served on the California State Bar’s Committee on Professional Responsibility and Conduct, serving as Chair in 2014-15. Scott can be reached at sgarner@umbergzipser.com.
