by Samantha E. Dorey and James C. Truxaw
Research has shown that people generally have difficulty distinguishing AI-generated texts from human-generated texts. For instance, experimental studies have shown that both novice and experienced teachers could not identify texts generated by ChatGPT among student-written texts.1 Similarly, research published in the journal Research Methods in Applied Linguistics revealed that linguistics experts were able to distinguish between AI-written and human-written abstracts less than 39% of the time.2
Distinguishing text generated by AI from text generated by humans is nearly impossible because AI is exceptionally adept at mimicking human writing.3 This is largely because AI is trained on vast datasets containing diverse examples of human writing.4 AI-generated text can exhibit the same coherence that human-generated text does because AI is able to integrate varied information and maintain a clear, logical structure when generating text. Additionally, AI can even simulate human errors like typos or colloquialisms.5
In light of the foregoing, it is no surprise that when AI itself was prompted, “What is the best way to determine whether a text was produced by an AI or a human?” it responded: “Ultimately, the best way to determine whether a piece of writing was produced by a human or a machine is to ask the person who wrote it.”6
It is also exceptionally difficult to determine whether a human used AI as a research tool. This is largely because AI-generated responses to research questions are usually heavily edited by the human writer as they are integrated into a final written product. Accordingly, traditional AI detection methods that rely on analysis of the final written product are of little help in determining whether AI was used in the research stage.
AI Can Mislead and Deceive
AI outputs can be misleading or deceptive because sometimes they are based on incomplete or inaccurate data. As one article wryly puts it, “[i]n the world of AI, the old adage ‘garbage in, garbage out’ holds true.”7 AI outputs can also be misleading or deceptive because AI struggles with context. AI lacks the ability to understand the meaning behind a dataset.8 For instance, AI can recognize a statistical association between the frequency of umbrellas sold and rainfall patterns, yet it cannot understand that people buy umbrellas in rainy weather to stay dry.
AI outputs can also be misleading or deceptive in the legal research and writing context. A study by Stanford’s RegLab and Institute for Human-Centered AI found that general-purpose large language models like ChatGPT, PaLM, Claude, and Llama often generate misleading or deceptive content.9 More specifically, the study found that these large language models were more likely to produce misleading or deceptive outputs when prompted to handle tasks requiring a nuanced understanding of legal issues or specific caselaw10 or when prompted to analyze caselaw from lower courts.11
Further, another study from Stanford’s RegLab and Institute for Human-Centered AI found that even legal-specific large language model tools like Lexis+ AI and Ask Practical Law AI produced misleading or deceptive outputs 17%-34% of the time.12 The study attributed this to the fact that large language models generally struggle to find fitting authority and have a tendency to agree with a user’s incorrect assumptions.13 Moreover, the same study found that these errors were often so subtle that a user would normally be unable to detect them. For instance, the study found that these models sometimes construed dicta, or language from dissents or concurrences, as binding authority.14
Misleading or deceptive legal information can have a profound negative impact on litigation. A legal document containing inaccuracies can mislead judges, juries, and legal professionals. It can spark disputes that result in unnecessary costs and delays in justice. Moreover, a misinterpretation of caselaw or an erroneous presentation of the facts could sway the outcome of a trial or settlement negotiation.
These mistakes will not necessarily be caught by judges or their clerks. Judges and clerks generally face heavy caseloads, and their limited time and resources prevent them from thoroughly scrutinizing every detail of the legal documents they receive. Since it is almost impossible for humans to distinguish AI-generated text from human-generated text, disclosing the use of AI would help judges, clerks, and other attorneys catch misleading or deceptive information. Disclosure would invite scrutiny, increasing the likelihood that AI mistakes or inaccuracies would be identified and corrected. In this way, disclosure would uphold the integrity and reliability of legal proceedings.
Disclosure Promotes Ethical Accountability
When a California attorney decides whether to use AI in her practice, the decision should be informed by the attorney’s ethical Duties of Confidentiality, Competence & Diligence, Legal Compliance, Supervision, Communication, and Candor to the Tribunal.15
To comply with her Duty of Confidentiality, an attorney using AI needs to ensure that all client information input into AI systems is anonymized and that the AI platform complies with stringent security protocols to safeguard against unauthorized access or disclosure.16
To comply with her Duty of Competence & Diligence, an attorney using AI needs to understand the technology’s capabilities, limitations, and terms of use. She must also critically assess AI-generated outputs for accuracy and bias, ensuring that they align with client interests and legal standards. Additionally, she should validate and supplement AI-generated content with traditional legal analysis.17
To comply with her Duty of Legal Compliance, an attorney using AI needs to ensure strict adherence to all relevant laws and regulations governing AI use.18
To comply with her Duty of Supervision, an attorney using AI needs to ensure that clear policies are established regarding the appropriate and ethical use of AI within the firm.19
To comply with her Duty of Communication, an attorney should consider disclosing her intention to use AI to her client.
Finally, to maintain her Duty of Candor to the Tribunal, an attorney must thoroughly review all generative AI outputs for accuracy and correct any errors or misleading information. She must also check jurisdictional rules for any requirements regarding disclosure of AI use to ensure compliance and transparency.
The adoption of policies requiring mandatory disclosure of the use of AI would help ensure that an attorney’s behavior conforms with all of the foregoing duties. An attorney cannot be held accountable for her AI use if no one knows of it.
Disclosure Requirements Across Jurisdictions
The authors of this article cannot take credit for creating the idea of mandatory disclosure regarding the use of AI. Numerous courts have already implemented specific AI disclosure requirements.
That numerous judges have adopted disclosure requirements suggests that there is a recognized need for a general duty to disclose the use of AI in legal writing and research.
Downsides of Disclosure
While mandatory disclosure of AI use in legal writing and research certainly promotes transparency and accountability, it also comes with potential drawbacks that merit consideration. One significant concern is the potential chilling effect it may have on the adoption and utilization of AI technologies in the legal profession. Mandating disclosure could inadvertently deter attorneys from leveraging AI tools, as they may fear increased scrutiny or skepticism from courts, opposing parties, or even clients. This reluctance could impede the legal profession’s ability to benefit from technological advancements that could improve efficiency and accuracy in legal research and drafting.
Implementing disclosure requirements also raises practical challenges in enforcement and verification. Courts may lack the resources to effectively monitor compliance, which may result in disparities in how disclosures are evaluated and enforced. Moreover, courts that enforce such a policy face the daunting task of distinguishing text generated by AI from text generated by humans, which, as discussed above, is nearly impossible. Additionally, compliance with disclosure requirements could impose administrative burdens and costs, especially for smaller law firms and solo practitioners who may lack the resources to invest in AI technologies or comprehensive compliance measures.
Conclusion
DISCLOSURE: This conclusion was generated by the latest free version of ChatGPT by posing the following prompt: “write a short conclusion for an article that discusses the pros and cons of mandatory disclosure of the use of AI in drafting and researching legal documents.” To highlight AI’s drafting capability, the following conclusion is offered here, unedited:
“In conclusion, the debate over mandatory disclosure of AI use in legal writing and research reveals a nuanced landscape with both benefits and challenges. Proponents argue that transparency enhances accountability and ensures the integrity of legal proceedings, crucial in an era where AI’s capabilities to mimic human writing pose significant challenges. However, concerns about stifling innovation, uneven standards across jurisdictions, and the potential for misconceptions about AI’s limitations highlight the complexities involved. Striking a balance is essential: while promoting transparency through refined disclosure requirements, it’s equally vital to foster an environment that encourages responsible AI use and supports ongoing technological advancements in legal practice. Educating stakeholders and refining regulatory frameworks will be pivotal in navigating these dual imperatives, ensuring that AI enhances rather than compromises the integrity and efficiency of legal services.”
Samantha E. Dorey is a founding partner at Everett Dorey LLP and serves as trial counsel in high-impact cases in state and federal court. Everett Dorey is a full-service civil litigation firm that represents multi-national corporations, non-profits, cities, public agencies, and school districts. Ms. Dorey can be reached at sdorey@everettdorey.com. James C. Truxaw is a partner at Everett Dorey LLP whose practice is focused on general liability, employment law, and toxic tort matters. Mr. Truxaw can be reached at jtruxaw@everettdorey.com.