October 2017 Cover Story - Artificial Intelligence and the Law: Navigating “Known Unknowns”

by James S. Azadian and Garrett M. Fahy

There are known knowns. There are things we know we know. . . . There are known unknowns . . . we know there are some things we do not know. But there are also unknown unknowns. The ones we don’t know we don’t know.
~ Former Secretary of Defense Donald Rumsfeld (2002).

While former Secretary of Defense Rumsfeld was not speaking about artificial intelligence (AI), the idea of “known unknowns” aptly captures the relationship between AI and the law: There are many things we know we do not know about how AI will impact the law, and the legal profession must recognize these known unknowns so it can meaningfully respond and protect our most cherished rights.

Most of us rely on AI without knowing it (the check you just deposited with your mobile banking app, the filter you put on your newest Instagram post). Supporters claim AI’s promises are limitless: It will correctly diagnose our diseases, ensure our medications don’t conflict, drive our cars for us and regulate our traffic lights to minimize traffic, choose our investments, and even decide who is entitled to public benefits.

Opponents suggest its liabilities are equally limitless: AI will misdiagnose diseases, confuse our medications and poison us, expose our most sensitive information, crash our cars and incentivize drivers to run the red light to beat the yellow light, and bring on the next Great Recession.

AI is, depending on one’s viewpoint, the next best thing, or one of the gravest threats to liberty, privacy, and life itself. When there are tech titans on both sides of the debate, sufficient evidence to amaze and terrify us, and technology advancing at warp speed, how should the legal profession—traditionally driven by caution, probity and risk-mitigation—understand and harness AI’s benefits for the good of our clients, whoever they may be?

And even if we can discern AI’s benefits and burdens and advise our clients accordingly, is AI inherently unstable, warranting a moratorium until it is better harnessed? Its supporters claim its vast, untapped potential requires increased private and public investment and limited regulation. Its opponents question its potential, and have called for greater transparency, plenty of regulation, and skepticism. The recent debate between Elon Musk and Mark Zuckerberg highlights some of the key sticking points.

Musk, the founder of SpaceX and Tesla, and widely regarded as a visionary entrepreneur, argues artificial intelligence poses the greatest threat to humanity. When asked by a Facebook user about Musk’s bold prediction, Zuckerberg, the Facebook pioneer and chief, presented a more optimistic view of artificial intelligence, focusing on the ways AI will improve the quality of life. In a Facebook Live posting from his backyard on Sunday, July 23, 2017, Zuckerberg labeled as irresponsible the “naysayers” who “drum up . . . doomsday scenarios”—presumably a reference to Musk and other thought leaders and tech luminaries, such as Bill Gates and Stephen Hawking, who have warned that AI could result in tragic and unforeseen consequences. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.” Rory Cellan-Jones, Stephen Hawking Warns Artificial Intelligence Could End Mankind, BBC News (Dec. 2, 2014), http://www.bbc.com/news/technology-30290540.

Ironically, only days after Zuckerberg criticized AI naysayers, AI researchers at Facebook reportedly shut down one of the company’s AI systems because “things got out of hand” with a research experiment. The problem? Facebook’s AI chatbots, which were negotiating with each other in English, developed their own communication style, which caused Facebook’s researchers to pull the plug on the experiment. Although the bots used English words, their exchanges could not be understood by humans. The shutdown caused some media outlets to misleadingly report that the bots “invented their own language.” In response, Dhruv Batra of Facebook’s Artificial Intelligence Research Group wrote in a July 31, 2017 Facebook post, “While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.” Dhruv Batra, Facebook (July 31, 2017), https://www.facebook.com/dhruv.batra.dbatra/posts/1943791229195215?pnref=story. Mr. Batra is right: To those outside the field, the idea of AI agents inventing their own language is alarming and unexpected, and this and other AI developments bring into sharp relief the legal profession’s task in understanding and addressing the challenges posed by such technology.

Legal figures, including California Supreme Court Justice Mariano-Florentino Cuéllar, have entered the discussion on Big Data and AI, addressing in particular their potential impacts on the legal profession. Earlier this summer, Justice Cuéllar spoke about Big Data and AI both to the Appellate Law Section of the Orange County Bar Association and to judges and lawyers attending the annual Ninth Circuit Judicial Conference. Justice Cuéllar also taught a class at Harvard Law School entitled “Frontiers of Cyberlaw: Artificial Intelligence, Automation and Information Security.” (We have come a long way from the basics of torts, contracts, and real property!)

Although the robot apocalypse may not be upon us (yet), the rapid changes in technology, and its adoption by industry and government, portend mammoth changes in the legal landscape and raise vital questions that must be answered to preserve our ordered liberty. It is important that we consider four key “known unknowns” at the intersection of AI and the law: (1) Regulation of AI; (2) AI and Due Process Rights; (3) AI and e-Discovery; and (4) the concern over “data discrimination.”

AI Regulation: Proactive or Responsive?

The first known unknown is the extent of state or federal regulation of AI. That is to say, we assume some regulation of AI is called for, but just what that looks like, who will lead it, and what it will mean for the legal profession are largely unknown. While the existing intellectual property regime remains in place (and courts are weighing whether AI technology is eligible for patent protection), there is no national AI registry, no specific office within the Patent and Trademark Office to police AI-specific technologies or their application, and no specific laws or regulations dealing with AI technology. Are such an office and such laws or regulations required? The Copyright Office has decided it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author” (see U.S. Copyright Office, The Compendium of U.S. Copyright Office Practices § 306 (3d ed. 2014)), but what about policing the production of such works “produced by a machine or mere mechanical process”? Id. at § 313.2.

In October 2016, the Obama Administration released a report entitled “Preparing for the Future of Artificial Intelligence,” issued a “National Artificial Intelligence Research and Development Strategic Plan,” and hosted workshops seeking public input on AI developments. Ed Felten & Terah Lyons, The Administration’s Report on the Future of Artificial Intelligence, Obama White House (Oct. 12, 2016), https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence. Whether the current administration engages on this issue remains to be seen.

The House of Representatives has begun to address AI issues. On May 24, 2017, Congressman John K. Delaney (MD-6) announced the launch of the bipartisan Artificial Intelligence Caucus for the 115th Congress. See Press Release, John K. Delaney, Delaney Launches Bipartisan Artificial Intelligence (AI) Caucus for 115th Congress (May 24, 2017), https://delaney.house.gov/news/press-releases/delaney-launches-bipartisan-artificial-intelligence-ai-caucus-for-115th-congress. The goal of the caucus is to “inform policymakers of the technological, economic, and social impacts of advances in AI and to ensure that rapid innovation in AI and related fields benefits Americans as fully as possible.” Id. The “AI Caucus will bring together experts from academia, government, and the private sector to discuss the latest technologies and the implications and opportunities created by these new changes.” Id. Absent from the list, and perhaps unintentionally so, was specific reference to lawyers and judges.

Mr. Musk, an expert in navigating government regulations in high-tech industries, argues regulation is usually responsive, but:

AI is a rare case where I think there should be proactive regulation instead of reactive. I think by the time we are reactive in AI regulation, it’s too late. Normally, the way regulations are set up is that a whole bunch of bad things happen, there is public outcry, and then after many years, the regulatory agencies are set up to regulate that industry.

James Titcomb, AI is the biggest risk we face as a civilization, Elon Musk says, The Telegraph (July 17, 2017), http://www.telegraph.co.uk/technology/2017/07/17/ai-biggest-risk-face-civilisation-elon-musk-says/. Is he right? Mr. Zuckerberg may disagree. But how should lawyers counsel clients who are developing AI technologies and pushing the proverbial technological envelope?

In speaking to the Orange County Appellate Law Section, Justice Cuéllar sensibly expressed concern that technology companies will resist regulation, seeking to block rules that would limit how AI is used before courts and legislatures have a chance to wrestle with the issue. That is a good point.

On the other hand, AI is always evolving, so how would courts and legislatures regulate what is always changing? And, given that AI is inherently automated and artificial, how would courts and legislatures hold automated technology responsible?

The status quo appears to favor the bold, and until states or the federal government make a concerted regulatory effort, private companies like Facebook will be, to a large extent, self-regulating, and we will be depending on their good judgment. That may seem like a good thing, but whether it is in the public’s best interest is a different inquiry.

Several states, including California, have passed laws regulating autonomous vehicles, and many other states are considering regulating this burgeoning industry. See, e.g., Jessica S. Brodsky, Autonomous Vehicle Regulation: How an Uncertain Legal Landscape May Hit the Brakes on Self-Driving Cars, 31 Berkeley Tech. L.J. 851 (2016). Across the pond, the British have begun wrestling with these issues, and may provide a path to follow. As a recent House of Commons Science and Technology Committee report stated, “While it is too soon to set down sector-wide regulations for this nascent field [of AI], it is vital that careful scrutiny of the ethical, legal, and societal dimensions of artificially intelligent systems begins now.” Select Committee on Science and Technology, “Conclusions and recommendations,” 2016, https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14509.htm (UK).

Across the English Channel, the European Parliament passed a resolution in February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, proposing liability rules for AI and robotics and requesting that a code of ethical conduct be drafted. The resolution notes:

Whereas now that humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence (AI) seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider its legal and ethical implications and effects, without stifling innovation.

European Parliament Resolution of 16 February 2017 With Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), Eur. Parl. Doc. P8 TA 0051 (2017), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2017-0051+0+DOC+PDF+V0//EN. One lesson may be that our legislators should monitor these developments with an eye toward incorporating the best practices here at home.

Robo-Cops and Automated Decisions: AI and Due Process Rights

The second known unknown is the extent to which civil and criminal courts and administrative bodies will rely on AI technologies in decision-making processes, and what that means for those who challenge those determinations. We know technology can aid in weighing data, but what about challenges to those results? Who is ultimately responsible when a decision is rendered relying on AI technology?

If law enforcement or a prosecutor relies on AI technology that determines a suspect’s criminal motive based on data points from the suspect’s background, who is responsible for the charging decision: the prosecutor, the inventor of the technology, or the holder of the rights to the technology itself? These weighty questions are in search of answers.

At the recent Ninth Circuit Judicial Conference, Justice Cuéllar referenced robots circling the conference room and contemplated the hypothetical scenario where a robot conducts a search and seizure, and a judge is required to determine whether the stop was reasonable under the Fourth Amendment. What would guide the judge’s decision-making? As Justice Cuéllar remarked, we humans tend to give AI systems more deference than we do to human-created analysis.

Privacy and civil rights advocates have raised legitimate concerns about deference to automated processes, particularly where due process rights are implicated. Recall the recent debates over the constitutionality of municipal traffic cameras and automated traffic tickets here in California: Does issuing a ticket this way violate the bedrock principle that one is presumed innocent until proven guilty? Does expanded use of AI and its analytical capabilities necessarily encroach upon precious civil liberties?

The Fourth Amendment to the Constitution, ratified in 1791, protects against unreasonable searches and seizures and has been interpreted to guarantee a reasonable expectation of privacy. But as we surrender more of our personal information (through smart phone apps, online forms, and social media consumption) in exchange for greater reliance on AI, what exactly is a reasonable expectation of privacy? Is our technology outpacing our constitutional rights? If police departments increasingly rely on AI to guide police practices, how will defense counsel, prosecutors, and the courts weigh these decisions? No one knows, but the early adopters of AI technology may reap the rewards, leaving everyone else playing catch-up. But can the legal profession countenance such a scenario when fundamental rights are at stake?

One solution may be the adoption and incorporation, on an industrywide or statewide basis, of uniform guidelines for AI technologies, and a uniform method for challenging AI-based decisions. The difficulty there will be to ensure that such guidelines do not inappropriately intrude on or hinder the creative genius birthing AI technologies. Uniformity and predictability are essential characteristics of a free and fair judicial system, and the present absence of these standards for AI counsels in favor of some guidance.

E-Discovery: Promises and Pitfalls

Third, while AI has been integrated into the civil litigation discovery process, the full extent of the integration and the attendant risks, benefits and costs are unknown.

It is no secret that the cost of litigation continues to soar, and much of that increase is driven by the expense of conducting discovery. When relevant information existed only in paper form, it was relatively easy to limit the universe of documents and things to review. Now that documents and things exist, to a great extent, in the digital world, it is much more difficult to draw such a clear line. Consequently, the time and expense that go into locating and combing through thousands of pages of electronic documents (and their encryptions), both for production and review, have increased substantially for lawyers and clients.

Cue AI.

Notwithstanding existential concerns, it is true that AI has been beneficial for clients and attorneys in the discovery stage. Countless tech firms now offer e-discovery software that does everything from deleting duplicate documents to flagging relevant documents and even offering automated case evaluations. Indeed, many law firms have already outsourced or automated their document review processes. See, e.g., Digital WarRoom Pro Software, http://www.digitalwarroom.com/products/digital-warroom-pro/. With increased sensitivity to litigation costs and an expanding universe of electronic data subject to discovery, AI has helped lawyers become more efficient and effective in the document-review process.
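For illustration only, the toy sketch below (in Python) mimics two of the simpler tasks such software automates at scale: removing exact duplicate documents and flagging documents that hit on search terms. The document set, search terms, and function names are hypothetical, and commercial platforms use far more sophisticated techniques, such as predictive coding trained on attorneys’ review decisions.

```python
# Illustrative sketch only: toy de-duplication and keyword flagging of a
# hypothetical document set; not any vendor's actual product.
import hashlib


def deduplicate(documents):
    """Drop exact duplicates by hashing each document's text."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


def flag_relevant(documents, keywords):
    """Flag documents containing any of the (hypothetical) search terms."""
    flagged = []
    for doc in documents:
        text = doc["text"].lower()
        if any(term.lower() in text for term in keywords):
            flagged.append(doc)
    return flagged


if __name__ == "__main__":
    corpus = [
        {"id": 1, "text": "Q3 forecast attached. Please review before the board call."},
        {"id": 2, "text": "Q3 forecast attached. Please review before the board call."},
        {"id": 3, "text": "Lunch on Friday?"},
    ]
    unique_docs = deduplicate(corpus)
    hits = flag_relevant(unique_docs, keywords=["forecast", "board"])
    print(f"{len(unique_docs)} unique documents, {len(hits)} flagged for review")
```

Even this toy version suggests why the technology appeals to cost-conscious clients: the winnowing happens before any attorney reads a page.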

However, depending on technology for this critical step in the litigation process requires having essentially unfettered confidence in the reliability and infallibility of the technology, and software is not infallible. Additionally, while the benefits of AI are, for now, complementary to a lawyer’s skills, it is possible, indeed probable, that more and more tasks that traditionally required a lawyer’s skills and wisdom, such as legal research and the drafting of analytical memoranda, will be handled by AI. That prospect raises the question of who is ultimately responsible for professional negligence: the lawyer who employed the AI or the designer of the AI technology?

Some law firms, including the authors’, have begun grappling with emerging technology like AI and have employed AI-based technologies in electronic discovery. Other law firms have taken different steps. According to a recent New York Times article, the Dentons law firm in 2015 started an initiative called “Nextlaw Labs” to monitor and invest in legal technology, while Baker McKenzie formed a committee to monitor legal technologies and chart the firm’s strategy. Steve Lohr, A.I. Is Doing Legal Work. But It Won’t Replace Lawyers, Yet, N.Y. Times (Mar. 19, 2017), https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html. The article notes that emerging technologies can yield efficiency gains in tasks such as discovery (read: document review), legal research, and even contract review. But with the technological breakthroughs come attendant costs and risks, including new malpractice concerns.

For their part, judges seem to be less than confident in the use of technology in discovery. Exterro, an e-discovery and litigation software company, recently conducted a survey of twenty-two federal judges regarding discovery and technology. Exterro, 2nd Annual Federal Judges Survey, Views From Both Sides of the Bench, https://www.exterro.com/judges-survey-16/. The judges surveyed responded that technology, from their perspective, has added just one more thing for lawyers to fight over.

Apparently, reasonable minds can, and already do, disagree about the benefits of AI to discovery. Fortunately, in light of the speed of technological advancements, we may not need to wait very long to see which side is right.

Data Discrimination: AI and Discrimination Protections

The final known unknown, and perhaps the most concerning, is the possibility of “data discrimination” through the reliance on impermissible types of personal data in legal and commercial decision-making.

As more personal information becomes available, will it become possible, and perhaps acceptable, to discriminate against people based on data they voluntarily surrender? Companies use AI-based credit scoring to decide which borrowers are creditworthy, and the insurance industry is heavily data-driven. While Big Data, the cousin to AI, helps businesses become better marketers and service providers, it can also allow them to discriminate.

Outside of the financial and insurance contexts, automated data decisions and processes are being subjected to legal scrutiny in actions involving communications transactions, workplace injuries, and other personal injuries. A federal district court in Missouri presiding over a consolidated class action involving the infamous adultery and dating website, Ashley Madison, held that the website’s use of computing technology to simulate a website user’s interaction with a real person could give rise to liability for fraud. In re Ashley Madison Customer Data Sec. Breach Litig., 148 F. Supp. 3d 1378, 1380 (J.P.M.L. 2015). Some of the plaintiffs alleged that the website was engaging in fraudulent practices by having software respond to male user inquiries under the guise of being female users.

Other disputes at the intersection of AI and the law include claims brought by workers injured by robots on the job and suits concerning the safety of surgical robots. See O’Brien v. Intuitive Surgical, Inc., No. 10 C 3005, 2011 WL 3040479, at *1 (N.D. Ill. July 25, 2011); Greenway v. St. Joseph’s Hosp., No. 03-CA-011667 (Fla. Cir. Ct. 2003). And while a Third Circuit panel held in 1984 that “robots cannot be sued” (see United States v. Athlone Indus., Inc., 746 F.2d 977, 979 (3d Cir. 1984)), times have changed since 1984, and this holding may be ripe for fresh consideration in light of the technological advances we rely on to travel, communicate, and (apparently) online date.

As Justice Cuéllar also pointed out, consumers unflinchingly volunteer all manner of private and personal information in exchange for better user experiences. For example, many web browsers will save your credit card or bank account information and auto-populate payment fields to speed up online transaction processing. But what if all this data volunteering and data insight make it more difficult for some people to get the information or resources they need? That was exactly the question posed by a 2016 Federal Trade Commission report, “Big Data: A Tool for Inclusion or Exclusion?” https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf.

There are consumer protection laws in place, such as the Fair Credit Reporting Act and the Federal Trade Commission Act, that are applicable to Big Data analysis. Companies and attorneys need to keep these laws in mind to ensure they are compliant and do not encourage the collection and use of data in ways that infringe on privacy or antidiscrimination protections.

In addition, companies should take the following steps to ensure they are compliant with the relevant laws: check their data sets to ensure they are a representative sample of consumers; ensure that their algorithms prioritize fairness to the extent the law requires; take measures to weed out implicit or explicit biases in the data; and check their Big Data outcomes against traditionally applied statistical practices. Such practices will not guarantee fairness (none can), but they will guard against the worst outcomes and abuses of clear legal authorities.
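As a minimal, purely illustrative sketch of that last step, the Python snippet below compares favorable-outcome rates across groups. The data, group labels, and 0.8 threshold are invented for illustration, with the threshold loosely echoing the “four-fifths” rule of thumb long used in disparate-impact analysis; real compliance reviews are far more involved.

```python
# Illustrative sketch only: a simple statistical cross-check of automated
# outcomes, comparing favorable-outcome rates across hypothetical groups.
from collections import defaultdict


def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}


def flag_disparities(rates, threshold=0.8):
    """Return groups whose rate falls below `threshold` times the best group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()
            if rate / best < threshold}


if __name__ == "__main__":
    # Hypothetical decision log: (group label, favorable outcome?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(sample)
    print(rates)                    # per-group approval rates
    print(flag_disparities(rates))  # groups whose ratio falls below the threshold
```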

Conclusion

AI is here, and while its possibilities are inviting, its risks are patent, and the legal questions it raises at every turn may require the active involvement of public and private sector stakeholders to sort out the thorny issues in play. AI offers exciting opportunities and great challenges for legal practitioners, and the constant evolution of the technology will require the law, which is often said to move at a glacial pace, to play catch-up to ensure that laws regulating privacy, criminal and civil rights, and consumer access are enforced and respected. While AI portends the thorniest of “known unknowns,” the law has dealt with rapid technological transformations before, and AI may require and inspire some of the most timely and innovative legal work to date.

James S. Azadian is a shareholder of Enterprise Counsel Group ALC in Irvine and serves as the chair of the firm’s Appellate, Writs, and Constitutional Law Practice Group. He can be reached at jazadian@enterprisecounsel.com. Garrett M. Fahy is an associate at Enterprise Counsel Group ALC. His practice focuses on business litigation, appeals, and election law disputes in California and federal forums. He can be reached at gfahy@enterprisecounsel.com.
