WILL ARTIFICIAL INTELLIGENCE EAT THE LAW? THE RISE OF HYBRID SOCIAL-ORDERING SYSTEMS

Software has partially or fully displaced many activities once performed by humans, such as catching speeders or flying airplanes, and has proven able to surpass humans in certain contests, like chess and Go. What are the prospects for the displacement of human courts as the centerpiece of legal decisionmaking? Based on a case study of hate speech control on major tech platforms, particularly Twitter and Facebook, this Essay suggests that the displacement of human courts remains a distant prospect, that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well.

Introduction

Many of the developments that go under the banner of artificial intelligence and that matter to the legal system are not so much new means of breaking the law as new means of bypassing it as society’s mechanism for enforcing rules and resolving disputes. 1 In this Essay, the broader meaning of “artificial intelligence” is used—namely, a computer system that is “able to perform tasks normally requiring human intelligence,” such as decisionmaking. Artificial Intelligence, Lexico, https://www.lexico.com/en/definition/artificial_intelligence [https://perma.cc/86XR-2JZ8] (last visited July 31, 2019).
Hence a major challenge that courts and the legal system will face over the next few decades is not only the normal challenge posed by hard cases but also the more existential challenge of supersession. 2 A small legal literature on these problems is emerging. See, e.g., Michael A. Livermore, Rule by Rules, in Computational Legal Studies: The Promise and Challenge of Data-Driven Legal Research (Ryan Whalen ed.) (forthcoming 2019) (manuscript at 1–2), https://papers.ssrn.com/abstract=3387701 (on file with the Columbia Law Review); Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242, 247–62 (2019); Eugene Volokh, Chief Justice Robots, 68 Duke L.J. 1135, 1147–48 (2019).

Here are a few examples. The control of forbidden speech in major fora, once the domain of law and courts, has been moving to algorithmic judgment in the first instance. 3 See infra Part II. Speeding is widely detected and punished by software. 4 See Jeffrey A. Parness, Beyond Red Light Enforcement Against the Guilty but Innocent: Local Regulations of Secondary Culprits, 47 Willamette L. Rev. 259, 259 (2011) (“Automated traffic enforcement schemes, employing speed, and red light cameras, are increasingly used by local governments in the United States.” (footnote omitted)). Much enforcement of the intellectual property laws is already automated through encryption, copy protection, and automated takedowns. 5 See infra Part I. Public prices once set by agencies (like taxi prices) are often effectively supplanted by prices set by algorithm. 6 See Jessica Leber, The Secrets of Uber’s Mysterious Surge Pricing Algorithm, Revealed, Fast Company (Oct. 29, 2015), https://www.fastcompany.com/3052703/the-secrets-of-ubers-mysterious-surge-pricing-algorithm-revealed [https://perma.cc/H7MB-SC8T]. Blockchain agreements are beginning to offer an alternative mechanism to contract law for the forging of enforceable agreements. 7 See Eric Talley & Tim Wu, What Is Blockchain Good for? 5–9 (Feb. 28, 2018) (unpublished manuscript) (on file with the Columbia Law Review). Software already plays a role in bail determination and sentencing, 8 See Algorithms in the Criminal Justice System, Elec. Privacy Info. Ctr., https://epic.org/algorithmic-transparency/crim-justice/ [https://perma.cc/KY2D-NQ2J] (last visited Apr. 23, 2019). and some are asking whether software will replace lawyers for writing briefs, and perhaps even replace judges. 9 See Volokh, supra note 2, at 1144–48, 1156–61.

Are human courts just hanging on for a few decades until the software gets better? Some might think so, yet at many points in Anglo-American legal history, courts have been thought obsolete, only to maintain their central role. There are, it turns out, advantages to adjudication as a form of social ordering that are difficult to replicate by any known means. 10 See Lon L. Fuller, The Forms and Limits of Adjudication, 92 Harv. L. Rev. 353, 357 (1978). This Essay predicts that, even in areas in which software has begun to govern, human courts 11 This Essay uses “courts” to refer to any adjudicative body, public or private, that resolves a dispute after hearing reasoned argument and explains the basis of its decision. will persist or necessarily be reinvented. It predicts, however, that human–machine hybrids will be the first replacement for human-only legal systems, and suggests that, if done right, there lies real promise in that approach. The case study of content control on online platforms and Facebook’s review board is used to support these descriptive and normative claims.

The prediction that courts won’t wholly disappear may seem an easy one, but what’s more interesting is to ask why, when software is “eating” so many other areas of human endeavor. Compared with the legal system, software has enormous advantages of scale and efficacy of enforcement. It might tirelessly handle billions if not trillions of decisions in the time it takes a human court to decide a single case. And even more importantly, the design of software can function as an ex ante means of ordering that does not suffer the imperfections of law enforcement. 12 Lawrence Lessig, Code: Version 2.0, at 124–25 (2006).

But human courts have their own advantages. One set of advantages, more obvious if perhaps more fragile, is related to procedural fairness. As between a decision made via software and court adjudication, the latter, even if delivering the same results, may yield deeper acceptance and greater public satisfaction. 13 See generally E. Allan Lind & Tom R. Tyler, The Social Psychology of Procedural Justice (Melvin J. Lerner ed., 1988) (providing a classic overview of how procedural fairness considerations, potentially ignored by software, lead the public to accept the decisions of human courts). In the future, the very fact of human decision—especially when the stakes are high—may become a mark of fairness. 14 See Aziz Z. Huq, A Right to a Human Decision, 105 Va. L. Rev. (forthcoming 2020) (manuscript at 21–22), https://ssrn.com/abstract=3382521 (on file with the Columbia Law Review) (explaining that concerns about transparency have led some to demand human decisions). That said, society has gotten used to software’s replacement of humans in other areas, such as the booking of travel or the buying and selling of stocks, so this advantage may be fragile.

A second, arguably more lasting advantage lies in human adjudication itself and its facility for “hard cases,” which arise even in rule-based systems. Most systems of social ordering consist of rules, and a decisional system that was merely about obeying rules might be replaced by software quite easily. But real systems of human ordering, even those based on rules, aren’t like that. 15 This refers to a caricatured version of H.L.A. Hart’s view of what law is. See H.L.A. Hart, The Concept of Law 2–6 (Peter Cane, Tony Honoré & Jane Stapleton eds., 2d ed. 1994). Instead, disputes tend to consist of both “easy cases”—those covered by settled rules—and the aforementioned “hard cases”—disputes in which the boundaries of the rules become unclear, the rules contradict each other, or enforcement of the rules implicates other principles. 16 Cf. Ronald Dworkin, Hard Cases, in Taking Rights Seriously 81, 81 (1978) (stating that when “hard cases” fall on the edge of clear rules, judges have the discretion to decide the case either way). There is often a subtle difference between the written rules and the “real rules,” as Karl N. Llewellyn put it. 17 Karl N. Llewellyn, The Theory of Rules 72–74 (Frederick Schauer ed., 2011). Hence, a software system instructed simply to “follow the rules” will at times produce dangerous or absurd results.

Justice Cardozo argued that the judicial process “in its highest reaches is not discovery, but creation . . . [by] forces . . . seldom fully in consciousness.” 18 Benjamin N. Cardozo, The Nature of the Judicial Process 166–67 (1921). Better results in hard cases may, for a long time yet, depend on access to something that remains, for now, human—that something variously known as moral reasoning, a sensitivity to evolving norms, or a pragmatic assessment of what works. It is, in any case, best expressed by the idea of exercising “judgment.” And if the courts do indeed have a special sauce, that is it.

It is possible that even this special sauce will, in time, be replicated by software, yielding different questions. 19 See Huq, supra note 14, at 18; Volokh, supra note 2, at 1166–67, 1183–84. But as it stands, artificial intelligence (AI) systems have mainly succeeded in replicating human decisionmaking that involves following rules or pattern matching—chess and Jeopardy! are two examples. 20 See John Markoff, Computer Wins on ‘Jeopardy!’: Trivial, It’s Not, N.Y. Times (Feb. 16, 2011), https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html (on file with the Columbia Law Review). It would risk embarrassment to argue that machines will never be able to make or explain reasoned decisions in a legal context, but the challenges are not trivial or easily overcome. 21 See Drew McDermott, Why Ethics Is a High Hurdle for AI 2 (2008), http://www.cs.yale.edu/homes/dvm/papers/ethical-machine.pdf [https://perma.cc/PZ7Z-5QFN]; Adam Elkus, How to Be Good: Why You Can’t Teach Human Values to Artificial Intelligence, Slate (Apr. 20, 2016), https://slate.com/technology/2016/04/why-you-cant-teach-human-values-to-artificial-intelligence.html [https://perma.cc/4LR9-Q2GH]. But see IBM’s Project Debater, a program that presents arguments on one side of an issue and thereby might be thought to replicate aspects of lawyering. Project Debater, IBM, https://www.research.ibm.com/artificial-intelligence/project-debater/ [https://perma.cc/4A2W-XSB9] (last visited July 31, 2019). And even if software gets better at understanding the nuances of language, it may still face the deeper, jurisprudential challenges described here. That suggests that, for the foreseeable future, software systems that aim to replace systems of social ordering will succeed best as human–machine hybrids, mixing scale and efficacy with human adjudication for hard cases. They will be, in an older argot, “cyborg” systems of social ordering. 22 The Merriam-Webster dictionary defines “cyborg” as “a bionic human,” meaning a being composed of mixed human and machine elements, like the fictional character “Darth Vader” from the twentieth-century film series Star Wars. See Cyborg, Merriam-Webster Dictionary, https://www.merriam-webster.com/dictionary/cyborg [https://perma.cc/FD6A-UWZ2] (last visited Aug. 31, 2019).
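To make the hybrid architecture concrete, the following is a minimal sketch, written in Python, of how such a system might route decisions: software resolves high-confidence cases at scale and escalates low-confidence ones to a queue for human judgment. Every name in the sketch (classify, HybridSystem, CONFIDENCE_THRESHOLD) is hypothetical and purely illustrative; it describes no platform’s actual implementation.

# An illustrative "cyborg" decision system: software decides the easy
# cases at scale; low-confidence ("hard") cases go to human adjudicators.
# All names, rules, and thresholds here are hypothetical.

from dataclasses import dataclass, field
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.95  # below this, a case is treated as "hard"

@dataclass
class Case:
    case_id: int
    text: str

def classify(case: Case) -> Tuple[str, float]:
    """Stand-in for an automated classifier: returns a proposed outcome
    ("allow" or "remove") and a confidence score in [0, 1]."""
    # A real system would use a trained model; this toy rule merely
    # illustrates the routing logic.
    if "forbidden" in case.text:
        return "remove", 0.99
    if "borderline" in case.text:
        return "remove", 0.60  # low confidence: a hard case
    return "allow", 0.98

@dataclass
class HybridSystem:
    human_queue: List[Case] = field(default_factory=list)

    def decide(self, case: Case) -> str:
        outcome, confidence = classify(case)
        if confidence >= CONFIDENCE_THRESHOLD:
            return outcome  # easy case: decided by software at scale
        self.human_queue.append(case)  # hard case: reserved for human judgment
        return "escalated"

if __name__ == "__main__":
    system = HybridSystem()
    for case in [Case(1, "forbidden content"), Case(2, "borderline satire")]:
        print(case.case_id, system.decide(case))
    print("awaiting human review:", [c.case_id for c in system.human_queue])

The design choice worth noticing is that the threshold, not the classifier, encodes the system’s humility: lowering it routes more disputes to human adjudication, raising it cedes more ground to the machine.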

When we look around, it turns out that such hybrid systems are already common. Machines make the routine decisions while leaving the hard cases for humans. A good example is the flying of an airplane, which, measured by time at the controls, is now mostly done by computers, while sensitive, difficult, and emergency situations are left to a human pilot. 23 Reem Nasr, Autopilot: What the System Can and Can’t Do, CNBC (Mar. 27, 2015), https://www.cnbc.com/2015/03/26/autopilot-what-the-system-can-and-cant-do.html [https://perma.cc/Q7CT-GBLN].

The descriptive thesis of this Essay is supported by a case study of content control (the control of hate speech, obscenity, and other speech) on online platforms like Facebook and Twitter. Despite increasing automation, hard questions have continued to arise, leading the major platforms to develop deliberative bodies and systems of appeal, such as Facebook’s prototype content review board, designed to rule on the hardest of speech-related questions. In the control of online speech, and in the autopilot, we may be glimpsing, for better or worse, the future of social ordering in advanced societies.

While automated justice may not sound appealing on its face, there is some real promise in the machine–human hybrid systems of social ordering described here. At their best, they would combine the scale and effectiveness of software with the capacity of human courts to detect errors and humanize the operation of the legal system. As Lon Fuller put it, human courts are “a device which gives formal and institutional expression to the influence of reasoned argument in human affairs.” 24 Fuller, supra note 10, at 366. What might the court system look like without scaling problems—if routine decisions went to machines, so that the court’s own workload ceased to be a major factor in decisionmaking? To be sure, what could arise instead are inhumane, excessively rule-driven systems that include humans as mere tokens of legitimization, 25 This is a concern expressed in Re & Solow-Niederman, supra note 2, at 246–47. but hopefully we can do better than that.

This Essay provides advice both for designers of important AI decision systems and for government courts. As for the former, many AI systems outside of the law have aspired to a complete replacement of the underlying humans (the self-driving car, 26 Alex Davies, The WIRED Guide to Self-Driving Cars, WIRED (Dec. 13, 2018), https://www.wired.com/story/guide-self-driving-cars/ [https://perma.cc/7P3A-5NLE]. the chess-playing AI 27 Natasha Regan & Matthew Sadler, DeepMind’s Superhuman AI Is Rewriting How We Play Chess, WIRED (Feb. 3, 2019), https://www.wired.co.uk/article/deepmind-ai-chess [https://perma.cc/WF5Z-VEV7] ). But when it comes to systems that replace the law, designers should be thinking harder about how best to combine the strengths of humans and machines, attending to the distinctly human advantages: a sense of procedural fairness, explainability, and the capacity to decide hard cases. That suggests the deliberate preservation of mechanisms for resort to human adjudication (either public or private) as part of a long-term, sustainable system.

Human courts, meanwhile, should embark on a greater effort to automate the handling of routine cases and routine procedural matters, like the filing of motions. The use of intelligent software for matters like sentencing and bail—decisions with enormous impact on people’s lives—seems exactly backward. The automation of routine procedure might both produce a much faster legal system and free up the scarce resource of highly trained human judgment to adjudicate the hard cases, or to determine which cases are the hard ones. Anyone who has worked in the courts knows that the judiciary’s mental resources are squandered on thousands of routine matters; there is promise in a system that leaves judges to do what they do best: exercising judgment in the individual case and humanizing and improving the written rules. This also implies that judges should seek to cultivate their comparative advantage, the exercise of human judgment, instead of trying to mimic machines that follow rules. 28 Cf. Kathryn Judge, Judges and Judgment: In Praise of Instigators, 86 U. Chi. L. Rev. (forthcoming 2019) (manuscript at 1–2), https://ssrn.com/abstract=3333218 (on file with the Columbia Law Review) (describing Judge Richard Posner’s rejection of such machine-like jurisprudence).

Part I frames software decisionmaking among its competitors as a system of social ordering. Part II introduces the case study of Twitter’s and Facebook’s handling of hate speech, focusing on the evolution of online norms and the subsequent adoption of a hybrid human–software system to control speech. Part III assesses, from a normative perspective, the hybrid systems described in Part II.