“CYBORG JUSTICE” AND THE RISK OF TECHNOLOGICAL–LEGAL LOCK-IN

Introduction

An Estonian team is designing an artificially intelligent (AI) agent to adjudicate claims of €7,000 or less, with the aim of clearing case backlog. 1 Eric Niiler, Can AI Be a Fair Judge in Court? Estonia Thinks So, WIRED (Mar. 25, 2019), https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/ [https://perma.cc/Z8DH-APBL]. The pilot version will focus on contract disputes: An algorithm will analyze uploaded documents to reach an initial decision, which can then be appealed to a human judge. 2 Id. This is but one of many examples of how AI is increasingly being incorporated into adjudicatory or adjudicatory-adjacent processes, ranging from the relatively minor to the most significant determinations. 3 “Intelligence” is a slippery term. See Shane Legg & Marcus Hutter, A Collection of Definitions of Intelligence 2 (2007), https://arxiv.org/pdf/0706.3639.pdf [https://perma.cc/K54T-DE34] (surveying various definitions of “intelligence”). This Piece uses the term “artificial intelligence” broadly, encompassing varied systems and computer programs that can assist or replace human decisionmakers. Cf. Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1305, 1307 (2019) [hereinafter Surden, Artificial Intelligence] (defining AI as “using technology to automate tasks that ‘normally require human intelligence’”).
For example, DoNotPay, an AI-based chatbot originally designed to assist with parking ticket appeals, now offers a panoply of legal services, including helping individuals report suspected discrimination, request maternity leave, and seek compensation for transit delays. 4 Sebastian Anthony, Chatbot Lawyer, Which Contested £7.2M in Parking Tickets, Now Offers Legal Help for 1,000+ Topics, Ars Technica (July 14, 2017), https://arstechnica.com/tech-policy/2017/07/donotpay-chatbot-lawyer-homelessness/ [https://perma.cc/KWC5-V6XD]. Companies rely on automated processes to resolve customer complaints, 5 Rory Van Loo, The Corporation as Courthouse, 33 Yale J. on Reg. 547, 564–66 (2016) [hereinafter Van Loo, The Corporation as Courthouse]. cities depend on automated systems to generate citations for minor traffic violations, 6 Lawrence B. Solum, Artificially Intelligent Law, BioLaw J., no. 1, 2019, at 53, 58, http://rivista.biodiritto.org/ojs/index.php?journal=biolaw&page=article&op=view&path%5B%5D=351&path%5B%5D=317 (on file with the Columbia Law Review) [hereinafter Solum, Artificially Intelligent Law]. police departments engage data mining systems to predict and investigate crime, 7 Andrew D. Selbst, Disparate Impact in Big Data Policing, 52 Ga. L. Rev. 109, 113–14 (2017). and domestic judges employ algorithms throughout the criminal justice process. 8 See, e.g., John Logan Koepke & David G. Robinson, Danger Ahead: Risk Assessment and the Future of Bail Reform, 93 Wash. L. Rev. 1725, 1729 (2018); Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. 1343, 1346–48 (2018). Meanwhile, militaries are incorporating increasingly autonomous systems in their targeting and detention decisionmaking structures. 9 Rebecca Crootof, The Killer Robots Are Here: Legal and Policy Implications, 36 Cardozo L. Rev. 1837, 1868–72 (2015); Ashley Deeks, Predicting Enemies, 104 Va. L. Rev. 1529, 1561–63 (2018).

Although AI is already of use to litigants, 10 See Anthony, supra note 4 (describing how an AI-driven chatbot provides free legal services). to legal practitioners, 11 See Surden, Artificial Intelligence, supra note 3, at 1329–31 (discussing the utility and limitations of AI-assisted document review in discovery). and in predicting legal outcomes, 12 Id. at 1331–32. we must be cautious and deliberate when incorporating AI into the common law judicial process. 13 There are similar and distinct reasons to be concerned about how algorithmic decisionmaking may affect government decisionmaking more broadly. See, e.g., Hannah Bloch-Wehba, Access to Algorithms, 88 Fordham L. Rev. (forthcoming 2019) (manuscript at 6–9), https://ssrn.com/abstract=3355776 (on file with the Columbia Law Review) (cataloging proprietary algorithmic governance programs and discussing the potential utility of transparency mechanisms); Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1252–55 (2008) (considering how to maintain due process protections in an increasingly automated administrative state). Relatedly, concerns regarding these governmental and private uses of algorithmic decisionmaking are prompting calls for regulating it writ large. E.g., Margot E. Kaminski, Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. (forthcoming 2019) (manuscript at 104–10), https://ssrn.com/abstract_id=3351404 (on file with the Columbia Law Review) (proposing a hybrid model of individual process rights and systemic regulation through collaborative governance of algorithmic decisionmaking); Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085, 1089–90 (2018) (cataloging concerns with algorithmic decisionmaking). While relevant to those discussions, this Piece focuses on issues raised by the use of AI in U.S. common law adjudication. As will be discussed in Part I, human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing often required of human judges. Nor will “cyborg justice” 14 “Cyborgs” are cybernetic organisms, beings with both organic and biomechatronic body parts. “Cyborg justice” systems are less colorfully described as “hybrid human–AI judicial systems.” They can be conceptualized generally as augmented decisionmaking systems, in which an AI assists a human judge as a kind of career law clerk, Eugene Volokh, Chief Justice Robots, 68 Duke L.J. 1135, 1141, 1148–49 (2019), or as a tiered system, in which AIs address low-level complaints and cases, which can then be appealed to a human reviewer, Niiler, supra note 1. Of course, these structures could be combined: An AI could make an initial assessment, which could then be appealed to an AI-assisted human decisionmaker. Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242, 282–85 (2019), https://www-cdn.law.stanford.edu/wp-content/uploads/2019/08/Re-Solow-Niederman_20190808.pdf [https://perma.cc/3SKY-ULYK]. —hybrid human–AI judicial systems that would attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. 15 Many scholars are considering the implications of human–machine teaming in the context of adjudicatory decisionmaking.
See, e.g., Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, 87 Geo. Wash. L. Rev. 1, 46–49 (2019) (cautioning against AI supplanting human decisionmakers); Re & Solow-Niederman, supra note 14, at 262–78 (“AI adjudication will raise at least four kinds of concern: incomprehensibility, datafication, disillusionment, and alienation. Each concern relates to a distinctive aspect of human adjudication—namely, understanding, adaptation, trust, and participation.”); Volokh, supra note 14, at 1161–77 (responding to theoretical and practical objections to the use of AI judges); Tim Wu, Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems, 119 Colum. L. Rev. 2001, 2001–05 (2019) (arguing that AI will be incorporated into, rather than replace, U.S. adjudicatory structures); Kirsten Martin & Ari Ezra Waldman, Privacy and the Legitimacy of Automated Decision-Making 23 (unpublished manuscript) (on file with the Columbia Law Review) (finding empirically that “including either notification or human oversight removes the legitimacy penalty of automation”); cf. Paul Scharre, Centaur Warfighting: The False Choice of Humans vs. Automation, 30 Temp. Int’l & Comp. L.J. 151, 151 (2016) (arguing that “[h]ybrid human-machine cognitive architectures will be able to leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence”); Andrew D. Selbst, Negligence and AI’s Human Users, 100 B.U. L. Rev. (forthcoming 2019) (manuscript at 44–45) [hereinafter Selbst, Negligence], https://ssrn.com/abstract=3350508 (on file with the Columbia Law Review) (highlighting that, while AI decision assistants are “sold as a remedy to the weakness of human decisionmaking,” incorporating AI “does not actually solve bounded rationality; rather, it transforms the problem . . . into an inability to completely oversee the decision mechanism”). Part II notes the benefits of such systems and outlines how teaming may create new overtrust, undertrust, and interface design problems, as well as second-order, structural side effects. Part III explores one such side effect of hybrid human–AI judicial systems, which I term “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. By augmenting the common law’s extant conservative bent, hybrid human–AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.

Cyborg justice systems are proliferating, but their structure has not yet stabilized. We now have a bounded opportunity to design systems that realize the benefits and mitigate the issues associated with incorporating AI into the common law adjudicatory process. 16 Cf. Gaia Bernstein, When New Technologies Are Still New: Windows of Opportunity for Privacy Protection, 51 Vill. L. Rev. 921, 921 (2006) (noting that the possibility of changing a technological design diminishes over time, due both to closure and path dependence); Meg Leta Jones, Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw, 2018 U. Ill. J.L. Tech. & Pol’y 249, 281 (discussing how the design and uses of new technologies are constructed differently in different societies).