Common Law for the Age of AI
Friday, April 5, 2019
8:00–8:45 AM, Jerome Greene Hall Room 105
Arrival & Breakfast
8:45 AM, Jerome Greene Hall Room 105
Gillian Lester, Dean, Columbia Law School
Jeffrey S. Stein, Symposium & Special Projects Editor, Columbia Law Review
9:00–10:20 AM, Jerome Greene Hall Room 105
Katherine J. Strandburg, NYU
Ashley Deeks, UVA
David Madigan, Columbia
Tali Farhadian Weinstein, Brooklyn DA
Jeannie Suk Gersen, Harvard
10:30–11:50 AM, Jerome Greene Hall Room 105
Responsibility & Liability
Jeanne C. Fromer, NYU
Mala Chatterjee, NYU
Frank Pasquale, Maryland
Colleen Chien, Santa Clara
Olga Russakovsky, Princeton
Bert I. Huang, Columbia
12:00–1:00 PM, Jerome Greene Hall, Lenfest Café
1:10–1:50 PM, Jerome Greene Hall Room 105
The Honorable Mariano-Florentino Cuéllar, California Supreme Court Justice
Opening remarks by Eric Talley, Columbia
2:00–3:20 PM, Jerome Greene Hall Room 105
Public & Private
Kate Crawford, AI Now Institute
Jason M. Schultz, NYU
C. Scott Hemphill, NYU
Alexandra Chouldechova, Carnegie Mellon
Kareem Yusuf, IBM Watson IoT
Eric Talley, Columbia
3:20–3:40 PM, Jerome Greene Hall, Lobby
3:40–4:40 PM, Jerome Greene Hall Room 105
Tim Wu, Columbia
Rebecca Crootof, Yale
Olivier Sylvain, Fordham
Bert I. Huang, Columbia
4:45–5:00 PM, Jerome Greene Hall Room 105
Mary Marshall, Editor-in-Chief, Columbia Law Review
5:15 PM, Jerome Greene Hall, Lenfest Café
Minds, Machines, and the Law: The Case of Volition in Copyright Law
Mala Chatterjee & Jeanne C. Fromer
With the increasing prevalence of ever more sophisticated technology—which permits machines to stand in for or augment humans in a growing number of contexts—the questions of whether, when, and how the so-called actions of machines can and should result in legal liability will also become more practically pressing. Although the law has yet to fully grapple with whether machines are (or can be) sufficiently human-like to be the subjects of law, philosophers have long been contemplating such questions. Philosophers have considered, for instance, whether human cognition is fundamentally computation—such that it is, in principle, possible for future artificial intelligences (AI) to possess the properties of human minds, including consciousness, semantic understanding, intention, and even morality—or whether humans and machines are instead fundamentally different, no matter how sophisticated AI becomes. It is thus unsurprising that, in thinking through how the future of the law should accommodate and govern an AI-filled world, the lessons and frameworks to be gleaned from these philosophical discussions will have undeniable relevance.
One important set of questions that the law will inevitably need to confront is whether machines can have mental states, or—at least—something sufficiently like mental states for the purposes of the law. This is because many areas of law have explicit or implicit mental-state requirements for the incurrence of legal liability. For instance, consider assessing mens rea and actus reus in criminal law; whether there is agreement requisite to form a contract; whether a tort counts as intentional or reckless rather than negligent; and any other context in which states like intent and volition underpin liability. Whether machines can incur legal liability thus turns on whether a machine operates with the requisite mental state.
Consider copyright’s volitional-act requirement for infringement. Given the long history of mechanical copying, courts have already faced the question of whether machine copying can qualify as volitional; and they have often answered with a resounding, unconditional no. But this Essay seeks to challenge any generalization that machines cannot operate with a mental state in the eyes of the law. Taking lessons from philosophical thinking about minds and machines—in particular, the conceptual distinction between “conscious” and “functional” properties of the mind—this Essay uses copyright’s volitional-act requirement as a case study to demonstrate that certain legal mental-state requirements might seek only to track the functional properties of the states in question, ones that can certainly be possessed by machines.
AI Systems as State Actors
Kate Crawford & Jason M. Schultz
A substantial literature has developed around how courts can or should apply current legal doctrines, such as procedural due process and equal protection, directly to government actors when they deploy algorithmic systems in ways that implicate individual rights. But we have seen very little attention given to how courts should hold the vendors of these technologies accountable when their technology assists the government in illegal uses of AI. This is especially important given that governments are increasingly turning to third-party vendors to provide the “intelligence” behind these systems. As such, when challenged, many state governments have disclaimed any knowledge or ability to understand, explain, or remedy problems created by these systems.
In response to this gap, we propose that courts adopt a modified version of the “state actor” doctrine that applies to vendors who supply AI systems for government decisionmaking. Invoking the doctrine’s “public function” test, we argue that much like other private actors who perform traditional government functions at the behest of the state, vendors who provide AI systems to public agencies in ways that directly influence decisions should be treated as state actors for purposes of civil suits against them. We look at the legal theory and philosophy behind the application of state action as well as recent case law concerning private actors, including military contractors, private prison guards, and telecommunications providers. We also incorporate the findings from five recent case studies that we conducted during the summer and fall of 2018 concerning litigation involving algorithmic systems in Medicaid disability benefits, criminal risk assessments in sentencing, probabilistic DNA genotyping, and public employee termination. We argue that state action is an appropriate approach to AI vendor accountability for two reasons: (1) current common law and statutory approaches fail to adequately address the growing concerns about unfairness, bias, and disparate impact that these systems have demonstrated; and (2) from the viewpoint of causation and remedies, the state actor doctrine provides the appropriate scope, analysis, and precedent that modern courts will need to address civil rights concerns in the age of government use of AI.
The Judicial Demand for Explainable Artificial Intelligence
Ashley Deeks
A recurrent concern about machine learning algorithms is that they operate as “black boxes.” Because these algorithms repeatedly adjust the way that they weigh inputs to improve the accuracy of their predictions, it often is difficult to identify how and why the algorithms reach the outcomes they do. Yet humans—and the law—often desire or demand answers to the questions “Why?” and “How do you know?” One way to address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or predictions, an approach sometimes called “explainable AI” (xAI). Legal and computer science scholarship has identified various actors who should demand xAI. These include criminal defendants who receive long sentences based on opaque predictive algorithms, military commanders who are considering whether to deploy autonomous weapons, and doctors who worry about legal liability for using “black box” algorithms to make diagnoses. At the same time, there is a robust—but largely theoretical—debate about which algorithmic decisions require an explanation and which forms these explanations should take.
Although these conversations are critically important, they ignore a key set of actors who will interact with machine learning algorithms with increasing frequency and whose lifeblood is real-world controversies: judges. This Essay argues that judges will confront a variety of cases in which they should demand explanations for algorithmic decisions, recommendations, or predictions. If and as they demand these explanations, judges will play a seminal role in shaping the nature and form of xAI. Using the tools of the common law, courts can develop what xAI should mean in different legal contexts, including criminal, administrative, and tort cases. Further, there are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Moreover, courts are likely to stimulate—directly or indirectly—the production of different forms of xAI that are responsive to distinct legal settings and audiences. At a more theoretical level, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
Breaking Up Facebook
C. Scott Hemphill
Algorithmic development is marked by substantial barriers to entry, due to economies of scale in the data used as an input and the computing environment in which the work is conducted. Moreover, success in one data-intensive business is an important input in developing the next. A concern therefore arises that innovation will be left mainly in the hands of the largest incumbents. In this Essay, I consider what role antitrust law—as a form of federal common law developed by generalist courts—might have to play in fostering competition and innovation in this field.
One major limit on antitrust enforcement under current law is that courts are skeptical about forced sharing, such as mandated interoperability, that might facilitate new entry. Skepticism is deployed as a way to safeguard the incentive to innovate by protecting its fruits, and to avoid false condemnation of conduct that is actually procompetitive. The Federal Trade Commission’s inconclusive investigation of Google’s conduct related to shopping and local search algorithms provides an apt illustration of these concerns.
At the same time, antitrust courts possess substantial authority to protect competition in this field. To explore this point, this Essay focuses on a pressing issue in antitrust policy: whether and when a court may force a data-rich incumbent to divest a previously acquired rival. The Essay’s primary example is Facebook’s acquisitions of Instagram and WhatsApp. I consider whether these mergers can and should be unwound under existing antitrust law.
Existing antitrust law provides a robust basis for undoing a consummated merger. In support of this conclusion, I separately examine two approaches: ordinary merger law, and the law of monopolization that has been deployed in the past to break up dominant firms, such as Standard Oil and AT&T. As this Essay explains, the latter approach has been neglected as a tool of merger enforcement. Along the way, I consider two objections: (1) that mergers, once consummated, eventually become “final” and hence beyond the purview of antitrust law; and (2) that antitrust enforcement is toothless as applied to information platforms because it is single-mindedly focused on consumer prices, and hence unable to make sense of harms to innovation or market settings in which a consumer pays with attention rather than cash. Both objections underestimate the common law capacity of courts to develop appropriate doctrine when warranted by the facts.
Data-Driven Duties in the Development of Artificial Intelligence
Frank Pasquale
Corporations will increasingly attempt to substitute artificial intelligence (AI) and robotics for human labor. This evolution will create novel situations for tort law to address. However, tort will only be one of several types of law at play in the deployment of AI. Regulators will try to forestall problems by setting standards, and corporate lawyers will attempt to deflect liability via contractual disclaimers and exculpatory clauses. The interplay of tort, contract, and regulation will not just allocate responsibility ex post, spreading the costs of accidents among those developing and deploying AI, their insurers, and those they harm. This matrix of legal rules will also deeply influence the development of AI, including the industrial organization of firms and capital’s and labor’s relative shares of productivity and knowledge gains.
This Article begins by describing torts that may arise thanks to the deployment of AI and robotics (and some that have already arisen). The focus is on one particular type of failing: the use of inaccurate or inappropriate data in training sets used for machine learning. Inspired by common analogies of algorithms to recipes, I explore the degree to which patterns of liability for spoiled or poisonous food may also inform our eventual treatment of inaccurate or inappropriate data in AI systems. Health privacy regulation also provides important lessons for assuring the appropriateness and quality of data used in patient care, randomized trials, and observational research. The history of both health data and food regulation is instructive: Egregious failures not only gave rise to tort liability but also catalyzed regulatory commitments to prevent the problems that sparked that liability, which in turn helped create new standards of care.
It is wise to preserve the complementarity of tort law and regulation rather than opting to radically diminish or increase the role of either of these modalities of social order (as preemption, sweeping exculpatory clauses, or deregulation might do). AI law and policy should create ongoing incentives for a diverse and inclusive set of individuals to both understand and control the development and application of automation. Without imposing robust legal duties on the developers of AI, there is little chance of ensuring accountable technological development in this field. By focusing on the fundamental inputs to AI—the data used to promote machine learning—both judges and policymakers can channel the development of AI to respect, rather than evade, core legal values of fairness, due process, and equal treatment.
Justifying Machine Learning Based Decisions: Insights from Legal Reason-Giving
Katherine J. Strandburg
Automated algorithms, often derived from large troves of personal information through machine learning processes, are increasingly deployed by both public and private actors to guide—and sometimes make—decisions about individuals. The rapid adoption of these automated decisionmaking tools has stirred both hopes that they will improve decisionmaking by avoiding human biases and cognitive fallacies, and concerns that they may undermine important social values. One important set of concerns revolves around the extent to which the bases for decisions employing these tools can be understood and interrogated by the affected individuals, the decisionmakers employing the tools, and the public at large. The European Union’s newly enforceable General Data Protection Regulation arguably confers some “right to explanation,” giving some urgency to the debate about what it means to “explain” an algorithm-based decision.
Human decisionmaking has been used on both sides of this debate primarily as a foil. Skeptics argue that automated algorithms deprive us of the explanations required of human decisionmakers, while algorithm enthusiasts counter that human decisionmakers themselves are opaque “black boxes.” This Article accepts the enthusiasts’ critique, while arguing that it largely misses the point. Human decisionmakers’ true rationales for particular decisions may indeed be hidden or simply unknowable, as well as infected with explicit or implicit biases, cognitive limitations, and mistakes. It is precisely because of these issues that reason-giving is such an important part of government decisionmaking. While framed in various ways, calls for explanation in the context of automated decisionmaking algorithms arise from the same sort of healthy skepticism—directed at both the automated algorithms themselves and the human beings who create and deploy them. Legal explanation requirements demand justifications, rather than descriptions of the mental processes going on in decisionmakers’ minds. Justification incorporates a number of important values, including accuracy, fairness, and legitimacy. The intended functions of explanation in legal theory and practice provide a useful prism through which to consider the call for explainable algorithms. This Article considers whether and how these functions are relevant to decisions involving automated algorithms; whether “explanation” in some form remains an appropriate mechanism for performing those functions; and what alternative mechanisms might be available if “explanation” is inappropriate or infeasible.
Will Artificial Intelligence Eat the Common Law?
Tim Wu
Software is eating the world; will artificial intelligence eat the common law? That is the question examined by this Essay. The answer is yes, and for reasons discussed below, the time is coming—it may already be here—when we will need a principle to keep some categories of decisions within the realm of human judgment.
This Essay dwells on one case study—hate speech—an area where, in fact, we have already witnessed a displacement not just of judge-made law but also of much authoritative human judgment, at least on a first pass. The question of how entities like Facebook and Twitter regulate hate speech using software is at the frontier of the jurisprudential questions created in this area, as well as the struggle to find appropriate means of human oversight over important decisions.