RULEMAKING AND INSCRUTABLE AUTOMATED DECISION TOOLS
Katherine J. Strandburg*
Complex machine learning models derived from personal data are increasingly used in making decisions important to people’s lives. These automated decision tools are controversial, in part because their operation is difficult for humans to grasp or explain. While scholars and policymakers have begun grappling with these explainability concerns, the debate has focused on explanations to decision subjects. This Essay argues that explainability has equally important normative and practical ramifications for decision-system design. Automated decision tools are particularly attractive when decisionmaking responsibility is delegated and distributed across multiple actors to handle large numbers of cases. Such decision systems depend on explanatory flows among those responsible for setting goals, developing decision criteria, and applying those criteria to particular cases. Inscrutable automated decision tools can disrupt all of these flows.
This Essay focuses on explanation’s role in decision-criteria development, which it analogizes to rulemaking. It analyzes whether, and how, decision tool inscrutability undermines the traditional functions of explanation in rulemaking. It concludes that providing information about the many aspects of decision tool design, function, and use that can be explained can perform many of those traditional functions. Nonetheless, the technical inscrutability of machine learning models has significant ramifications for some decision contexts. Decision tool inscrutability makes it harder, for example, to assess whether decision criteria will generalize to unusual cases or new situations and heightens communication and coordination barriers between data scientists and subject matter experts. The Essay concludes with some suggested approaches for facilitating explanatory flows for decision-system design.
* Alfred Engelberg Professor of Law and Faculty Director of the Information Law Institute, New York University School of Law. I am grateful for excellent research assistance from Madeline Byrd and Thomas McBrien and for summer research funding from the Filomen D. Agostino and Max E. Greenberg Research Fund.
Machine learning models derived from large troves of personal data are increasingly used in making decisions important to people’s lives.
See Max Fisher & Amanda Taub, Is the Algorithmification of the Human Experience a Good Thing?, N.Y. Times: The Interpreter (Sept. 6, 2018), https://static.nytimes.com/email-content/INT_5362.html (on file with the Columbia Law Review).
These tools have stirred both hopes of improving decisionmaking by avoiding human shortcomings and concerns about their potential to amplify bias and undermine important social values.
Compare Susan Wharton Gates, Vanessa Gail Perry & Peter M. Zorn, Automated Underwriting in Mortgage Lending: Good News for the Underserved?, 13 Housing Pol’y Debate 369, 370 (2002) (finding that automated underwriting systems more accurately predict mortgage default than humans and result in higher approval rates for underserved applicants), and Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig & Sendhil Mullainathan, Human Decisions and Machine Predictions, 133 Q.J. Econ. 237, 268 (2017) (showing that applying machine learning algorithms to pretrial detention decisions could reduce the jailed population by forty-two percent without an increase in crime), with Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 9, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G [https://perma.cc/6SA7-R35L] (“Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”).
It is often hard for humans to grasp or explain how or why machine-learning-based models map input features to output predictions because they often combine large numbers of input features in complicated ways.
See, e.g., Finale Doshi-Velez & Mason Kortz, Accountability of AI Under the Law: The Role of Explanation 9–10 (2017), https://cyber.harvard.edu/publications/2017/11/AIExplanation [https://perma.cc/AQ5V-582E]; Jenna Burrell, How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms, Big Data & Soc’y, Jan.–June 2016, at 1, 3; Aaron M. Bornstein, Is Artificial Intelligence Permanently Inscrutable?, Nautilus (Sept. 1, 2016), http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable [https://perma.cc/B562-NCUN]; see also Info. Law Inst. at N.Y. Univ. Sch. of Law with Foster Provost, Krishna Gummadi, Anupam Datta, Enrico Bertini, Alexandra Chouldechova, Zachary Lipton & John Nay, Modes of Explanation in Machine Learning: What Is Possible and What Are the Tradeoffs?, in Algorithms and Explanations (Apr. 27, 2017), https://youtu.be/U0NsxZQTktk (on file with the Columbia Law Review).
This inherent inscrutability
See Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085, 1094 (2018) (defining “inscrutability” in this context as “a situation in which the rules that govern decision-making are so complex, numerous, and interdependent that they defy practical inspection and resist comprehension”).
has drawn the attention of data scientists,
See generally Finale Doshi-Velez & Been Kim, Towards a Rigorous Science of Interpretable Machine Learning, in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics 1 (2018) (cataloging various ways to define and evaluate interpretability in machine learning); Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter & Lalana Kagal, Explaining Explanations: An Overview of Interpretability of Machine Learning, in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics 80 (2018) (“While interpretability is a substantial first step, these mechanisms need to also be complete, with the capacity to defend their actions, provide relevant responses to questions, and be audited.”); Zachary C. Lipton, The Mythos of Model Interpretability, ACMQueue (July 17, 2018), https://queue.acm.org/detail.cfm?id=3241340 [https://perma.cc/CZH3-S9JG] (discussing “the feasibility and desirability of different notions of interpretability” in machine learning).
legal scholars,
See, e.g., Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For, 16 Duke L. & Tech. Rev. 18, 19–22 (2017); Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson & Harlan Yu, Accountable Algorithms, 165 U. Pa. L. Rev. 633, 636–42 (2017); Selbst & Barocas, supra note 4; Andrew D. Selbst, Response, A Mild Defense of Our New Machine Overlords, 70 Vand. L. Rev. En Banc 87, 88–89 (2017), https://cdn.vanderbilt.edu/vu-wp0/wp-content/uploads/sites/278/2017/05/23184939/A-Mild-Defense-of-Our-New-Machine-Overlords.pdf [https://perma.cc/MCW7-X89L]; Tal Z. Zarsky, Transparent Predictions, 2013 U. Ill. L. Rev. 1503, 1506–09; Robert H. Sloan & Richard Warner, When Is an Algorithm Transparent?: Predictive Analytics, Privacy, and Public Policy, IEEE Security & Privacy, May/June 2018, at 18, 18.
policymakers,
See, e.g., Algorithmic Accountability Act of 2019, S. 1108, 116th Cong. (2019).
and other scholars
See, e.g., Reuben Binns, Algorithmic Accountability and Public Reason, 31 Phil. & Tech. 543, 543–45 (2018); Tim Miller, Explanation in Artificial Intelligence: Insights from the Social Sciences, 267 Artificial Intelligence 1, 1–2 (2019); Brent Mittelstadt, Chris Russell & Sandra Wachter, Explaining Explanations in AI, in FAT*’19 at 279, 279 (2019); Deirdre K. Mulligan, Daniel N. Kluttz & Nitin Kohli, Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions, in After the Digital Tornado (Kevin Werbach ed., forthcoming 2020) (manuscript at 1–2), https://ssrn.com/abstract=3311894 (on file with the Columbia Law Review); Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 31 Harv. J.L. & Tech. 841, 842–44 (2018); Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi, The Ethics of Algorithms: Mapping the Debate, Big Data & Soc’y, July–Dec. 2016.
to the explainability problem.
This discourse has focused primarily on explanations provided to decision subjects. For example, the European Union’s General Data Protection Regulation (GDPR) arguably gives decision subjects a “right to explanation,”
The GDPR requires that data subjects be informed of “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Commission Regulation 2016/679, art. 13(2)(f), 2016 O.J. (L 119) 1.
It further provides a limited “right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Id. art. 22(1). For the debate about what the GDPR’s requirements entail, see, e.g., Bryan Casey, Ashkon Farhangi & Roland Vogl, Rethinking Explainable Machines: The GDPR’s “Right to Explanation” Debate and the Rise of Algorithmic Audits in Enterprise, 34 Berkeley Tech. L.J. 143, 153–68 (2019); Talia B. Gillis & Josh Simons, Explanation < Justification: GDPR and the Perils of Privacy, Pa. J.L. & Innovation (forthcoming 2019) (manuscript at 2–4), https://ssrn.com/abstract=3374668 (on file with the Columbia Law Review); Margot E. Kaminski, The Right to an Explanation, Explained, 34 Berkeley Tech. L.J. 189, 192–93 (2019); Andrew D. Selbst & Julia Powles, Meaningful Information and the Right to Explanation, 7 Int’l Data Privacy L. 233, 233–34 (2017); Michael Veale & Lilian Edwards, Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling, 34 Computer L. & Security Rev. 398, 398–99 (2018); Wachter et al., supra note 8, at 861–65; Andy Crabtree, Lachlan Urquhart & Jiahong Chen, Right to an Explanation Considered Harmful (Apr. 8, 2019) (unpublished manuscript), https://ssrn.com/abstract=3384790 (on file with the Columbia Law Review).
reflecting the common premise that “[t]o justify a decision-making procedure that involves or is constituted by a machine learning model, an individual subject to that decision-making procedure requires an explanation of how the machine learning model works.”
Gillis & Simons, supra note 9 (manuscript at 11) (emphasis added).
Some scholars have criticized this focus, emphasizing the importance of public accountability.
For the most part, this emphasis is recent. See, e.g., Doshi-Velez & Kortz, supra note 3, at 3–9 (describing the explanation system’s role in public accountability); Hannah Bloch-Wehba, Access to Algorithms, 88 Fordham L. Rev. (forthcoming 2019) (manuscript at 4–9), https://ssrn.com/abstract=3355776 (on file with the Columbia Law Review) (“These features . . . have prompted calls for new mechanisms of transparency and accountability in the age of algorithms.”); Robert Brauneis & Ellen P. Goodman, Algorithmic Transparency for the Smart City, 20 Yale J.L. & Tech. 103, 132 (2018) (“Such accountability requires not perfect transparency . . . but . . . meaningful transparency.”); Gillis & Simons, supra note 9 (manuscript at 11–12) (“Explanations of machine learning models are certainly not sufficient for many of the most important forms of justification in modern democracies . . . .”); Selbst & Barocas, supra note 4, at 1087 (“[F]aced with a world increasingly dominated by automated decision-making, advocates, policymakers, and legal scholars would call for machines that can explain themselves.”); Jennifer Cobbe, Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making, Legal Stud. (July 9, 2019), https://www.cambridge.org/core/journals/legal-studies/article/administrative-law-and-the-machines-of-government-judicial-review-of-automated-publicsector-decisionmaking/09CD6B470DE4ADCE3EE8C94B33F46FCD/core-reader (on file with the Columbia Law Review) (“Legal standards and review mechanisms which are primarily concerned with decision-making processes, which examine how decisions were made, cannot easily be applied to opaque, algorithmically-produced decisions.”). But, for a truly pathbreaking consideration of these issues, see Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1258 (2008) (“This technological due process provides new mechanisms to replace the procedural regimes that automation endangers.”).
Talia Gillis and Josh Simons, for example, contrast “[t]he focus on individual, technical explanation . . . driven by an uncritical bent towards transparency” with their argument that “[i]nstitutions should justify their choices about the design and integration of machine learning models not to individuals, but to empowered regulators or other forms of public oversight bodies.”
Gillis & Simons, supra note 9 (manuscript at 6–12); see also David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 708–09 (2017) (emphasizing the many choices involved in implementing a machine learning model and the different sorts of explanations that could be made).
Taken together, these threads suggest the view of explanatory flows in decisionmaking illustrated in Figure 1, in which decisionmakers justify their choices by explaining case-by-case outcomes to decision subjects and separately explaining design choices regarding automated decision tools to the public and oversight bodies.
Figure 1: Schematic of Explanatory Flows in a Simple Decision System
Many real-world decision systems require significantly more complex explanatory flows, however, because decisionmaking responsibility is delegated and distributed across multiple actors to handle large numbers of cases. Delegated, distributed decision systems commonly include agenda setters, who determine the goals and purposes of the systems; rulemakers tasked with translating agenda setters’ goals into decision criteria; and adjudicators, who apply those criteria to particular cases.
The terms “adjudication” and “rulemaking” are borrowed, loosely, from administrative law. See 5 U.S.C. § 551 (2012); see also, e.g., id. §§ 553–557. The general paradigm in Figure 2 also describes many private decision systems.
In democracies, the ultimate agenda setter for government decisionmaking is the public, often represented by legislatures and courts. The public also has a role in agenda setting for many private decision systems, such as those related to employment and credit.
See infra section III.B.2.
Figure 2 illustrates the explanatory flows required by a delegated, distributed decision system.
Figure 2: Schematic of Explanatory Flows in a Delegated, Distributed Decision System
Delegation and distribution of decisionmaking authority, while often necessary and effective for dealing with agenda setters’ limited time and expertise, proliferate explanatory information flows. Delegation, whether from the public or a private agenda setter, creates the potential for principal–agent problems and hence the need for accountability mechanisms.
See Kathleen M. Eisenhardt, Agency Theory: An Assessment and Review, 14 Acad. Mgmt. Rev. 57, 61 (1989) (“The agency problem arises because (a) the principal and the agent have different goals and (b) the principal cannot determine if the agent has behaved appropriately.”); see also Gillis & Simons, supra note 9 (manuscript at 6–10) (arguing for a principal–agent framework of accountability in considering government use of machine learning).
Explanation requirements, including a duty to inform principals of facts that “the principal would wish to have” or “are material to the agent’s duties,” are basic mechanisms for ensuring that agents are accountable to principals.
Restatement (Third) of Agency § 8.11 (Am. Law Inst. 2005).
Distribution of responsibility multiplies these principal–agent concerns, while adding an underappreciated layer of explanatory flows necessary for coordination among decision-system actors.
See supra Figure 2.
Automated decision tools are particularly attractive to designers of delegated, distributed decision systems because their deployment promises to improve consistency, decrease bias, and lower costs.
See, e.g., Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 Geo. L.J. 1147, 1160 (2017) [hereinafter Coglianese & Lehr, Regulating by Robot] (“Despite this interpretive limitation, machine-learning algorithms have been implemented widely in private-sector settings. Companies desire the savings in costs and efficiency gleaned from these techniques . . . .”).
For example, such tools are being used or considered for decisions involving pretrial detention,
See, e.g., Jessica M. Eaglin, Constructing Recidivism Risk, 67 Emory L.J. 59, 61 (2017).
sentencing,
See, e.g., State v. Loomis, 881 N.W.2d 749, 753 (Wis. 2016).
child welfare,
See, e.g., Dan Hurley, Can an Algorithm Tell When Kids Are in Danger?, N.Y. Times Mag. (Jan. 2, 2018), https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html (on file with the Columbia Law Review).
credit,
See, e.g., Matthew Adam Bruckner, The Promise and Perils of Algorithmic Lenders’ Use of Big Data, 93 Chi.-Kent L. Rev. 3, 12–13 (2018).
employment,
See, e.g., Pauline T. Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 860 (2017).
and tax auditing.
See, e.g., Kimberly A. Houser & Debra Sanders, The Use of Big Data Analytics by the IRS: Efficient Solutions or the End of Privacy as We Know It?, 19 Vand. J. Ent. & Tech. L. 817, 819–20 (2017).
Unfortunately, the inscrutability of many machine-learning-based decision tools creates barriers to all of the explanatory flows illustrated in Figure 2.
See infra section IV.B.
Expanding the focus of the explainability debate to include public accountability is thus only one step toward a more realistic view of the ramifications of decision tool inscrutability. Before incorporating machine-learning-based decision tools into a delegated, distributed decision system, agenda setters should have a clear-eyed view of what information is feasibly available to all of the system’s actors. This would enable them to assess whether that information, combined with other mechanisms, can provide a sufficient level of accountability
See, e.g., Bloch-Wehba, supra note 11 (manuscript at 27–28) (discussing the challenge of determining adequate public disclosure of algorithm-based government decisionmaking); Brauneis & Goodman, supra note 11, at 166–67 (“Governments should consciously generate—or demand that their vendors generate—records that will further public understanding of algorithmic processes.”); Citron, supra note 11, at 1305–06 (arguing that mandatory audit trails “would ensure that agencies uniformly provide detailed notice to individuals”); Gillis & Simons, supra note 9 (manuscript at 2) (“Accountability is achieved when an institution must justify its choices about how it developed and implemented its decision-making procedure, including the use of statistical techniques or machine learning, to an individual or institution with meaningful powers of oversight and enforcement.”); Selbst & Barocas, supra note 4, at 1138 (“Where intuition fails, the task should be to find new ways to regulate machine learning so that it remains accountable.”).
and coordination to justify the use of a particular automated decision tool in a particular context.
Incorporating inscrutable automated decision tools has ramifications for all stages of delegated, distributed decisionmaking. This Essay focuses on the implications for the creation of decision criteria, or rulemaking.
Elsewhere, I focus on the implications for adjudication. Katherine J. Strandburg, Adjudicating with Inscrutable Decision Rules, in Machine Learning and Society: Impact, Trust, Transparency (Marcello Pelillo & Teresa Scantamburlo eds., forthcoming 2020) (on file with the Columbia Law Review).
As background for the analysis, Part I briefly compares automated, machine-learning-based decision tools to more familiar forms of decisionmaking criteria. Part II uses the explanation requirements embedded in administrative law as a springboard to analyze the functions that explanation has conventionally been expected to perform with regard to rulemaking. Part III considers how incorporating inscrutable machine-learning-based decision tools changes the potential effectiveness of explanations for these functions. Part IV concludes by suggesting approaches that may alleviate these problems in some contexts.