An Estonian team is designing an artificially intelligent (AI) agent to adjudicate claims of €7,000 or less, with the aim of clearing the case backlog.
The pilot version will focus on contract disputes: An algorithm will analyze uploaded documents to reach an initial decision, which can then be appealed to a human judge.
This is but one of many examples of how AI is increasingly being incorporated into adjudicatory or adjudicatory-adjacent processes, ranging from the relatively minor to the most significant determinations.
For example, DoNotPay, an AI-based chatbot originally designed to assist with parking ticket appeals, now offers a panoply of legal services, including helping individuals report suspected discrimination, request maternity leave, and seek compensation for transit delays.
Companies rely on automated processes to resolve customer complaints, cities depend on automated systems to generate citations for minor traffic violations, police departments engage data mining systems to predict and investigate crime, and domestic judges employ algorithms throughout the criminal justice process.
Meanwhile, militaries are incorporating increasingly autonomous systems in their targeting and detention decisionmaking structures.
Although AI is already of use to litigants, to legal practitioners, and in predicting legal outcomes, we must be cautious and deliberate when incorporating AI into the common law judicial process. As will be discussed in Part I, human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing often required of human judges. Nor will “cyborg justice”—hybrid human–AI judicial systems that would attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea.
Part II notes the benefits of such systems and outlines how teaming may create new overtrust, undertrust, and interface design problems, as well as second-order, structural side effects. Part III explores one such side effect of hybrid human–AI judicial systems, which I term “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. By augmenting the common law’s extant conservative bent, hybrid human–AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.
Cyborg justice systems are proliferating, but their structure has not yet stabilized. We now have a bounded opportunity to design systems that realize the benefits and mitigate the issues associated with incorporating AI into the common law adjudicatory process.