November, 2016

Artificial Intelligence “Judges” Predict Case Outcomes

British computer scientists at University College London have developed artificial intelligence software potentially capable, with a 79% accuracy rate, of weighing legal evidence and determining questions of right and wrong.  The scientists used an algorithm to review data from 584 torture and degrading treatment cases in the European Court of Human Rights.  The Court was established in 1959 by the European Convention on Human Rights and rules “on the applications of individuals or sovereign states alleging violations of the civil and political rights” set forth in the Convention.  Most applications filed with the Court are filed by individuals.

Judgments in the court are required to follow a specified format, which the scientists found made them “particularly suitable for a text-based analysis.”  The judgments must provide: “an account of the procedure followed on the national level, the facts of the case, a summary of the submissions of the parties, which comprise their main legal arguments, the reasons in point of law articulated by the Court and the operative provisions.”  Only cases decided on the merits, and not procedurally dismissed in early stages of the application, could be examined; dismissed applications are not reported, so text-based analysis of those applications is not possible.

The artificial intelligence “judge,” using textual content of the case decisions, reached the same result as the Court of Human Rights in 79% of the cases examined.  The lead researcher in the study, Dr. Nikolaos Aletras, was quoted in a recent article as saying, “We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes.”  The authors found that “the formal facts of a case are the most important predictive factor” and stated that this conclusion was consistent with “the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of facts.”
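The underlying technique, text-based outcome prediction, can be illustrated with a minimal sketch: represent a judgment’s facts section as word counts and assign the outcome whose pooled training text is most similar.  The toy “fact” snippets and labels below are invented for illustration; the actual study used richer n-gram and topic features with a trained machine-learning classifier.

```python
from collections import Counter
import math

def bag_of_words(text):
    """Lowercase the text and count word occurrences (a toy feature extractor)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(text, labeled_examples):
    """Assign the label whose pooled example text is most similar to `text`."""
    centroids = {}
    for label, example in labeled_examples:
        centroids.setdefault(label, Counter()).update(bag_of_words(example))
    vec = bag_of_words(text)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

# Invented toy snippets standing in for the facts sections of judgments.
training = [
    ("violation", "applicant detained without trial ill treatment in custody"),
    ("violation", "prolonged detention degrading conditions no remedy"),
    ("no_violation", "complaint examined domestic courts adequate remedy provided"),
    ("no_violation", "conditions of detention adequate medical care provided"),
]

print(predict("applicant held in degrading conditions in custody", training))
# → violation
```

Even this crude word-overlap measure captures the study’s core observation: the vocabulary of a case’s fact description carries a strong signal about its outcome.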

The authors noted that there are “two opposing ways of making sense of judicial decision-making: legal formalism and legal realism.”  Formalists were described as having provided “a legal model of judicial decision-making, claiming that the law is rationally determinate: judges either decide cases deductively, by subsuming facts under formal legal rules or use more complex legal reasoning than deduction whenever legal rules are insufficient to warrant a particular outcome.”  Realists were described as having “criticized formalist models, insisting that judges primarily decide appellate cases by responding to the stimulus of the facts of the case, rather than on the basis of legal rules or doctrine, which are in many occasions rationally indeterminate.”  The authors found that a “rather robust correlation between the outcomes of cases and the text corresponding to fact patterns contained in the relevant subsections (in the Court’s judgments) coheres well with other empirical work on judicial decision-making in hard cases and backs basic legal realist intuitions.”

The authors described theirs as the first study to predict case outcomes using textual information, in contrast to earlier studies, which focused on non-textual information, such as “the nature and the gravity of the crime or the preferred policy position of each judge.”

Sources:  Chris Johnston, “Artificial intelligence ‘judge’ developed by UCL computer scientists,” theguardian.com, October 23, 2016:  https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.  Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro, and Vasileios Lampos, “Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective,” peerj.com, October 24, 2016:  https://peerj.com/articles/cs-93/

by Neil Leithauser
Associate Editor