Algorithms and Judicial Decision-Making

In the age of technological advancement, many jobs can be (and are being) automated. Machines and algorithms have proven to lower the risk of error and heighten efficiency in most tasks, diminishing the margin of error that comes with being human; after all, to err is human. That is why courts are considering introducing risk-assessment algorithms and, with time, letting them take over the sentencing process, completely replacing human judges. However, human judgement cannot and should not be replaced in the criminal sentencing process, and humans should retain their place as the final arbiters at the apex of any legal system.

Why and how those algorithms were designed:

The goal of the criminal justice system is to deliver justice by punishing the guilty and protecting the innocent, and to lower crime rates by incapacitating, punishing and rehabilitating offenders while deterring potential criminals. However, human judgment has been found to create disparities and bias, and several factors drove the search for a solution. Those factors include:

  1. Bias: a defendant who belongs to a minority that a judge is prejudiced against may receive a harsher judgment (racism in court); examples include people of colour, religious minorities and the LGBTQ+ community.
  2. Sympathy: for example, women are generally treated with more lenience in court and are three times less likely to be found guilty than men; even when convicted, they are given lighter sentences than men who commit the same crime in the exact same circumstances. People who have children under 18 also tend to be treated differently and with more sympathy in court, regardless of the severity of the crime.
  3. Rise of political affiliation and polarisation: our world is increasingly polarised, with most contentious issues splitting into two opposing camps. Holding a political view is not arbitrary, but letting it shape decisions in the legal system is.

For these reasons, lawmakers started searching for a solution, and algorithms appeared, on the surface, to be a perfect one. After all, algorithms have no incentives, and they are neither politically biased nor racially prejudiced. So we can depend on them to form accurate judgements, right? Well, not exactly. But before we understand why algorithms are not a proper solution to the problem, we have to understand how they work.

Criminal Justice algorithms may be designed in one of two ways. 

The first is a “rulebook” method, in which the algorithm acts as a calculator, adding and subtracting points, with the final sum determining the sentence.

This looks like ‘Add five points if the crime was committed at night’, or ‘Subtract 15 points if the person was acting in self-defence’.
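
To make the mechanism concrete, here is a minimal sketch of such a point-based rulebook in Python. The rules, factor names and point values are invented purely for illustration and are not taken from any real sentencing guideline.

```python
# A minimal sketch of the "rulebook" idea, with made-up rules and point values.
# Real sentencing guidelines are far more detailed; this only shows the mechanism.

RULES = {
    "committed_at_night": +5,       # hypothetical aggravating factor
    "used_a_weapon": +10,           # hypothetical aggravating factor
    "acted_in_self_defence": -15,   # hypothetical mitigating factor
    "first_offence": -5,            # hypothetical mitigating factor
}

def sentence_points(case_facts: set[str]) -> int:
    """Add or subtract points for every fact the rulebook knows about."""
    return sum(points for fact, points in RULES.items() if fact in case_facts)

# Example: a first-time offender who acted in self-defence at night.
print(sentence_points({"committed_at_night", "acted_in_self_defence", "first_offence"}))  # -15
```

In this scheme the final sum determines the sentence, so the whole procedure is deterministic arithmetic.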

The second is the risk-assessment algorithm, a statistically based model designed to assess the risk that a given defendant will commit another crime after release. It works on patterns: a form of artificial intelligence that is given samples of previous cases and then “learns” to form its own judgement.

An example of this is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm designed by Northpointe, Inc. to predict recidivism. While in jail, defendants answer a COMPAS questionnaire. Their answers are then fed into the COMPAS software, which generates several scores, including predictions of “Risk of Recidivism” and “Risk of Violent Recidivism”.
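
To illustrate the train-then-score workflow of such a tool, here is a toy sketch in Python using scikit-learn. The features, data and model below are entirely synthetic and invented for illustration; COMPAS's actual inputs and model are proprietary and far more elaborate.

```python
# Toy sketch of a pattern-based risk-assessment model on fabricated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" cases: [number of prior offences, age at first arrest]
X_train = rng.integers(low=[0, 14], high=[10, 40], size=(500, 2))
# Synthetic labels: 1 = re-offended after release, 0 = did not (a fabricated pattern)
y_train = (X_train[:, 0] > 4).astype(int)

# "Learn" a pattern from past cases.
model = LogisticRegression().fit(X_train, y_train)

# Score a new defendant: 6 prior offences, first arrested at 17.
new_defendant = np.array([[6, 17]])
risk = model.predict_proba(new_defendant)[0, 1]
print(f"Predicted risk of recidivism: {risk:.2f}")
```

The point is not the particular model but the workflow: the system is shown past cases with their outcomes, fits a pattern, and then emits a risk score for a new defendant.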

Why algorithms can’t pass valid sentences:

The rulebook algorithm:

The problem with the first type of algorithm is obvious: it is simply not inclusive enough. No matter how many variables you program into a machine, there will always be a variable unique to a given case; no fixed rulebook can address each and every individual case. And since the facts and circumstances of each case are as unique as the case itself, a simple adding-and-subtracting calculator cannot do the case justice, as the sketch below shows.
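
Extending the earlier hypothetical rulebook (redefined here so the snippet stands alone), any circumstance the rule table does not list simply has no effect on the score.

```python
# A fact the rulebook was never programmed with is silently ignored,
# however relevant it may be to the individual case (all names are invented).
RULES = {"committed_at_night": +5, "acted_in_self_defence": -15}

def sentence_points(case_facts: set[str]) -> int:
    return sum(points for fact, points in RULES.items() if fact in case_facts)

print(sentence_points({"committed_at_night"}))                                   # 5
print(sentence_points({"committed_at_night", "sole_carer_of_disabled_parent"}))  # still 5
```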

The pattern-based algorithm:

The problem with pattern-based algorithms is that the data on which they are trained may already be biased. Research has shown a real threat of unintentional discrimination: even algorithms designed in a fair and lawful way can amplify discrimination and racism. Even if a variable such as race is removed from the sentencing algorithm, the system can still cluster a seemingly random group of people based on other attributes, such as place of residence or IP address, which point to the same minority groups. If an algorithm found, for example, that low income was correlated with high recidivism, it would leave you none the wiser as to whether low income actually caused crime. But this is precisely the risk these assessment tools carry: they turn correlative insights into causal scoring mechanisms.
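
A small synthetic experiment makes the proxy problem concrete. In the sketch below, the protected attribute is never given to the model, yet a made-up "postcode" feature that correlates with it lets the model reproduce the disparity baked into the biased historical labels; every feature name and number is invented for illustration.

```python
# Toy illustration of proxy discrimination on fabricated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, size=n)                    # protected attribute (never a model input)
postcode = group * 100 + rng.integers(0, 5, size=n)   # strongly correlated proxy feature
priors = rng.integers(0, 6, size=n)                   # number of prior offences

# Biased "historical" labels: group 1 was re-convicted more often in the past data.
y = ((priors + 2 * group + rng.normal(0, 1, size=n)) > 3).astype(int)

# Race/group is deliberately excluded from the inputs.
X = np.column_stack([postcode, priors])
model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

print("Mean risk score, group 0:", round(scores[group == 0].mean(), 2))
print("Mean risk score, group 1:", round(scores[group == 1].mean(), 2))
```

Even though the protected attribute was dropped, the model's scores still differ sharply between the two groups, because the proxy feature carries the same information.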

Algorithms can malfunction:

We would be increasing our dependence on machines that, in the end, have the capacity to malfunction or behave in unpredictable ways.

As with all technology, this can lead to unintended consequences that may go far beyond anything the designers ever envisioned.

 “Take the 2010 “Flash Crash” of the Dow Jones Industrial Average Index. The action of algorithms helped create the index’s single biggest decline in its history, wiping nearly 9% off its value in minutes. The algorithms that amplified the problem didn’t make a mistake; they followed their own logic in a way that created a downward spiral for the market.”

Taken from https://theconversation.com/algorithms-have-already-taken-over-human-decision-making-111436

Now imagine such a situation in court, this time with an even greater dependence on automation in deciding a person’s fate. It is not the stock market that is at stake, but the possibility of a mistaken sentence passed on a human life.

Even if algorithms passed valid sentences, we would be removing accountability from the court:

Let us, for the sake of this article, assume that we found a way for algorithms to generally form a logical verdict for a given case. Giving algorithms decision-making power over human cases raises a fundamental issue of accountability: who is held responsible for mistakes made in sentencing?

What does this do? 

You no longer know whom to hold accountable for mistakes in sentencing and conviction. How can you hold an algorithm accountable for its mistakes? It is like holding a car accountable for a crash: you don’t blame the car; you blame the driver. But whom do you blame if the driver was never able to steer the wheel? How do you blame a judge who had no say in the sentencing?

Why is this important?

  • It allows a programme to be used as a scapegoat for bias and systemic discrimination.
  • It makes revisiting a case pointless, as the programme would give the same output again and again.

Alternative solutions:

  • Allowing people to file lawsuits against judges.
  • Selecting a diverse panel of judges, so that different contextualisations and perspectives can be brought to complex cases.

At the end of the day, even with the algorithmic solutions in place, systemic injustices remain and bias may even be amplified. Those wrongly judged through a mistake of calculation have no way to get the justice they deserve; and that is why, in practical terms, humans should ultimately make what they believe to be the morally superior decision.
