AI, algorithms & data

Law enforcement and criminal justice authorities are increasingly using artificial intelligence (AI) and automated decision-making (ADM) systems. These systems are often used to profile people, ‘predict’ their actions, and assess the risk that they will engage in certain behaviour in the future, such as committing a crime. This can have devastating consequences for the people involved, who are profiled as criminals or considered a risk even though they haven’t actually committed a crime.

Predictions, profiles, and risk assessments that are based on data analysis, algorithms and AI can lead to real criminal justice outcomes. These can include constant surveillance, repeated stops and searches, questioning, fines and arrests. These systems can also heavily influence sentencing, prosecution and probation decisions.

What is the problem?

There are fundamental flaws in how these systems are being implemented in criminal justice:

  • Discrimination and bias: AI and automated systems in criminal justice are designed, created and operated in a way that makes them predisposed to produce biased outcomes. This can stem from their purpose, such as targeting a certain type of crime or a specific area. It can also be a result of their use of biased data, which reflects structural inequalities in society and institutional biases in criminal justice and policing. As a result, these systems can reproduce and exacerbate discrimination based on race, ethnicity, nationality, socio-economic status and other grounds. These systemic, institutional and societal biases are so ingrained that it is questionable whether any AI or ADM system would produce unbiased outcomes.
  • Infringement of the presumption of innocence: Profiling people and taking action before a crime has been committed undermines the right to be presumed innocent until proven guilty in criminal proceedings. Often, these profiles and decisions are based not just on an individual’s behaviour but on factors far beyond their control. This may include the actions of people they are in contact with or even demographic information, such as data about the neighbourhood they live in.
  • Lack of transparency and routes for redress: Any system that influences criminal justice decisions should be open to public scrutiny. However, technological barriers and deliberate, profit-driven efforts to conceal how the systems work make it difficult to understand how such decisions are made. People are often unaware that they have been subject to an automated decision. Clear routes for challenging decisions – or the systems themselves – and for obtaining redress are also severely lacking.

These issues can seriously impact people’s lives, threaten equality and infringe fundamental rights, including the right to a fair trial.

What do we want?

We want States to prioritise rights and implement regulation that ensures AI and ADM systems in criminal justice do not cause fundamental harms.

Above all, we want States to:

  • Prohibit the use of predictive, profiling and risk assessment AI and ADM systems in law enforcement and criminal justice. Only an outright ban can protect people from the fundamental harms they cause.

For other uses of AI and ADM in criminal justice, we want States to implement a set of strict legal safeguards:

  • Bias testing: Independent testing for biases must be mandatory at all stages, including the design and deployment phases. To make such bias testing possible, data collection on criminal justice must be improved, including data disaggregated by race, ethnicity, and nationality.
  • Transparency: It must be made clear how a system works, how it is operated, and how it has arrived at a decision. Everyone affected by these systems and their outputs, such as suspects and defendants, as well as the general public, must be able to understand how they work.
  • Evidence of decisions: Human decision-makers in criminal justice must provide reasons for their decisions and evidence of how decisions were influenced by AI and ADM systems.
  • Accountability: A person must be told whenever an AI or ADM system has or may have impacted a criminal justice decision related to them. There must also be clear procedures for people to challenge AI and ADM decisions, or the systems themselves, and routes for redress.

What is Fair Trials doing?

To push for strong regulatory frameworks, we continue to investigate and expose how AI and ADM systems are infringing human rights in Europe.

We have produced a report, Automating Injustice, in which we document and analyse case studies showing the harmful consequences of this technology.

European Union

As more and more countries turn to AI and ADM in criminal justice, it is crucial that the EU becomes a leading standard-setter. The current framework is insufficient to protect people against the harmful impacts of such systems.

We are calling on the EU to ensure that fundamental rights are placed at the heart of its future regulation on AI. The EU must ban the use of predictive, profiling and risk assessment AI and ADM systems in law enforcement and criminal justice. Strict safeguards must be introduced for all other uses.

The two committees in charge of the AI Act on behalf of the European Parliament have come out in favour of a prohibition of predictive policing against individuals – but not areas and locations. Read our response to their report.

Council of Europe

The Council of Europe is working on a new legal framework for the use of AI, with a view to producing a legally binding convention in future.

With 47 member states, the Council of Europe is Europe’s leading human rights organisation, and we expect it to protect people from harmful uses of AI and ADM in criminal justice and to uphold fundamental rights.