Report: Automating Injustice

Artificial intelligence (AI) and automated decision-making (ADM) systems are increasingly used by European law enforcement and criminal justice authorities to profile people, predict their supposed future behaviour, and assess their alleged ‘risk’ of criminality or re-offending.

These predictions, profiles, and risk assessments can influence, inform, or result in policing and criminal justice outcomes, including constant surveillance, stop and search, fines, questioning, arrest, detention, prosecution, sentencing, and probation. They can also lead to punishments outside the criminal justice system, such as the denial of welfare or other essential services, and even the removal of children from their families. In Automating Injustice, we use case studies to analyse the use of these systems and their harmful impact. Based on these findings, we call for a prohibition on the use of AI and ADM by law enforcement, judicial, and other criminal justice authorities to predict, profile, or assess people’s risk or likelihood of ‘criminal’ behaviour, and for stringent legal safeguards on the use of all other forms of AI and ADM.

An earlier version of this report contained an error in the map on page 5 in relation to Ukraine. The map has now been updated.