News

Fair Trials calls for ban on the use of AI to 'predict' criminal behaviour

Article by Fair Trials
  • Artificial intelligence systems are reinforcing discrimination and undermining fundamental rights, including the right to the presumption of innocence.
  • Systems already in use across Europe are using biased data and racist stereotypes to decide whether people are at risk of future criminal behaviour.

Fair Trials has called on the EU to ban the use of artificial intelligence (AI) and automated decision-making (ADM) systems to predict, profile or assess people's risk or likelihood of criminal behaviour. Our new report, Automating Injustice, demonstrates how the use of such systems by law enforcement, judicial and other criminal justice authorities across Europe is reinforcing discrimination and undermining fundamental human rights, including the right to a fair trial and the presumption of innocence.

Griff Ferris, Legal and Policy Officer at Fair Trials, said:

“The use of AI and automated systems to predict people's future behaviour or alleged criminality is not just subject matter for dystopian futuristic films but is currently an existing operational strategy of police and criminal justice authorities across Europe. These systems are being used to create predictions, profiles and risk assessments that affect people's lives in a very real way. Among other serious and severe outcomes, they can lead to people, sometimes even children, being placed under surveillance, stopped and searched, questioned, and arrested even though no actual crime has been committed.”

“Putting safeguards or oversight in place is not sufficient. The EU must ban the use of AI and automated systems that attempt to profile and predict future criminal behaviour. Without an outright ban, the discrimination that is inherent in criminal justice systems will be reinforced and the fundamental rights of millions of Europeans will be undermined.”

Discrimination

The law enforcement and criminal justice data used to create, train and operate these AI and ADM systems reflects systemic, institutional and societal biases that result in Black people, Roma, and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned across Europe. These biases are so fundamental and ingrained that it is questionable whether any such system could avoid producing biased and discriminatory outcomes.

Case studies

The Automating Injustice report includes evidence and analysis of real-life examples of how these systems are impacting people and undermining fundamental rights. They include:

  • ProKid, a risk assessment system used by Dutch police since 2011, attempts to predict whether children as young as 12 are at risk of criminality, using a range of factors that even include whether they have been a victim of or witness to a crime. Children are also judged on the basis of criminal behaviour by family, friends or other associates. A ProKid profile is the first step in a pipeline of further automated risk assessments, which can lead to serious criminal justice penalties and other related outcomes. An evaluation of ProKid found that a third of children had their risk levels misassigned.
  • Another Dutch system, the Top600, attempts to predict and profile people most at risk of committing violence in Amsterdam, in order to ‘punish quickly and severely’. Young men profiled by the Top600 are constantly followed and harassed by police, subject to surveillance, fines, arrests, and police raids on their homes. The younger brothers and sisters of those profiled are also visited by police. One-third of those profiled by the Top600 are Dutch-Moroccan men.
  • The National Data Analytics Solution, a risk assessment and crime prediction tool used by West Midlands Police in the UK, uses stop and search data in its predictive models, even though stop and search is a policing strategy consistently used in a discriminatory manner in the United Kingdom (as well as across Europe). Like many other similar systems, it also uses uncorroborated police intelligence reports as an indicator of criminality. It intends to use data from health services, welfare and benefits authorities, educational authorities, and local government to assist in its predictions.
  • Delia, a crime analysis and prediction system used by police in Italy, uses ethnicity data, as well as uncorroborated police intelligence reports, as an indicator of criminality to create predictions and profiles of alleged offenders and crime locations.
  • HART, another risk assessment system used in England and Wales, decides whether people should be prosecuted or allowed onto a rehabilitation programme. The system uses financial information and discriminatory commercial marketing profiles and area codes, which can be a proxy for race or socio-economic status. HART's creators have even admitted the potential for its outcomes to disproportionately impact deprived communities and to perpetuate or amplify existing patterns of offending.

Read the report here.