News

EU Parliament votes for landmark ban on “discriminatory and unjust” predictive policing and criminal prediction systems

Article by Fair Trials
  • Today, MEPs in the European Parliament voted to approve the text of the flagship AI Act, including a ban on the use of predictive policing and criminal prediction systems. The ban is the first of its kind in Europe, and the first major ban on such systems worldwide.
  • Fair Trials has been calling for a ban on these systems since 2021, on the basis they are discriminatory, harmful, and infringe fundamental rights.
  • MEPs also voted to ban remote biometric identification systems, such as face recognition surveillance, and ensure transparency and accountability measures for other AI systems.

Today, European Parliament MEPs voted to finalise the text of the EU AI Act, a flagship legislative proposal to regulate AI based on its potential to cause harm.

MEPs in the two parliamentary committees in charge of the AI Act voted on a number of amendments to the text after months of negotiations. Among the amendments voted on was a ban on predictive policing and criminal prediction systems used by law enforcement and criminal justice authorities in the EU.

MEPs voted for the ban, which will prohibit the use of AI systems for “making risk assessments of natural persons or groups” in order to “assess the risk of a natural person for offending or reoffending” as well as “for predicting the occurrence or reoccurrence of an actual or potential criminal offence”. The ban is the first of its kind in Europe and the first major ban on such systems worldwide, although some cities and municipalities in the US have implemented local bans.

The use of these systems by law enforcement and criminal justice authorities has been proven to reproduce and reinforce existing discrimination, and already results in Black people, Roma and other minoritised ethnic people being disproportionately stopped and searched, arrested, detained and imprisoned across Europe.

Fair Trials has documented numerous predictive and profiling systems used in policing and criminal justice which can result, and have resulted, in such discrimination, as well as how attempts to ‘predict’ criminal behaviour undermine fundamental rights, including the right to a fair trial and the presumption of innocence.

MEPs also voted to prohibit ‘remote biometric identification’, such as facial recognition systems, in publicly accessible spaces, as well as emotion recognition systems and the mass scraping of biometric databases. They agreed other safeguards, including improved public transparency of AI systems through registration in a public database, the right to an explanation of decisions made by AI systems, and effective remedies for people challenging those decisions.

The AI Act will be subject to a final vote of the whole European Parliament at a plenary session expected around 12 June.

Griff Ferris, Senior Legal and Policy Officer at Fair Trials, said:

“This is a landmark result. This ban will protect people from incredibly harmful, unjust and discriminatory predictive policing and criminal prediction systems.

“We’ve seen how the use of these systems repeatedly criminalises people, even whole communities, labelling them as criminals based on their backgrounds. These systems automate injustice, exacerbating and reinforcing racism and discrimination in policing and the criminal justice system, and feeding systemic inequality in society.

“The EU Parliament has taken an important step in voting for a ban on these systems, and we urge them to finish the job at the final vote in June.”

The road to a ban

Fair Trials first called for a ban on predictive, profiling and risk assessment systems in policing and criminal justice in 2021, and has since built a coalition of more than 50 rights, technology, anti-racism and other organisations across Europe, including European Digital Rights (EDRi), Access Now, Human Rights Watch, Amnesty Tech, the European Network Against Racism, the Council of Bars and Law Societies of Europe and the European Criminal Bar Association, among many others.

Following Fair Trials’ campaigning, many MEPs have also publicly expressed support for a ban. One co-rapporteur of the AI Act, Dragos Tudorache, has said:

“Predictive policing goes against the presumption of innocence… We do not want it in Europe.”

The other co-rapporteur of the Act, Brando Benifei MEP, said in a LIBE Committee debate:

“Predictive techniques to fight crime also have a huge risk of discrimination, as well as lack of evidence about how accurate they actually are. We’re undermining the basis of our democracy, the presumption of innocence.”

Birgit Sippel, Member of the European Parliament (S&D, DE) and member of the Civil Liberties, Justice and Home Affairs Committee, said:

“It is crucial to acknowledge that elements of structural injustice are intensified by AI systems and we must therefore ban the use of predictive systems in law enforcement and criminal justice once and for all. Any AI or automated systems that are deployed by law enforcement and criminal justice authorities to make behavioural predictions on individuals or groups to identify areas and people likely to commit a crime based on historical data, past behaviour or an affiliation to a particular group will inevitably perpetuate and amplify existing discrimination. This will particularly impact people belonging to certain ethnicities or communities due to bias in AI systems.”

Discrimination, surveillance and infringement of fundamental rights

The need for a prohibition is based on detailed research by Fair Trials into the use and impact of these systems across Europe, with case studies and examples set out in the report Automating Injustice: the use of artificial intelligence and automated decision-making systems in criminal justice in Europe.

Fair Trials’ report details how these systems work, including the data they use, and demonstrates that the law enforcement and criminal justice data used to create, train and operate AI systems often reflects historical, systemic, institutional and societal discrimination, which results in racialised people, communities and geographic areas being over-policed and disproportionately surveilled, questioned, detained and imprisoned.

Fair Trials created an ‘example’ predictive tool to allow people to see how these systems use information to profile them and predict criminality.
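
To make concrete how such a tool might work, below is a minimal, hypothetical sketch in Python. It is not Fair Trials’ example tool: the feature names, weights and scores are invented purely for illustration. It shows how a score built from proxies for past police attention, rather than for offending itself, reproduces prior enforcement patterns.

    # Hypothetical sketch of a simplistic "criminal risk" scoring tool.
    # Invented features and weights, for illustration only.
    # In real systems, weights like these are learned from historical
    # policing data; if that data reflects over-policing of certain
    # neighbourhoods or groups, the model inherits that bias.
    RISK_WEIGHTS = {
        "prior_police_contacts": 0.4,  # measures police attention, not offending
        "lives_in_flagged_area": 0.3,  # geographic proxy for over-policed areas
        "associates_flagged": 0.2,     # "guilt by association" network feature
        "school_exclusions": 0.1,      # socioeconomic proxy
    }

    def risk_score(profile: dict) -> float:
        """Return a 0-to-1 'risk' score by weighting profile features."""
        return sum(RISK_WEIGHTS[k] * float(profile.get(k, 0)) for k in RISK_WEIGHTS)

    # Two people with identical behaviour but different neighbourhoods:
    person_a = {"prior_police_contacts": 1, "lives_in_flagged_area": 1}
    person_b = {"prior_police_contacts": 1, "lives_in_flagged_area": 0}
    print(round(risk_score(person_a), 2))  # 0.7, flagged as "high risk"
    print(round(risk_score(person_b), 2))  # 0.4

The person in the flagged neighbourhood scores higher purely because of where they live; nothing in the inputs measures actual criminal behaviour. This is the dynamic, bias in the data feeding bias in the predictions, that the research and the ban are concerned with.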

Prohibition text and other amendments

The full text of the prohibition is below, within Article 5 of the Act, which comprises a list of ‘prohibited practices’:

“(da) the placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons;”

Fair Trials also supported several other amendments to the Act, which it was pleased to see pass. These amendments will:

  • require public transparency of ‘high-risk’ AI systems on a public register;
  • give people the right to an explanation of an AI system decision;
  • give people meaningful ability to challenge AI systems and obtain effective remedies;
  • prohibit remote biometric identification (such as face recognition surveillance);
  • ensure that predictive AI systems in migration are considered high risk;
  • ensure that ‘high-risk’ systems have accessibility requirements for people with disabilities; and
  • widen the definition of AI to cover the full range of AI systems which have been proven to impact fundamental rights.