
MEPs approve report opposing predictive AI and calling for ban on judicial AI and biometric surveillance

  • MEPs have approved the European Parliament’s Civil Liberties, Justice and Home Affairs (LIBE) Committee report on AI in criminal matters, which calls for bans on certain uses of AI in law enforcement and criminal justice.
  • The LIBE report recognises that AI systems used in criminal justice can cause and exacerbate discrimination, opposes the use of AI to ‘predict’ future criminal behaviour, calls for a ban on the use of AI to make judicial decisions and a ban on biometric mass surveillance.
  • Fair Trials congratulates MEPs on this step towards preventing discrimination and protecting fundamental rights in the EU, including the right to a fair trial and the right to privacy. In September, Fair Trials called for a ban on the use of AI and automated decision-making systems by law enforcement and judicial and criminal justice authorities to predict, profile or assess people’s risk or likelihood of ‘criminal’ behaviour.

Members of the European Parliament (MEPs) have taken a strong stance opposing discrimination and protecting fundamental rights in the technological age by voting to approve LIBE’s landmark AI in criminal matters report.

A strong majority of MEPs voted for the report in its original form (377 to 248), rejecting several harmful amendments that would have weakened the opposition to predictive AI and the call for a ban on biometric surveillance. It marks a clear statement of intent by the European Parliament for EU regulation of AI that protects people against automated injustice and discrimination, prioritises fundamental rights, and prohibits inherently harmful uses, particularly the use of AI to ‘predict’ future criminal behaviour and profile people.

Griff Ferris, Legal and Policy Officer at Fair Trials, said:

“This is a landmark result for fundamental rights and non-discrimination in the technological age. MEPs have made clear that police and criminal justice authorities in Europe must no longer be allowed to use AI systems which automate injustice, undermine fundamental rights and result in discriminatory outcomes.

“This is a strong statement of intent that the European Parliament will protect Europeans from these systems, and a first step towards a ban on some of the most harmful uses, including the use of predictive and profiling AI, and biometric mass surveillance.

“We are very pleased that a significant majority of MEPs rejected the amendments to the LIBE report, taking a stand against AI and automated decision-making systems which reproduce and reinforce racism and discrimination, undermine the right to a fair trial and the presumption of innocence, and the right to privacy.”

Fair Trials recently documented how the use of AI and automated decision-making systems is automating injustice and discrimination, undermining the presumption of innocence, and leading to severe outcomes for people in the EU. Predictive, profiling and ‘risk’ assessment systems are leading to people, including children, being placed under surveillance, stopped and searched, questioned, and even arrested based on decisions made by algorithms, with deeply damaging life consequences.

LIBE Committee report – key statements and calls

The LIBE Committee report supported by MEPs includes several strong statements and calls for prohibitions on the use of AI in criminal matters. The report:

  • “calls for a ban on the use of AI and related technologies for proposing judicial decisions”;
  • “opposes (…) the use of AI by law enforcement authorities to make behavioural predictions on individuals or groups on the basis of historical data and past behaviour, group membership, location, or any other such characteristics, thereby attempting to identify people likely to commit a crime”;
  • “calls on the Commission, therefore, to implement (…) a ban on any processing of biometric data, including facial images, for law enforcement purposes that leads to mass surveillance in publicly accessible spaces”;
  • “underlines the fact that many algorithmically driven identification technologies currently in use disproportionately misidentify and misclassify and therefore cause harm to racialised people, individuals belonging to certain ethnic communities, LGBTI people, children and the elderly, as well as women”;
  • “recalls that the inclusion in AI training data sets of instances of racism by police forces in fulfilling their duties will inevitably lead to racist bias in AI-generated findings, scores, and recommendations”;  
  • “calls for algorithmic explainability, transparency, traceability and verification as a necessary part of oversight, in order to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement comply with fundamental rights, and are trusted by citizens, as well as in order to ensure that results generated by AI algorithms can be rendered intelligible to users and to those subject to these systems”; and
  • “considers it vital that the application of AI systems in the context of criminal proceedings should ensure respect for the fundamental principles of criminal proceedings, including the right to a fair trial, the principle of the presumption of innocence and the right to an effective remedy, as well as ensuring monitoring and independent control of automated decision-making systems."

Support from MEPs

During Monday’s debate on the report in the plenary, the rapporteur of the LIBE report, Petar Vitanov MEP, stated that “we need to draw clear red lines for AI-based systems that violate fundamental rights.” Drawing on the findings of Fair Trials’ Automating Injustice report, he added:

"Predictive profiling and risk assessment AI and automated decision-making systems target individuals and profile them as criminal, resulting in serious criminal justice and civic outcomes and punishments before they have carried out the alleged actions for which they are being profiled. In essence, the very purpose of the systems undermines the fundamental right to be presumed innocent."

Reaffirming the numerous case studies documented by Fair Trials, MEP Kim van Sparrentak illustrated the distressing reality for people already impacted by predictive and profiling systems in Europe:

“Imagine waking up one day with the police barging into your house after AI has flagged you as a suspect. Then it's up to you to prove you're innocent. It is you versus the computer. And the myth that a calculation is more ethical than a human is dangerous, especially where decisions impact people's lives.”

Other crucial interventions from Monday’s debate include:

  • Karen Melchior MEP: “It's not all algorithms or artificial intelligence that are problematic, but predictive profiling and risk assessment, artificial intelligence and automated decision-making systems: they are weapons of math destruction and they are as dangerous for our democracy as nuclear bombs are for living creatures and life. They will destroy the fundamental rights of each citizen to be equal before the law and in the eye of our authorities.”
  • Brando Benifei MEP: “Predictive techniques to fight crime also have a huge risk of discrimination, as well as lack of evidence about how accurate they actually are. We're undermining the basis of our democracy, the presumption of innocence. No dataset will be enough to ensure that this type of practice from AI systems should be adopted because there won't be the necessary constitutional and fundamental rights guarantees.”
  • Miroslav Radačovský MEP: “I've been a judge for many years and I think we have to be very careful if we use AI in legal decisions, in that context the human factor should decide on guilt or innocence, you can only have humans taking decisions in guilt and innocence or sentences, you can’t have algorithms used… this is absolutely essential.”

If you are a journalist interested in this story, please call the media team on +44 (0) 7749 785 932 or email [email protected]
