- The two MEPs in charge of the AI Act on behalf of the European Parliament have come out in favour of a prohibition of predictive policing.
- Predictive AI in policing and criminal justice is known to reinforce discrimination and undermine fundamental rights, often resulting in serious harm.
- Fair Trials urges all MEPs to support this ban in future discussions and votes in the European Parliament.
The two MEPs in charge of the Artificial Intelligence Act on behalf of the European Parliament have recommended that the AI Act include a ban on predictive policing and justice systems.
Dragos Tudorache, co-rapporteur on behalf of the Civil Liberties, Justice and Home Affairs (LIBE) committee, and Brando Benifei, co-rapporteur on behalf of the Internal Market and Consumer Protection (IMCO) committee, have confirmed their support for a ban on predictive policing and justice AI systems in the EU’s Artificial Intelligence Act.
Fair Trials is pleased to see the inclusion of the prohibition in the Committees’ recent report, following our call for such a ban and widespread support from rights, racial equality and lawyers’ groups across Europe. Fair Trials also congratulates the rapporteurs on their commitment to human rights.
Griff Ferris, Legal and Policy Officer at Fair Trials, stated:
“The MEPs in charge of the AI Act have taken a huge step towards protecting people and their rights by calling for a ban on predictive AI in policing and criminal justice in the AI Act.
“Time and time again, we’ve seen how the use of these systems exacerbates and reinforces discriminatory police and criminal justice action, feeds systemic inequality in society, and ultimately destroys people’s lives. We now call on all MEPs to stay true to their mandate to protect people’s rights by supporting and voting in favour of the ban.”
Predictive AI systems used in policing and criminal justice have been proven to reproduce and reinforce existing discrimination, resulting in Black people, Roma and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned across Europe. Fair Trials has documented numerous cases demonstrating that attempts to ‘predict’ criminal behaviour undermine fundamental rights, including the right to a fair trial and the presumption of innocence.
Amendments to the AI Act published by members of the JURI committee (310 – 527, 528 – 746) also included prohibitions on AI used by law enforcement for profiling, prediction and risk assessment. Important amendments were also introduced supporting the need for increased transparency, oversight, accountability and an effective right to redress.
Fair Trials, European Digital Rights (EDRi) and 43 other civil society organisations recently released a collective call on the EU to support a ban on predictive policing. As the IMCO-LIBE report will be discussed in the coming weeks and months, Fair Trials continues to urge MEPs to put fundamental rights first.
Norman L. Reimer, Global CEO at Fair Trials, stated:
“We cannot allow automated technology that erodes justice and perpetuates pre-existing systemic flaws to supplant individualised human decision-making. These enormously important proposals to prevent the abusive impact of AI on the most vulnerable in society present a unique opportunity for the EU to set an example on this pressing issue in criminal justice globally.”