European Parliament committees support ban on predictive policing and criminal justice AI
The two committees in charge of the AI Act on behalf of the European Parliament have come out in favour of a prohibition of predictive policing against individuals – but not areas and locations.
- Predictive policing and criminal justice AI systems are known to reinforce discrimination and undermine fundamental rights, often resulting in serious harm.
- The European Parliament's LIBE and IMCO committees support prohibiting their use, finding that such systems violate “human dignity” and hold a “particular risk of discrimination”.
- Fair Trials has pushed for this prohibition in the Act, with widespread support, and urges all MEPs to support this ban in future discussions and votes in the European Parliament.
The two European Parliament committees in charge of the Artificial Intelligence Act have recommended that the AI Act include a ban on predictive policing and criminal justice AI systems.
The IMCO-LIBE report on the AI Act, due on 11 April, has now been leaked, with a copy seen by Fair Trials. The draft report includes a ban on the use of predictive systems which are used to make “individualised risk assessments” to “assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence”.
The committees’ report also says that “Predictive policing violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination” and therefore is included “among the prohibited practices.”
Fair Trials welcomes the inclusion of the prohibition in the committees’ recent report, following our call for such a ban and widespread support from rights, racial equality, and lawyers’ groups across Europe. Fair Trials also congratulates the committees and rapporteurs on their commitment to human rights.
However, the prohibition does not include the use of predictive policing and justice systems on locations or areas, where these systems have also been proven to exacerbate and reinforce discrimination.
Griff Ferris, Legal and Policy Officer at Fair Trials, responded to the report:
“The European Parliament committees in charge of the AI Act have taken a huge step towards protecting people and their rights by proposing a ban on the use of predictive policing and justice AI systems against individuals in the AI Act.
“Time and time again, we’ve seen how the use of these systems exacerbates and reinforces discriminatory police and criminal justice action, feeds systemic inequality in society, and ultimately destroys people’s lives. However, the ban must also extend to include predictive policing systems that target areas or locations, which have the same effect.
“We now call on all MEPs to stay true to their mandate to protect people’s rights by supporting and voting in favour of the ban of all uses of predictive AI in policing and criminal justice.”
Predictive AI systems used in policing and criminal justice have been proven to reproduce and reinforce existing discrimination, which results in Black people, Roma, and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned across Europe. Fair Trials has documented numerous cases demonstrating this discrimination, as well as how attempts to ‘predict’ criminal behaviour undermine fundamental rights, including the right to a fair trial, and the presumption of innocence.
Amendments to the AI Act published by members of the JURI committee (310 – 527, 528 – 746) also included prohibitions on AI used by law enforcement for profiling and predictive purposes and risk assessments. Important amendments were also introduced supporting the need for increased transparency, oversight, accountability, and effective right to redress.
Fair Trials, European Digital Rights (EDRi) and 43 other civil society organisations recently released a collective call on the EU to support a ban on predictive policing. As the IMCO-LIBE report will be discussed in the coming weeks and months, Fair Trials continues to urge MEPs to put people and their fundamental rights first.
Norman L. Reimer, Global CEO at Fair Trials, stated:
“We cannot allow automated technology that erodes justice and perpetuates pre-existing systemic flaws. These enormously important proposals to prevent the abusive impact of AI on the most vulnerable in society present a unique opportunity for the EU to set an example on this pressing issue in criminal justice globally.”
The full text of the amendment to the AI Act proposed in the IMCO-LIBE report is as follows:
“Proposal for a regulation
“Article 5 – paragraph 1 – point c a (new)
“(c a) the placing on the market, putting into service or use of an AI system for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups of natural persons;”
“Predictive policing violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination. It is therefore inserted among the prohibited practices.”
Update: The IMCO-LIBE report has now been published and is available to download.