- EU agrees to ban some ‘individual’ predictive policing and crime prediction systems in the AI Act.
- Geographic or area-targeting ‘predictive’ systems are excluded from the ban, and caveats or exemptions in the text are expected to further reduce its scope.
- Fair Trials first called for a ban on ‘predictive’ and profiling systems in policing and criminal justice in 2021 and has led the campaign for it alongside European Digital Rights (EDRi) and 50+ civil society organisations.
After months of negotiations, the European Union has concluded the opaque and convoluted ‘trilogue’ negotiation process on the AI Act. The Act will include a partial version of the ban on ‘predictive’ policing and crime ‘prediction’ systems that Fair Trials has been campaigning for since 2021.
The actual text of the finalised Act remains unpublished, so a full analysis, including the precise details of the ban on predictive policing and crime prediction systems, will have to wait. The final text may not be published for several months.
The AI Act as agreed contains a list of banned AI applications that pose an ‘unacceptable risk’. These include partial bans on real-time and post-event biometric identification (such as face recognition surveillance), biometric categorisation systems, social scoring, behavioural manipulation and emotion recognition in workplaces or education.
The partial ban on ‘predictive’ policing and crime prediction systems is significantly weaker than the version voted for by the European Parliament in June. The final ban prohibits some systems that make predictions about individuals based on “personality traits or characteristics”. However, it does not ban geographic crime prediction systems used widely by police forces across Europe, despite evidence that these systems reinforce existing racism and discrimination in policing.
Even this partial ban is subject to further caveats and exemptions. These loopholes, which may make the partial ban even weaker, will only be clear once the full text is published.
As an indication, a previous version of the prohibition in an early Council ‘compromise’ was so vague and broadly defined that it would not have prohibited any of the discriminatory and unjust systems currently in use in Europe. During the trilogue process, EU member states in the Council of the EU pushed hard to weaken the AI Act and to remove or water down many of its prohibitions.
The AI Act contains law enforcement exemptions for many of the banned uses, including real-time and post-event biometric identification. Law enforcement authorities are also exempt throughout the Act from transparency requirements, such as the requirement to publish details of the ‘high-risk’ systems they are using in a public database.
The AI Act will now also contain a ‘national security’ exemption, which prevents the entire Act, including bans and transparency requirements, from applying in the context of national security. ‘National security’ exemptions, which are historically vague and broadly defined, will further limit the actual impact of the Act, especially in a law enforcement context.
The AI Act will apply two years after it enters into force, with the bans applying six months after entry into force.
Griff Ferris, Senior Legal and Policy Officer at Fair Trials, said:
“It is disappointing that the European Union has missed this opportunity to protect people by fully banning ‘predictive’ policing and crime ‘prediction’ systems in the AI Act. Time and again, these systems have been proven to automate injustice, reinforce discrimination and undermine fundamental rights.
The inclusion of a partial prohibition is a small victory and goes some way to acknowledging the harm of these systems. However, it is further limited by exemptions for law enforcement authorities from basic transparency requirements as well as for ‘national security’, which demonstrate how the Act ultimately panders to police, government and industry. Much more needs to be done to meaningfully protect people and their rights.”