EU Parliament approves landmark AI law
The AI Act includes bans on predictive policing and face recognition surveillance, as well as transparency and accountability requirements
- Yesterday, Members of the European Parliament (MEPs) approved the flagship Artificial Intelligence (AI) Act and voted to ban the use of predictive policing and criminal prediction systems. The ban is the first of its kind in Europe, and the first major ban on such systems worldwide.
- Fair Trials has been calling for a ban on these systems since 2021, on the basis that they are discriminatory and harmful, and infringe fundamental rights.
- MEPs also voted to ban remote biometric identification systems, such as face recognition surveillance, and ensure transparency and accountability measures for other AI systems.
Yesterday, MEPs voted to approve the text of the EU AI Act, a flagship legislative proposal to regulate AI according to its potential to cause harm.
In a plenary vote of the full European Parliament, MEPs voted on the text of the AI Act as finalised by its committees in May, together with last-minute amendments, after months of negotiations.
The finalised text includes a ban on predictive policing and criminal prediction systems used by law enforcement and criminal justice authorities in the EU, a ban which Fair Trials first called for in 2021.
MEPs voted for the ban, which will prohibit the use of AI systems for “making risk assessments of natural persons or groups” in order to “assess the risk of a natural person for offending or reoffending” as well as “for predicting the occurrence or reoccurrence of an actual or potential criminal offence”. The ban is the first of its kind in Europe and the first major legislative ban on such systems worldwide, although some cities and municipalities in the US have already implemented local bans.
The use of these systems by law enforcement and criminal justice authorities has been proven to reproduce and reinforce existing discrimination, and already results in Black people, Roma and other minoritised ethnic people being disproportionately stopped and searched, arrested, detained and imprisoned across Europe.
Fair Trials has documented numerous predictive and profiling systems used in policing and criminal justice which can result, and have resulted, in such discrimination, as well as how attempts to ‘predict’ criminal behaviour undermine fundamental rights, including the right to a fair trial and the presumption of innocence.
MEPs also voted to prohibit ‘remote biometric identification’, such as facial recognition systems, in publicly accessible spaces, as well as emotion recognition systems and the mass scraping of biometric data. They also agreed other safeguards, including improved public transparency of AI systems through registration in a public database, the right to an explanation of a decision made by an AI system, and effective remedies to challenge such decisions.
The text of the Act will now be subject to negotiations between the European Parliament, the European Commission and Member State representatives in the Council of the European Union, a process known as ‘trilogue’.
Griff Ferris, Senior Legal and Policy Officer at Fair Trials, said:
“This is a historic result. This ban will protect people from incredibly harmful, unjust and discriminatory predictive policing and criminal prediction systems.
“We’ve seen how the use of these systems repeatedly criminalises people, even whole communities, labelling them as criminals based on their backgrounds. These systems automate injustice, exacerbating and reinforcing racism and discrimination in policing and the criminal justice system, and feeding systemic inequality in society.
“The European Parliament has made clear that these systems must not be used in Europe, and follows community-led initiatives to ban these technologies in the US.”
Prohibition text and other amendments
The full text of the prohibition is below, within Article 5 of the Act, which comprises a list of ‘prohibited practices’:
“(da) the placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons;”
Fair Trials also supported several other amendments which it was pleased to see included in the final text of the Act, including amendments to:
- prohibit remote biometric identification (such as face recognition surveillance);
- require public transparency of ‘high-risk’ AI systems on a public register;
- give people the right to an explanation of an AI system decision;
- give people meaningful ability to challenge AI systems and obtain effective remedies;
- ensure that predictive AI systems in migration are considered high risk;
- ensure that ‘high-risk’ systems meet accessibility requirements for people with disabilities; and
- widen the definition of AI to cover the full range of AI systems which have been proven to impact fundamental rights.