Fair Trials calls for an EU Artificial Intelligence Act for fundamental rights
Fair Trials and 114 civil society organisations have launched a collective statement to call for an Artificial Intelligence Act which foregrounds fundamental rights.
The statement, drafted by European Digital Rights (EDRi), Access Now, Fair Trials, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, PICUM, and ANEC, outlines central recommendations to guide the European Parliament and Council of the European Union in amending the European Commission’s AI Act proposal.
Fair Trials Legal and Policy Officer Griff Ferris said:
“The EU’s current proposal to legislate on AI does not protect against the discrimination inherent in policing and criminal justice, and undermines the fight against racism in the EU.”
“AI systems are being used to create predictions, profiles and risk assessments that affect people’s lives in a very real way. Among many serious and severe punitive outcomes, these AI-generated predictions can lead to people, sometimes even children, being placed under surveillance, stopped and searched, arrested and even prosecuted without objective evidence of any crime, with minoritised ethnic people often the targets.”
“The EU needs to act urgently to ensure that it passes legislation that protects our fundamental rights, including the right to a fair trial.”
In the statement published today, the signatories call for:
- Prohibitions on all AI systems that pose an unacceptable risk to fundamental rights, including a ban on the use of AI systems that attempt to profile and predict future criminal behaviour.
- Obligations on users of (i.e. those deploying) high-risk AI systems to facilitate accountability to those impacted by AI systems.
- Consistent and meaningful public transparency.
- Meaningful rights and redress for people impacted by AI systems.
- A cohesive, flexible and future-proof approach to the risk of AI systems.
- A truly comprehensive AI Act that works for everyone.
Artificial intelligence (AI) systems are increasingly being used in all areas of public life. However, the lack of adequate protections and regulation on the development and deployment of AI-powered technology poses a threat to our digital and human rights. In Europe, we have already witnessed the negative impact of AI when governed incorrectly. For example, we have seen how the use of predictive systems in policing and criminal justice has led to increased over-policing and over-criminalisation of racialised communities, and how poor, working-class and migrant areas are being wrongfully targeted by fraud detection systems. Facial recognition and similar systems have been used across Europe in ways that lead to biometric mass surveillance.
By fostering mass surveillance and amplifying some of the deepest societal inequalities and power imbalances, AI systems are putting our fundamental rights and democratic processes and values at great risk. That is why the European Union (EU) institutions’ proposal for an AI Act is a globally significant step. But the AI Act must address the structural, societal, political and economic impacts of the use of AI. This will ensure that the law is future-proof, and prioritises the protection of fundamental rights.
Fair Trials' report, Automating Injustice, demonstrates how the use of AI and automated decision-making systems by law enforcement, judicial and other criminal justice authorities across Europe is reinforcing discrimination and undermining fundamental human rights, including the right to a fair trial and the presumption of innocence.