
EU Commission ‘AI Act’ Consultation: Fair Trials' Response


We welcome the fact that the EU is taking a much-needed legislative approach to regulate and limit the use of artificial intelligence (AI), and that it recognises that the use of AI in law enforcement and criminal justice can have serious implications for fundamental rights.

However, while the Act recognises these risks to some extent, it does not go nearly far enough to prevent certain fundamentally harmful uses, particularly in relation to law enforcement and criminal justice, uses that will have damaging consequences across Europe for generations.

To prevent these harms meaningfully, the Act must prohibit the use of AI by law enforcement, judicial and criminal justice authorities to predict, profile or assess people’s risk or likelihood of ‘criminal’ behaviour, to generate reasonable suspicion, or to justify law enforcement or criminal justice action, such as surveillance, stop and search, arrest, detention, pre-trial detention, sentencing and probation. No amount of safeguards, short of a full statutory prohibition, will effectively protect against these fundamental harms.

In the absence of a full prohibition, several bare-minimum safeguards and requirements can be adopted to prevent additional harms, uphold the rule of law, safeguard justice systems and lessen the fundamental rights impact of these AI systems. In particular:

  • AI systems used in law enforcement and criminal justice contexts must be subject to mandatory, independent bias testing, although the feasibility of such testing depends on the availability of criminal justice data, which is severely lacking in the EU;
  • The AI Act must require greater openness, transparency and explainability of AI systems, their use and the decisions they inform, and, significantly, must ensure transparency not only for the users of these systems but also for the individuals impacted by AI or AI-assisted decisions;
  • It is crucial that individuals have effective avenues to challenge not just individual AI decisions, but also the systems themselves; and
  • The AI Act exempts several uses of AI from its safeguards. This leaves a lack of protection against a technology that can engage and infringe fundamental rights, including the right to a fair trial and the rights to privacy and data protection, and that can result in discrimination based on race, socio-economic status or class, and nationality.