Publication

EU Commission ‘AI Act’ Consultation: Fair Trials' Response

August 13, 2021 - European Commission

We welcome the fact that the EU is taking a much-needed legislative approach to regulate and limit the use of artificial intelligence (AI), and that it recognises that the use of AI in law enforcement and criminal justice can have serious implications for fundamental rights.

However, while the AI Act recognises these risks to some extent, it does not go nearly far enough to prevent certain fundamentally harmful uses of AI, particularly in law enforcement and criminal justice, uses that will have damaging consequences across Europe for generations.

To prevent these harms meaningfully, the Act must prohibit the use of AI by law enforcement, judicial and criminal justice authorities to predict, profile or assess people's risk or likelihood of 'criminal' behaviour, to generate reasonable suspicion, or to justify law enforcement or criminal justice action, such as surveillance, stop and search, arrest, detention, pre-trial detention, sentencing and probation. No safeguard short of a full statutory prohibition will protect effectively against these fundamental harms.

In the absence of a full prohibition, and to prevent additional harms, uphold the rule of law and safeguard justice systems, there are several bare minimum safeguards and requirements that can be adopted to lessen the fundamental rights impact of these AI systems. In particular:

  • AI systems used in law enforcement and criminal justice contexts must be subject to mandatory, independent bias testing; the feasibility of such testing, however, depends on the availability of criminal justice data, which is severely lacking in the EU;
  • The AI Act must require greater openness, transparency and explainability of AI systems, their use and the decisions they inform, and, significantly, must ensure transparency not only for the users of these systems but also for the individuals affected by AI or AI-assisted decisions;
  • It is crucial that there are effective avenues for individuals to challenge not just the AI decisions, but also the system itself; and
  • The AI Act includes several exemptions that remove safeguards from certain uses of AI. This leaves a lack of protection against technology that can engage and infringe fundamental rights, including the right to a fair trial, privacy and data protection, and that can result in discrimination based on race, socio-economic status or class, and nationality.

If you are a journalist interested in this story, please call the media team on +44 (0) 7749 785 932 or email [email protected]


Some activities in the following sections on this website are funded by the European Union’s Justice Programme (2014-2020): Legal Experts Advisory Panel, Defence Rights Map, Case Law Database, Advice Guides, Resources, Campaigns, Publications, News and Events. This content represents the views of the authors only and is their sole responsibility. It cannot be considered to reflect the views of the European Commission or any other body of the European Union. The European Commission does not accept any responsibility for use that may be made of the information it contains.