Fair Trials is calling on the European Union to radically improve its new legislation on artificial intelligence to include meaningful safeguards to protect against discrimination, prevent uses of AI which undermine the presumption of innocence, and address significant loopholes which render any safeguards meaningless.
Griff Ferris, Legal and Policy Officer, said:
“The EU's new AI legislation needs radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.”
“The new legislation lacks any safeguards against discrimination, while the wide-ranging exemption for safeguarding public security completely undercuts what few safeguards there are in relation to criminal justice.”
“The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial, which includes restricting the use of systems that attempt to profile people and predict the risk of criminality.”
Fair Trials has serious concerns about the use of AI and automated decision-making systems in law enforcement and criminal justice. AI systems are used by law enforcement and criminal justice authorities to predict and profile people's actions in order to justify surveillance, stop and search, questioning, arrest and detention, as well as other non-criminal justice punishments such as the denial of welfare, housing, education or other essential services. In doing so, these systems engage and infringe fundamental rights, including the right to a fair trial and privacy and data protection rights, and result in discrimination based on race, socio-economic status or class, and nationality.
We welcome the fact that the EU is taking a much-needed legislative approach to regulate and limit the use of AI, and even more so that it recognises that the use of AI in criminal justice is high-risk. However, this legislation does not go nearly far enough to prevent certain fundamentally harmful uses or to provide strict, mandatory safeguards, a failure that will have damaging consequences across Europe for a generation.
There are no protections in the proposed legislation against the significant threat of discrimination in AI and automated decision-making systems.
Fair Trials is calling for significant improvements to the legislation to:
- Restrict the use of AI and automated decision-making systems to predict, profile and assess people's risk of criminality, to generate reasonable suspicion, and to justify law enforcement action, arrest and pre-trial detention. The use of these systems undermines the presumption of innocence and must not be allowed.
- Make rigorous bias testing mandatory for all AI and automated decision-making systems used in criminal justice; no system should be allowed to operate without it. The criminal justice data used to create, train and operate these systems reflects systemic, institutional and societal biases that result in Black people, Roma and other ethnic minorities being aggressively over-policed, detained and imprisoned across Europe. As a result, these systems have been shown to directly generate and reinforce biased and discriminatory criminal justice outcomes. These problems are so fundamental that it is questionable whether any such system could avoid producing discriminatory outcomes, but a rigorous testing regime is the bare minimum required.
- Ensure that individuals can challenge criminal justice decisions that are assisted by AI. AI and automated decision-making systems should be open to public scrutiny in the same way that all decision-making processes by public entities should be. Commercial interests and technological design should never be a barrier to transparency, and any criminal justice decision assisted or influenced by AI or automated decision-making systems must be fully open to meaningful scrutiny and challenge by any individual subject to it.