Artificial intelligence (AI), data and criminal justice

Law enforcement and criminal justice authorities are increasingly using artificial intelligence (AI) and automated decision-making (ADM) systems in their work.

These systems can be used to profile people as criminals, ‘predict’ their actions, and assess their risk of future behaviour, such as committing a crime. This can have devastating consequences for the people involved if they are profiled as criminals or treated as a ‘risk’ even though they have not actually committed a crime.

Criminal ‘prediction’ and ‘predictive’ policing systems are no longer confined to the realm of science fiction; they are being used by law enforcement around the world. Predictions, profiles and risk assessments based on data analysis, algorithms and AI often lead to real criminal justice outcomes, including monitoring or surveillance, repeated stop and search, questioning, fines and arrests. These systems can also heavily influence prosecution, sentencing and probation decisions.

‘Predictive’ policing & criminal ‘prediction’ systems

Law enforcement and criminal justice authorities are increasingly using big data, algorithms and artificial intelligence (AI) to profile people and ‘predict’ whether they are likely to commit a crime.

‘Predictive’ policing and criminal ‘prediction’ systems have been proven time and time again to reinforce discrimination and undermine fundamental rights, including the right to a fair trial and the presumption of innocence. This results in Black people, Roma, and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned across Europe.

For example, in the Netherlands, the ‘Top 600’ list attempts to ‘predict’ which young people will commit certain crimes. One in three of the ‘Top 600’ – many of whom have reported being followed and harassed by police – are of Moroccan descent. In Italy, a ‘predictive’ system called Delia, used by police, includes ethnicity data to profile and ‘predict’ people’s future criminality. Other systems seek to ‘predict’ where crime will be committed, repeatedly targeting areas with high populations of racialised people or more deprived communities.

Only an outright ban on these systems can stop this injustice. We have been campaigning for a prohibition in the European Union’s Artificial Intelligence Act (AI Act) and through other initiatives at international and national level.

Will 'predictive' systems profile you as a criminal?

We have created an example ‘predictive’ policing and criminal ‘prediction’ tool. Find out if you could be profiled as at risk of committing a crime.


Databases & data in policing and criminal justice

Across the world, police and criminal justice authorities hold vast databases containing huge amounts of information about people, events and alleged crimes. This data includes police reports, recorded incidents, cautions and convictions, as well as images, addresses, associates, vehicles and other property, and data about people’s race or ethnicity, gender, nationality and more. The databases can also include police ‘intelligence’ – often uncorroborated information about people’s alleged involvement in crime or other activity.

Crime data is a record of the activity and decisions of police and criminal justice authorities, documenting the crimes, locations and groups that are most policed and criminalised in society. As such, the data held in these databases reflects the structural biases and inequalities in society along lines of race, class, gender and other factors. For example, in the UK, Black people are policed and criminalised disproportionately compared with white people by every measure: stop and search, arrest, prosecution, pre-trial detention, imprisonment and more.

This data is used to justify or influence policing and criminal justice decisions, sometimes via ‘predictive’ or profiling systems: further monitoring, stop and search, questioning, arrest, prosecution, sentencing and probation. Increasingly, this information is also shared with other public authorities, affecting crucial, even life-changing decisions on immigration, housing, benefits, child custody or protection, and even punishment or exclusion in schools.
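To make this feedback loop concrete, here is a minimal, hypothetical simulation (all numbers and names are invented, not drawn from any real system). Two areas have the same underlying rate of offending, but one starts with more recorded incidents because it has historically been patrolled more heavily; a place-based ‘predictive’ system that allocates patrols according to past records then widens that gap, because police can only record what they are present to see.

```python
import random

# Hypothetical simulation of a place-based 'predictive' policing feedback loop.
# Both areas have the SAME underlying rate of offending; area_A simply starts
# with more recorded incidents because it has been patrolled more heavily.
random.seed(0)

TRUE_OFFENCE_RATE = 0.05                  # identical in both areas
recorded = {"area_A": 60, "area_B": 20}   # biased historical records
PATROLS_PER_DAY = 10

for day in range(365):
    total = sum(recorded.values())
    for area, count in list(recorded.items()):
        # The system sends patrols where past records are highest.
        patrols = round(PATROLS_PER_DAY * count / total)
        # Recorded crime grows with patrol presence, not with actual crime:
        # police can only record what they are present to see.
        for _ in range(patrols):
            if random.random() < TRUE_OFFENCE_RATE:
                recorded[area] += 1

print(recorded)  # area_A's lead grows, 'confirming' the original bias
```

Even though behaviour is identical in both areas, the recorded gap widens every simulated day, and the system reads its own output as evidence that it is working.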

Our campaign in Europe

The EU is in the process of regulating the use of AI through the AI Act. However, the proposed law does not go far enough. As more and more countries turn to AI and ADM in criminal justice, it is crucial that the EU becomes a leading standard-setter.

In our report, Automating Injustice, we document and analyse how this technology is already being used across Europe and expose its harmful consequences.

The EU must ban the use of predictive, profiling and risk assessment AI and ADM systems in law enforcement and criminal justice. Strict safeguards on transparency and accountability must be introduced for all other related uses.

Thanks to our campaign, EU political leaders are starting to recognise the harms caused by these systems and to support our call for them to be banned. In June 2023, in a landmark vote, the European Parliament voted to ban ‘predictive’ policing systems and to introduce transparency and accountability requirements. The law is now subject to negotiations between the European Parliament and the other EU bodies, the Council of the European Union and the European Commission.

Council of Europe

The Council of Europe, an international human rights organisation with 46 member states, is working on a new legal framework for the use of AI. It aims to produce a legally binding convention on AI, human rights, democracy and the rule of law.

We are an observer member of the Committee on Artificial Intelligence (CAI), the Council of Europe committee working on the framework. As part of this committee, we are working alongside other organisations to ensure the framework recognises the fundamental harms of ‘predictive’, profiling and other data-driven systems in law enforcement and criminal justice, and protects people and their rights.

Why we need to ban the use of AI to profile people


Read more about our work on AI and criminal justice in Reuters, New Scientist, EU Observer, Computer Weekly, Live Mint and TechCrunch.

What are the problems with AI?

There are fundamental flaws in how AI and automated systems are being implemented in criminal justice:

Discrimination and bias: AI and automated systems in criminal justice are designed, created and operated in a way that makes them predisposed to produce biased outcomes. This can stem from their purpose, such as targeting a certain type of crime or a specific area. It can also be a result of their use of biased data, which reflects structural inequalities in society and institutional biases in criminal justice and policing. As a result, these systems can reproduce and exacerbate discrimination based on race, ethnicity, nationality, socio-economic status and other grounds. These systemic, institutional and societal biases are so ingrained that it is questionable whether any AI or ADM system would produce unbiased outcomes.

Infringement of the presumption of innocence: Profiling people and taking action before a crime has been committed undermines the right to be presumed innocent until proven guilty in criminal proceedings. Often, these profiles and decisions are based not just on an individual’s behaviour but on factors far beyond their control, such as the actions of people they are in contact with, or demographic information about the neighbourhood they live in (sketched in the example after this list).

Lack of transparency and routes for redress: Any system that influences criminal justice decisions should be open to public scrutiny. However, technological barriers and deliberate, commercially motivated efforts to conceal how the systems work make it difficult to understand how such decisions are made. People are often unaware that they have been subject to an automated decision. Clear routes for challenging decisions – or the systems themselves – and for obtaining redress are also severely lacking.

These issues can profoundly affect people’s lives, threaten equality and infringe fundamental rights, including the right to a fair trial.
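The proxy problem mentioned above is worth making concrete: removing a protected attribute from a system’s inputs does not remove its influence if other features stand in for it. The sketch below uses invented data and invented weights (no real system’s formula is public here) to show how a simple risk scorer that never sees ethnicity can still score two people with identical individual records very differently, purely because of where they live and who they know.

```python
# Hypothetical illustration: a 'risk score' that never uses ethnicity directly
# can still penalise people for where they live and who they know -- factors
# outside their control that often correlate with race and class.

# Invented data: two people with identical individual records.
people = [
    {"name": "person_1", "prior_convictions": 0,
     "postcode_arrest_rate": 0.30, "contacts_flagged": 2},  # heavily policed area
    {"name": "person_2", "prior_convictions": 0,
     "postcode_arrest_rate": 0.05, "contacts_flagged": 0},  # lightly policed area
]

def risk_score(person):
    # Weights are invented for illustration; real systems rarely publish theirs.
    return (2.0 * person["prior_convictions"]
            + 5.0 * person["postcode_arrest_rate"]  # neighbourhood as proxy
            + 1.0 * person["contacts_flagged"])     # other people's behaviour

for person in people:
    print(person["name"], round(risk_score(person), 2))
# person_1 scores 3.5, person_2 scores 0.25: the same individual conduct, a
# 14x difference in 'risk', driven entirely by postcode and associates.
```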

What should states do?

States should implement regulation that ensures AI and ADM systems in criminal justice do not cause fundamental harms. Here is what states can do:

Profiling: Prohibit the use of predictive, profiling and risk assessment AI and ADM systems in law enforcement and criminal justice. Only an outright ban can protect people from the fundamental harms they cause. For all other uses of AI and ADM in criminal justice, states should implement a set of strict legal safeguards:

Bias testing: Independent testing for bias must be mandatory at every stage, including the design and deployment phases. To make such bias testing possible, criminal justice data collection must be improved, including data disaggregated by race, ethnicity and nationality. A minimal sketch of what such an audit could compute follows this list.

Transparency: It must be made clear how a system works, how it is operated, and how it arrives at a decision. Everyone affected by these systems and their outputs, such as suspects and defendants, as well as the general public, must be able to understand how they work.

Evidence of decisions: Human decision-makers in criminal justice must provide reasons for their decisions and evidence of how decisions were influenced by AI and ADM systems.

Accountability: A person must be told whenever an AI or ADM system has or may have impacted a criminal justice decision related to them. There must also be clear procedures for people to challenge AI and ADM decisions, or the systems themselves, and routes for redress.
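As an illustration of the bias-testing safeguard above, the sketch below computes two of the simplest disparity measures an independent audit might report: how often a system flags each group as ‘high risk’, and how often it flags people who were never subsequently convicted. The records and group labels are invented; a real audit would need the disaggregated data called for above, plus access to the system itself.

```python
from collections import defaultdict

# Invented audit records: (group, flagged_high_risk, later_convicted).
records = [
    ("group_A", True,  False), ("group_A", True,  True),
    ("group_A", True,  False), ("group_A", False, False),
    ("group_B", False, False), ("group_B", True,  True),
    ("group_B", False, False), ("group_B", False, False),
]

stats = defaultdict(lambda: {"n": 0, "flagged": 0, "innocent": 0, "false_pos": 0})
for group, flagged, convicted in records:
    s = stats[group]
    s["n"] += 1
    s["flagged"] += flagged
    if not convicted:              # never convicted of anything...
        s["innocent"] += 1
        s["false_pos"] += flagged  # ...yet flagged as 'high risk'

for group, s in sorted(stats.items()):
    print(f"{group}: flag rate {s['flagged'] / s['n']:.0%}, "
          f"false-positive rate {s['false_pos'] / s['innocent']:.0%}")
# group_A: flag rate 75%, false-positive rate 67%
# group_B: flag rate 25%, false-positive rate 0%
```

Gaps of this size across groups are exactly the kind of disparity a mandatory, independent audit should surface before and during deployment.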

Want to know more about AI and criminal justice?

Sign up for updates on the use of artificial intelligence in criminal justice systems
