Police forces and criminal justice authorities across Europe are using data, algorithms and artificial intelligence (AI) to ‘predict’ whether certain people are at ‘risk’ of committing crime in the future, and whether and where crime will occur in certain areas.
We at Fair Trials are calling for a ban on ‘predictive’ policing and justice systems. Take the quiz below to see if you’d be profiled or seen as a ‘risk’ – and find out how to support our campaign.
Unlike the authorities, we will of course not collect or retain any information about you or your answers!
Through our research, we know that more and more police forces and criminal justice authorities across Europe are using AI and other data-driven systems to profile people and try to ‘predict’ whether they might commit a crime or are at ‘risk’ of criminality, and to profile areas to ‘predict’ whether crime will occur there in the future. These systems have been shown to rely on discriminatory and flawed data and profiles to make these assessments and predictions. They try to determine your risk of criminality, or predict the locations of crime, based on:
These systems have been used by police and criminal justice authorities to decide whether to target or take action against people – including children – and areas or locations, such as:
Predictive systems used in policing and criminal justice:
The European Union (EU) is discussing a new law to regulate the use of AI. The Artificial Intelligence Act will bring in some safeguards, limiting and even banning some uses of AI to protect people and their rights in the EU, but it does not go far enough. We are calling for a ban on predictive policing and justice AI systems to be included in the #AIAct, alongside several other safeguards. Many MEPs agree with us, but we need to persuade more of them that these flawed systems must be banned.
These automated and algorithmic systems are often secret and opaque, with authorities refusing to provide information on how they work. We, however, will of course explain our ‘algorithm’. Every question in our example profiling tool corresponds directly to information actively used by law enforcement and criminal justice authorities in their own predictive and profiling systems and databases. In reality, matching just a few of these pieces of information (as asked by our questions) can be enough for a person to be marked as a ‘risk’; likewise, an area fitting a similar profile will be marked as at ‘risk’ of crime occurring. These assessments are obviously discriminatory and unjust – we have built our own transparent and explainable version to show just how discriminatory and unjust these systems are.
0–3 ‘Yes’ answers: ‘Low’ risk outcome
4–5 ‘Yes’ answers: ‘Medium’ risk outcome
6–10 ‘Yes’ answers: ‘High’ risk outcome
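To make the quiz’s transparency concrete, the thresholds above can be written as a few lines of plain code. This is a minimal sketch of our own example scoring only (the function name is ours, not from any police system), and it illustrates how crude such risk labelling is:

```python
def risk_outcome(yes_count: int) -> str:
    """Map the number of 'Yes' answers (0-10) to the quiz's risk label."""
    if not 0 <= yes_count <= 10:
        raise ValueError("yes_count must be between 0 and 10")
    if yes_count <= 3:
        return "Low"
    if yes_count <= 5:
        return "Medium"
    return "High"
```

That is the entire ‘algorithm’: a simple tally of profile matches. Real predictive policing systems are far more opaque, but as our research shows, they can be just as blunt in practice.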