Criminal justice and law enforcement decisions are increasingly influenced and even made by artificial intelligence (AI), including machine-learning algorithms and automated decision-making.
One of the most controversial areas is ‘predictive policing’, in which AI is used to ‘predict’ the likelihood of certain criminal acts occurring in a particular area, profile individuals as likely to commit criminal acts in the future, or assess people’s ‘risk’ of criminality.
There are significant and fundamental problems with predictive policing and its use in criminal justice: not least the stereotyping and discrimination on which these models are based, but also the unjust and discriminatory decisions they have been shown to produce.
These automated systems engage and infringe fundamental rights: the right to a fair trial, especially the presumption of innocence; privacy and data protection rights; and the right to be free from discrimination based on race, socio-economic status or class, nationality and background.
Watch our panellists explain how these systems work, expose their fundamental flaws and harmful impacts, and set out the need for strong legal protections and frameworks. Led by Griff Ferris, Legal and Policy Officer at Fair Trials, the experts in this webinar included: