Artificial intelligence (AI) and automated decision-making (ADM) systems are increasingly used by European law enforcement and criminal justice authorities to predict and profile people’s actions and assess their ‘risk’ of criminality or re-offending in the future.
These AI and ADM systems can reproduce and reinforce discrimination and exacerbate inequality on grounds including, but not limited to, race, socio-economic status and nationality. They can also engage and infringe fundamental rights, including the right to a fair trial and the presumption of innocence.
In this webinar we discuss Fair Trials’ new report, Automating Injustice, on the use of these systems and their harmful impact, with input from speakers with first-hand experience of the harm these systems cause, experts on AI and automated systems in criminal justice, and leading European policymakers.
Chair: Laure Baudrihaye-Gérard, Legal Director (Europe), Fair Trials
Panellists:
Diana Sardjoe, Founder, De Moeder is de Sleutel (The Mother is the Key), mother of children impacted by risk modelling and profiling systems
Petar Vitanov MEP (S&D, Bulgaria), Rapporteur of the LIBE Committee report on AI in criminal matters
Sarah Chander, Senior Policy Advisor, European Digital Rights (EDRi)
Martin Sacleux, Legal Advisor at the Council of Bars and Law Societies of Europe (CCBE)
Griff Ferris, Legal & Policy Officer, Fair Trials (presenting the Automating Injustice report)