
Ahead of the AI Act vote in the European Parliament, civil society calls on Members of the European Parliament (MEPs) to ensure the EU Artificial Intelligence Act (AI Act) prioritises fundamental rights and protects people affected by artificial intelligence (AI) systems.
Increasingly we see the deployment of AI systems to monitor and identify us in public spaces, predict our likelihood of criminality, re-direct policing and immigration control to already over-surveilled areas, facilitate violations of the right to claim asylum and the presumption of innocence, predict our emotions and categorise us using discriminatory inferences, and to make crucial decisions about us that determine our access to welfare, education and employment.
Without proper regulation, AI systems will exacerbate existing societal harms: mass surveillance, structural discrimination, the centralised power of large technology companies, unaccountable public decision-making and environmental extraction. The complexity of these systems, the lack of accountability and public transparency, and the few available avenues for redress make it difficult for people to enforce their rights when they are harmed by AI systems. These barriers pose a particular risk to the most marginalised in society.
The EU’s AI Act can, and should, address these issues by ensuring that AI development and use operates within a framework of accountability, transparency and appropriate, fundamental rights-based limitations. We call on MEPs to ensure the following in the AI Act vote:
1. Empower people affected by AI systems
• Ensure horizontal and mainstreamed accessibility requirements for all AI systems;
• Ensure people are notified and have the right to seek information when they are affected by AI-assisted decisions and outcomes;
• Include a right for people affected to lodge a complaint with a national authority, if their rights have been violated by the use of an AI system;
• Include a right to representation of natural persons and the right for public interest organisations to lodge standalone complaints with a national supervisory authority;
• Include rights to effective remedies for the infringement of rights.
2. Ensure accountability and transparency for the use of AI
• Include an obligation on users to conduct and publish a fundamental rights impact assessment before each deployment of a high-risk AI system, and to meaningfully engage civil society and affected people in this process;
• Require all users of high-risk AI systems, and users of all systems in the public sphere, to register their use in the European AI database before deployment;
• Ensure that the classification process for high-risk AI systems prioritises legal certainty and provides no loophole for providers to circumvent legal scrutiny;
• Ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU.
3. Prohibit AI systems that pose an unacceptable risk for fundamental rights
• A full ban on real-time and post (retrospective) remote biometric identification in publicly accessible spaces, by all actors, without exception;
• A prohibition of all forms of predictive and profiling systems in law enforcement and criminal justice (location / place-based and person-based);
• Prohibitions on the use of AI in migration contexts to make individual risk assessments and profiles based on personal and sensitive data, and on predictive analytic systems used to interdict, curtail and prevent migration;
• A prohibition on biometric categorisation systems that categorise natural persons according to sensitive or protected attributes as well as the use of any biometric categorisation and automated behavioural detection systems in publicly accessible spaces;
• A ban on the use of emotion recognition systems to infer people’s emotions and mental states from physical, physiological, behavioural and biometric data.
We call on MEPs to vote to include these protections in the AI Act and ensure the Regulation is a vehicle for the promotion of fundamental rights and social justice.
For a detailed outline of how the AI Act can better protect fundamental rights, see this statement signed by 123 civil society organisations. More information on amendments proposed by civil society can be found here.
Signed,
1. European Digital Rights (EDRi)
2. Access Now
3. Algorithm Watch
4. Amnesty International
5. Article 19
6. Bits of Freedom
7. Electronic Frontier Norway (EFN)
8. European Center for Not-for-Profit Law (ECNL)
9. European Disability Forum
10. Fair Trials
11. Homo Digitalis
12. Irish Council for Civil Liberties (ICCL)
13. Panoptykon Foundation
14. Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM)
15. #jesuislà
16. Afrique Culture Maroc
17. AI Forensics
18. AI Now Institute
19. Alternatif Bilisim (AiA)
20. Alliance4Europe
21. Are You Syrious?
22. Association for Juridical Studies on Immigration (ASGI)
23. autonomic
24. Avaaz Foundation
25. Baobab Experience
26. Border Violence Monitoring Network
27. Centre for Youths Integrated Development
28. Civil Liberties Union for Europe
29. Coalition For Women In Journalism (CFWIJ)
30. Coalizione Italiana Libertà e Diritti civili
31. Comisión General Justicia y Paz
32. DataEthics.eu
33. Defend Democracy
34. Deutsche Vereinigung für Datenschutz e.V. (DVD)
35. Digitalcourage
36. Digitale Gesellschaft, Switzerland
37. Državljan D / Citizen D
38. Each One Teach One (EOTO) e. V.
39. Ekō
40. Equipo Decenio Afrodescendiente España
41. Eumans
42. European Civic Forum
43. European Network Against Racism (ENAR)
44. European Sex Workers Rights Alliance
45. Fair Vote UK
46. Faith Matters EU
47. FUNDACIÓN SECRETARIADO GITANO
48. Gong
49. Greek Forum of Migrants
50. Health Action International
51. Hermes Center
52. horizontl Collaborative
53. IT-Pol Denmark
54. Ivorian Community of Greece
55. La Strada International
56. Lafede.cat
57. Lie Detectors
58. Ligue des droits humains
59. Migrants’ Rights Network
60. Mujeres Supervivientes
61. Open Knowledge Foundation Germany
62. ORBITvzw
63. Privacy International
64. Queerstion Media
65. Racism and Technology Center
66. Refugee Law Lab, York University
67. Refugees in danger (NGO) – Denmark
68. save space e.V.
69. Search for Common Ground
70. SOS RACISMO GIPUZKOA
71. Stichting The London Story
72. Superbloom (previously Simply Secure)
73. The Daphne Caruana Galizia Foundation
74. UNI Europa