How the EU can achieve legally trustworthy AI: a response to the European Commission’s proposal for an Artificial Intelligence Act

Nathalie Smuha, Emma Ahmed-Rengers, Adam Harkens, Wenlong Li, James Maclaren, Riccardo Piselli, Karen Yeung

Research output: Other contribution

Abstract

This document contains the response to the European Commission’s Proposal for an Artificial Intelligence Act from members of the Legal, Ethical & Accountable Digital Society (LEADS) Lab at the University of Birmingham. The Proposal seeks to give expression to the concept of ‘Lawful AI.’ This concept was mentioned, but not developed, in the Ethics Guidelines for Trustworthy AI (2019) of the Commission’s High-Level Expert Group on AI, which instead confined its discussion to the concepts of ‘Ethical’ and ‘Robust’ AI. After a brief introduction (Chapter 1), we set out the many aspects of the Proposal which we welcome and stress our wholehearted support for its aim to protect fundamental rights (Chapter 2). Subsequently, we develop the concept of ‘Legally Trustworthy AI,’ arguing that it should be grounded in respect for the three pillars on which contemporary liberal democratic societies are founded, namely fundamental rights, the rule of law, and democracy (Chapter 3). Drawing on this conceptual framework, we first argue that the Proposal fails to treat fundamental rights as claims with enhanced moral and legal status, which subject any interference with those rights to a demanding regime of scrutiny and to tests of necessity and proportionality. Moreover, the Proposal does not always accurately identify the wrongs and harms associated with different kinds of AI systems, nor does it appropriately allocate responsibility for them. Second, the Proposal does not provide an effective framework for the enforcement of legal rights and duties and does not ensure legal certainty and consistency, both of which are essential to the rule of law. Third, the Proposal neglects to ensure meaningful transparency, accountability, and rights of public participation, thereby failing to provide adequate protection for democracy (Chapter 4). Based on these shortcomings in respecting and promoting the three pillars of Legally Trustworthy AI, we provide detailed recommendations for the Proposal’s revision (Chapter 5).
Original language: English
Type: Submission to Public Consultation
Media of output: Written submission
Publisher: SSRN
Number of pages: 64
Publication status: Published - 5 Aug 2021

Publication series

Name: Artificial Intelligence - Law, Policy, & Ethics eJournal
Publisher: SSRN Network

Keywords

  • artificial intelligence
  • fundamental rights
  • democracy
  • rule of law
  • Lawful AI
  • Legally Trustworthy AI
  • regulation
