Abstract
Artificial Intelligence (AI) can collect Big Data on users without their awareness. It can identify a user's cognitive profile and steer them towards predetermined choices by exploiting their cognitive biases and decision-making processes. A Large Generative Artificial Intelligence Model (LGAIM) can amplify the potential for computational manipulation: it can make a user see and hear whatever is most likely to affect their decision-making, generating the perfect text accompanied by perfect images and sounds on the perfect website. Multiple international, regional and national bodies have recognised the existence of computational manipulation and the threat its use may pose to fundamental rights. The EU has even taken the first steps towards protecting individuals against computational manipulation. This paper argues that while manipulative AIs that rely on deception are addressed by existing EU legislation, some forms of computational manipulation, particularly where an LGAIM is used in the manipulative process, still fall outside the EU's protective shield. There is therefore a need to redraft existing EU legislation to cover every aspect of computational manipulation.
| Original language | English |
|---|---|
| Title of host publication | The Cambridge Handbook of Generative AI and the Law |
| Editors | Mimi Zou, Cristina Poncibò, Martin Ebers, Ryan Calo |
| Publisher | Cambridge University Press |
| Chapter | 4 |
| Pages | 43-64 |
| Number of pages | 22 |
| ISBN (Electronic) | 9781009492577, 9781009492553 |
| ISBN (Print) | 9781009492584 |
| DOIs | |
| Publication status | Published - Aug 2025 |
Keywords
- AI Act
- Computational Manipulation
- Persuasive Technology
- Dark Patterns
- Hypernudge
- Artificial Intelligence
- Generative AI
- AI Regulation
- GDPR
- UCPD