Abstract
Artificial General Intelligence (AGI) offers enormous benefits for humanity, yet it also poses great risk. The aim of this systematic review was to summarise the peer-reviewed literature on the risks associated with AGI. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Sixteen articles were deemed eligible for inclusion. Article types included in the review were classified as philosophical discussions, applications of modelling techniques, and assessments of current frameworks and processes in relation to AGI. The review identified a range of risks associated with AGI: AGI removing itself from the control of human owners/managers; AGI being given or developing unsafe goals; development of unsafe AGI; AGIs with poor ethics, morals, and values; inadequate management of AGI; and existential risks. Several limitations of the AGI literature base were also identified, including a limited number of peer-reviewed articles and modelling techniques focused on AGI risk, a lack of research on risks specific to the domains in which AGI may be implemented, a lack of specific definitions of AGI functionality, and a lack of standardised AGI terminology. Recommendations addressing these issues in AGI risk research are required to guide AGI design, implementation, and management.
| Original language | English |
| --- | --- |
| Journal | Journal of Experimental and Theoretical Artificial Intelligence |
| Early online date | 13 Aug 2021 |
| DOIs | |
| Publication status | E-pub ahead of print - 13 Aug 2021 |
Bibliographical note
Publisher Copyright: © 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
Keywords
- Artificial General Intelligence
- artificial intelligence
- existential threat
- risk
- safety
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Artificial Intelligence