Abstract
The next generation of artificial intelligence, known as Artificial General Intelligence (AGI), could either revolutionise or destroy humanity. Human Factors and Ergonomics (HFE) has a critical role to play in the design of safe and ethical AGI; however, there is little evidence that HFE is contributing to development programs. This paper presents the findings from a study which involved the use of the Work Domain Analysis-Broken Nodes approach to identify the risks that could emerge in a future ‘envisioned world’ AGI-based unmanned combat aerial vehicle system. The findings demonstrate that there are various potential risks, but that the most critical arise not from poor performance, but rather when the AGI attempts to achieve goals at the expense of other system values, or when the AGI becomes ‘super-intelligent’ and humans can no longer manage it. The urgent need for further work exploring the design of AGI controls is emphasised.
| Original language | English |
|---|---|
| Pages (from-to) | 560-564 |
| Number of pages | 5 |
| Journal | Proceedings of the Human Factors and Ergonomics Society |
| Volume | 66 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 27 Oct 2022 |
| Event | 66th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2022 - Atlanta, United States. Duration: 10 Oct 2022 → 14 Oct 2022 |
Bibliographical note
Publisher Copyright: © 2022 by Human Factors and Ergonomics Society. All rights reserved.
ASJC Scopus subject areas
- Human Factors and Ergonomics