Abstract
When reporting on the EPSRC Human–Like Computing (HLC) workshop to the human–computer interaction (HCI) community, I identified four main goals for the area:
i) emulating human capabilities as a good model for general AI and robotics
ii) improving interaction with people through human–like computation
iii) developing new interaction paradigms for interacting with HLC agents
iv) learning more about human cognition and embodiment through HLC
The second of these is the key focus of the MI20-HLC call:
" Human-Like Computing (HLC) research aims to endow machines with human-like perceptual, reasoning and learning abilities which support collaboration and communication with human beings."
However, this goal necessarily implies the third, as more human-like capabilities inevitably change interaction design, which, for the past thirty years, has focused on the control of the computer as a relatively passive partner.
The first and the last goals will be important secondary outcomes for those working in AI/robotics and cognitive science/HCI respectively, and are likely to be mutually reinforcing. Indeed, I found that computational modelling of regret both improved machine learning and helped validate and elucidate a cognitive model of regret.
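To make the kind of mechanism concrete, here is a minimal, hypothetical sketch (my own illustration, not the model from the work mentioned above): regret is treated as the gap between the reward an agent actually received and the best outcome it believes another choice would have given, and that counterfactual signal scales how strongly the agent updates its estimates.

```python
# Minimal, hypothetical sketch of regret-modulated learning (illustrative
# only; not the computational model referred to in the abstract).
# Regret = gap between the reward obtained and the agent's estimate of its
# best alternative; larger regret drives a larger update.
import random

def regret_modulated_bandit(true_means, steps=1000, base_lr=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)  # learned value of each action
    for _ in range(steps):
        # epsilon-greedy action selection over current estimates
        if rng.random() < epsilon:
            action = rng.randrange(len(true_means))
        else:
            action = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[action], 1.0)
        # counterfactual comparison: what the agent believes its best
        # option would have paid, versus what it actually got
        regret = max(0.0, max(estimates) - reward)
        lr = base_lr * (1.0 + regret)  # regret amplifies the learning step
        estimates[action] += lr * (reward - estimates[action])
    return estimates

print(regret_modulated_bandit([0.2, 0.5, 0.8]))
```

The point of the sketch is only the coupling: a counterfactual "what might have been" signal feeds back into how fast the learner adapts, which is why a single mechanism can speak to both machine learning and cognitive modelling.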
An obvious application of (i) is to help with (ii), something I have found myself in collaborative work on web-scale inference, which was inspired by spreading activation models of the brain and then applied to aiding human form-filling. Paradoxically, though, as was evident with Weizenbaum's Eliza in the 1960s and Ramanee Peiris's work on personal interviews in the 1990s, the most human-like interactions may not depend on human-like computation! Yet this paradox may resolve itself: in preliminary work on the emergence of 'self', I suggest that the best way to create systems that embody human-like internal dynamics may be to focus on human-like external behaviour.
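To make the spreading-activation idea concrete, the following is a minimal, self-contained sketch (my own toy illustration; the graph, values and parameters are hypothetical, and this is not the web-scale system referred to above): activation seeds on values the user has already supplied, spreads with decay across links in a concept graph, and the most activated nodes become candidate suggestions for unfilled form fields.

```python
# Minimal sketch of spreading activation over a small concept graph
# (illustrative only; the web-scale inference work referred to above is
# not reproduced here). Activation seeds on known values and decays as
# it spreads, ranking candidate suggestions for unfilled fields.
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, iterations=3):
    """graph: {node: [neighbour, ...]}, seeds: {node: initial activation}."""
    activation = defaultdict(float, seeds)
    for _ in range(iterations):
        incoming = defaultdict(float)
        for node, level in list(activation.items()):
            neighbours = graph.get(node, [])
            share = decay * level / max(len(neighbours), 1)
            for neighbour in neighbours:
                incoming[neighbour] += share
        for node, extra in incoming.items():
            activation[node] += extra
    return sorted(activation.items(), key=lambda kv: -kv[1])

# Hypothetical toy example: a user has typed "Lancaster" into an address form.
graph = {
    "Lancaster": ["LA1", "United Kingdom", "Lancashire"],
    "Lancashire": ["United Kingdom", "Preston"],
    "LA1": ["Lancaster"],
}
print(spread_activation(graph, {"Lancaster": 1.0}))
```

In the toy run, the nodes linked to the seed end up with the highest activation after the seed itself, which is the kind of ranking a form-filling aid can surface as suggestions; the decay parameter controls how far plausible inference is allowed to spread.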
From an HCI point of view, (ii) and (iii) are the most central. The core of HCI is to understand the embodied interactions of people with computers and with one another in real-world situations, a crucial input into (ii). However, as noted, most user interface design advice assumes a passive computational device. I've been involved in some formal modelling of interactions where the computer system is more active, and there is work on ambient intelligence and human-robot interaction, but substantial research is needed on (iii).
I have also had a long-standing personal interest in the broader social and societal issues of IT and AI, including the first paper on privacy in the HCI literature. As far back as 1992, "Human Issues in the use of Pattern Recognition Techniques" looked at problems with black-box algorithms, including the potential for gender and ethnic discrimination, issues that have recently come to the fore both in celebrated cases, such as Google's 'racist' search results, and in the EU General Data Protection Regulation, which will mean that, in some circumstances, algorithms will have to be able to explain their results. Of course, this too is a challenge, not an obstacle; indeed, the 1992 paper led directly to the development of more humanly comprehensible database interrogation algorithms.
Original language | English |
---|---|
Number of pages | 3 |
Publication status | Published - 25 Oct 2016 |
Event | Machine Intelligence 20 - Human-Like Computing Workshop (MI20-HLC), Windsor, United Kingdom. Duration: 23 Oct 2016 → 25 Oct 2016 |
Conference
Conference | Machine Intelligence 20 - Human-Like Computing Workshop (MI20-HLC) |
---|---|
Country/Territory | United Kingdom |
City | Windsor |
Period | 23/10/16 → 25/10/16 |
Keywords
- Human–computer interaction
- Web-scale reasoning
- Spreading activation
- Consciousness of self
- Regret
- Cognitive models
- Low-intention interaction
- Intelligent interfaces
- Human–like computing