Towards a Theory of Explanations for Human–Robot Collaboration

Mohan Sridharan, Ben Meadows

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)
104 Downloads (Pure)

Abstract

This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture’s capabilities in the context of a simulated robot (a) moving target objects to desired locations or to people; or (b) following recipes to bake biscuits.
Original language: English
Journal: Kuenstliche Intelligenz: Forschung, Entwicklung, Erfahrungen
Early online date: 23 Sep 2019
Publication status: E-pub ahead of print - 23 Sep 2019

Keywords

  • Human–robot collaboration
  • Explanations
  • Non-monotonic logical reasoning
  • Probabilistic planning

