Abstract
This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture’s capabilities in the context of a simulated robot (a) moving target objects to desired locations or to particular people, or (b) following recipes to bake biscuits.
Original language | English
---|---
Journal | Künstliche Intelligenz: Forschung, Entwicklung, Erfahrungen
Early online date | 23 Sept 2019
DOIs |
Publication status | E-pub ahead of print - 23 Sept 2019
Keywords
- Human–robot collaboration
- Explanations
- Non-monotonic logical reasoning
- Probabilistic planning