Abstract
Robots collaborating with humans need to represent knowledge, reason, and learn, at both the sensorimotor level and the cognitive level. This paper summarizes the capabilities of an architecture that combines the complementary strengths of declarative programming, probabilistic graphical models, and reinforcement learning, to represent, reason with, and learn from qualitative and quantitative descriptions of incomplete domain knowledge and uncertainty. Representation and reasoning are based on two tightly-coupled domain representations at different resolutions. For any given task, the coarse-resolution symbolic domain representation is translated to an Answer Set Prolog program, which is solved to
provide a tentative plan of abstract actions, and to explain unexpected outcomes. Each abstract action is implemented by translating the relevant subset of the corresponding fine-resolution probabilistic representation to a partially observable Markov decision process (POMDP). Any high probability beliefs, obtained by the execution of actions based on the POMDP policy, update the coarse-resolution representation. When incomplete knowledge of the rules governing the domain dynamics results in plan execution not achieving the
desired goal, the coarse-resolution and fine-resolution representations are used to formulate the task of incrementally and interactively discovering these rules as a reinforcement learning problem. These capabilities are illustrated in the context of a mobile robot deployed in an indoor office domain.
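The fine-resolution step described above maintains a probabilistic belief over states, and commits any high-probability belief back to the coarse-resolution representation. A minimal sketch of such a POMDP-style Bayes-filter belief update is given below; the state names, the toy transition/observation models, and the commit threshold are all illustrative assumptions, not taken from the paper.

```python
# Sketch of a discrete POMDP belief update (Bayes filter), assuming a
# small state space. All names and numbers below are hypothetical.
COMMIT_THRESHOLD = 0.9  # assumed cutoff for a "high probability" belief


def belief_update(belief, action, observation, T, O):
    """b'(s') proportional to O(s', a, o) * sum_s T(s, a, s') * b(s)."""
    # Prediction step: push the belief through the transition model.
    predicted = {s2: sum(T[s][action][s2] * belief[s] for s in belief)
                 for s2 in belief}
    # Correction step: weight by the observation likelihood, then normalize.
    unnorm = {s2: O[s2][action][observation] * predicted[s2] for s2 in belief}
    z = sum(unnorm.values())
    return {s2: p / z for s2, p in unnorm.items()}


# Toy two-state example: is the target object in the 'office' or the 'lab'?
states = ['office', 'lab']
# 'look' does not move the object (identity transition).
T = {s: {'look': {s2: 1.0 if s == s2 else 0.0 for s2 in states}}
     for s in states}
# Noisy sensing: 'seen' is far more likely if the object is in the office.
O = {'office': {'look': {'seen': 0.8, 'not_seen': 0.2}},
     'lab':    {'look': {'seen': 0.1, 'not_seen': 0.9}}}

b = {'office': 0.5, 'lab': 0.5}
b = belief_update(b, 'look', 'seen', T, O)
# If b['office'] exceeded COMMIT_THRESHOLD, the corresponding fact would be
# added to the coarse-resolution (Answer Set Prolog) representation.
```

In the architecture summarized above, repeating such updates while executing the POMDP policy is what turns noisy fine-resolution observations into the symbolic facts the coarse-resolution planner reasons with.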
Original language | English
---|---
Title of host publication | Papers from the 2016 AAAI Spring Symposium
Subtitle of host publication | No. 3: Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform
Publisher | AAAI Press
Number of pages | 7
Publication status | Published - 21 Mar 2016
Event | AAAI Spring Symposium on Enabling Computing Research in Socially Intelligent Human-Robot Interaction - Stanford, United States
Duration | 21 Mar 2016 → 23 Mar 2016

Publication series

Name | AAAI Spring Symposium Series
---|---

Conference

Conference | AAAI Spring Symposium on Enabling Computing Research in Socially Intelligent Human-Robot Interaction
---|---
Country/Territory | United States
City | Stanford
Period | 21/03/16 → 23/03/16