Towards Anchoring Self-Learned Representations to Those of Other Agents

Research output: Contribution to conference (unpublished) › Paper

Authors

  • Martina Zambelli
  • Tobias Fischer
  • Maxime Petit
  • Antoine Cully
  • Yiannis Demiris

External organisations

  • Imperial College London

Abstract

In the future, robots will support humans in their everyday activities. One particular challenge that robots will face is understanding and reasoning about the actions of other agents in order to cooperate effectively with humans. We propose to tackle this using a developmental framework, in which the robot incrementally acquires knowledge. In particular, the robot 1) self-learns a mapping between motor commands and sensory consequences, 2) rapidly acquires primitives and complex actions from verbal descriptions and instructions given by a human partner, 3) discovers correspondences between the robot's body and other articulated objects and agents, and 4) employs these correspondences to transfer the knowledge acquired from the robot's point of view to the viewpoint of the other agent. We show that our approach requires very little a priori knowledge to achieve imitation learning, to find corresponding body parts of humans, and to take the perspective of another agent. This represents a step towards the emergence of a mirror-neuron-like system based on self-learned representations.
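Step 1 of the framework, self-learning a mapping between motor commands and sensory consequences, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple nearest-neighbour forward model trained on samples collected during motor babbling, with hypothetical command and outcome vectors.

```python
# Illustrative sketch (not the paper's method): a forward model that
# predicts the sensory consequence of a motor command from experience
# gathered by self-exploration (motor babbling).

class ForwardModel:
    """Maps motor commands to predicted sensory outcomes via 1-nearest-neighbour lookup."""

    def __init__(self):
        # Stored (motor_command, sensory_outcome) pairs from self-exploration.
        self.experience = []

    def observe(self, command, outcome):
        """Record one babbling sample: a command that was executed and what was sensed."""
        self.experience.append((command, outcome))

    def predict(self, command):
        """Return the outcome of the closest previously executed command."""
        if not self.experience:
            raise ValueError("no experience collected yet")
        def sq_dist(pair):
            stored_command, _ = pair
            return sum((a - b) ** 2 for a, b in zip(stored_command, command))
        _, outcome = min(self.experience, key=sq_dist)
        return outcome

# Motor babbling phase: execute commands, record sensory consequences.
model = ForwardModel()
model.observe((0.0, 0.0), (0.0,))
model.observe((1.0, 0.0), (0.5,))
model.observe((0.0, 1.0), (-0.5,))

# Prediction phase: the nearest stored command to (0.9, 0.1) is (1.0, 0.0).
print(model.predict((0.9, 0.1)))  # → (0.5,)
```

In practice, such a model would use a richer regressor over continuous sensorimotor data, but the lookup structure conveys the idea: knowledge of the body is acquired from the robot's own experience rather than hand-coded.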

Details

Original language: English
Number of pages: 6
Publication status: Published - 10 Oct 2016
Event: Workshop on Bio-inspired Social Robot Learning in Home Scenarios, at the IEEE/RSJ International Conference on Intelligent Robots and Systems 2016 - Daejeon, Korea, Republic of
Duration: 10 Oct 2016 – 10 Oct 2016

Conference

Conference: Workshop on Bio-inspired Social Robot Learning in Home Scenarios
Country: Korea, Republic of
City: Daejeon
Period: 10/10/16 – 10/10/16