Abstract
Autonomous robots that are to assist humans in their daily lives must recognize and understand the meaning of objects in their environment. However, because the world is open-ended, robots must be able to learn about previously unknown objects and extend their knowledge online. In this work we investigate the problem of unknown object hypothesis generation, and employ a semantic web-mining framework along with deep-learning-based object detectors. This allows us to exploit both visual and semantic features in combined hypothesis generation. Experiments on data from mobile robots in real-world application deployments show that this combination improves performance over the use of either method in isolation.
Original language | English |
---|---|
Title of host publication | 2017 IEEE International Conference on Robotics and Automation (ICRA) |
Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
Pages | 2774-2779 |
Number of pages | 6 |
ISBN (Electronic) | 9781509046331 |
ISBN (Print) | 9781509046348 (PoD) |
DOIs | |
Publication status | Published - 24 Jul 2017 |
Event | 2017 IEEE International Conference on Robotics and Automation (ICRA 2017) - Singapore, 29 May 2017 → 3 Jun 2017 |
Conference
Conference | 2017 IEEE International Conference on Robotics and Automation (ICRA 2017) |
---|---|
City | Singapore |
Period | 29/05/17 → 03/06/17 |
Keywords
- Semantics
- Knowledge based systems
- Three-dimensional displays
- Visualization
- Mobile robots
- Service robots