Learning the grounding of expressions for spatial relations between objects

Tiago Mota, Mohan Sridharan

Research output: Contribution to conference (unpublished) › Paper › peer-review


Abstract

Robots interacting with humans often have to recognize, reason about, and describe the spatial relations between objects. Prepositions are commonly used to describe such spatial relations, but it is difficult to equip a robot with comprehensive knowledge of these prepositions. This paper describes an architecture for incrementally learning and revising the grounding of spatial relations between objects. Answer Set Prolog, a declarative language, is used to represent and reason with incomplete knowledge that includes prepositional relations between objects in a scene. A generic grounding of prepositions for spatial relations, human input (when available), and nonmonotonic logical inference are used to infer spatial relations in 3D point clouds of given scenes, incrementally acquiring and revising a specialized metric grounding of the prepositions, and learning the relative confidence associated with each grounding. The architecture is evaluated on a benchmark dataset of tabletop images and on complex, simulated scenes of furniture.
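To illustrate what a generic geometric grounding of spatial prepositions might look like, here is a minimal sketch that labels the dominant axis of displacement between two object centroids (as could be extracted from a 3D point cloud). This is an illustrative approximation only, not the paper's architecture; all names (`Obj`, `ground_relation`) and the winner-take-all axis rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    """An object summarized by its centroid in a 3D point cloud."""
    name: str
    centroid: tuple  # (x, y, z); z is height

def ground_relation(a: Obj, b: Obj) -> str:
    """Label the dominant spatial relation of a relative to b,
    based on the axis with the largest centroid displacement."""
    dx = a.centroid[0] - b.centroid[0]
    dy = a.centroid[1] - b.centroid[1]
    dz = a.centroid[2] - b.centroid[2]
    # Pick the axis with the largest absolute displacement.
    axis, delta = max(zip(("x", "y", "z"), (dx, dy, dz)),
                      key=lambda p: abs(p[1]))
    return {("x", True): "right_of", ("x", False): "left_of",
            ("y", True): "behind",   ("y", False): "in_front_of",
            ("z", True): "above",    ("z", False): "below"}[(axis, delta > 0)]

cup = Obj("cup", (0.1, 0.0, 0.9))
table = Obj("table", (0.1, 0.0, 0.4))
print(ground_relation(cup, table))  # → above
```

A specialized metric grounding, as the abstract describes, would refine such generic rules from observations, e.g. by learning scene-specific distance thresholds and a confidence for each rule.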
Original language: English
Number of pages: 6
Publication status: Published - 21 May 2018
Event: Workshop on Perception, Inference and Learning for Joint Semantic, Geometric and Physical Understanding at ICRA 2018 - Brisbane, Australia
Duration: 21 May 2018 - 21 May 2018

Conference

Conference: Workshop on Perception, Inference and Learning for Joint Semantic, Geometric and Physical Understanding at ICRA 2018
Country/Territory: Australia
City: Brisbane
Period: 21/05/18 - 21/05/18
