Crossmodal content binding in information-processing architectures

H Jacobsson, Nicholas Hawes, G-J Kruijff, Jeremy Wyatt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any representation a single sensor could provide. Second, it needs to combine high-level representations (such as those for planning and dialogue) with sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to this problem have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these, and related, approaches can be used to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
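The binding idea sketched in the abstract — merging per-modality representations into a shared one whenever their content is compatible — can be illustrated with a minimal sketch. This is not the authors' implementation; all names (`Proxy`, `Union`, `bind`, the feature dictionaries) are hypothetical, and the compatibility test is deliberately simplified to exact feature agreement:

```python
# Illustrative sketch only: bind per-modality "proxies" into shared
# "unions" when none of their overlapping features disagree.
# All class and function names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Proxy:
    modality: str   # e.g. "vision", "dialogue"
    features: dict  # feature name -> value

@dataclass
class Union:
    proxies: list = field(default_factory=list)

    def features(self):
        # Merged view of all features contributed by bound proxies.
        merged = {}
        for p in self.proxies:
            merged.update(p.features)
        return merged

def compatible(proxy, union):
    # A proxy may bind when every feature it shares with the
    # union has the same value (unshared features are ignored).
    merged = union.features()
    return all(merged.get(k, v) == v for k, v in proxy.features.items())

def bind(proxies):
    # Greedy binding: attach each proxy to the first compatible
    # union, or start a new union if none matches.
    unions = []
    for p in proxies:
        for u in unions:
            if compatible(p, u):
                u.proxies.append(p)
                break
        else:
            unions.append(Union(proxies=[p]))
    return unions

# A visual detection and a dialogue referent agree on colour, so
# they bind into one union; the blue box stays separate.
proxies = [
    Proxy("vision",   {"colour": "red",  "shape": "mug"}),
    Proxy("dialogue", {"colour": "red"}),
    Proxy("vision",   {"colour": "blue", "shape": "box"}),
]
unions = bind(proxies)
print(len(unions))           # 2
print(unions[0].features())  # {'colour': 'red', 'shape': 'mug'}
```

A real system would of course use graded similarity scores and revisable bindings rather than exact matching and a greedy first-fit pass; the sketch only shows the shape of the shared representation the abstract describes.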
Original language: English
Title of host publication: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction
Publisher: Association for Computing Machinery
Number of pages: 8
ISBN (Print): 978-1-60558-017-3
Publication status: Published - 15 Mar 2008
Event: 3rd ACM/IEEE International Conference on Human Robot Interaction - Amsterdam, Netherlands
Duration: 12 Mar 2008 - 15 Mar 2008


Conference: 3rd ACM/IEEE International Conference on Human Robot Interaction


