Causal inference and temporal predictions in audiovisual perception of speech and music

Uta Noppeney, Hwee Ling Lee

Research output: Contribution to journal › Article › peer-review


Abstract

To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses.
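The arbitration the abstract describes can be read as Bayesian causal inference over temporal disparities: the smaller the measured audiovisual asynchrony, the more probable a common source, and hence integration. The Python sketch below illustrates this idea only; it is not the authors' model, and the sensory noise widths (sigma_a, sigma_v), the independent-source spread (sigma_s), and the common-cause prior (p_common) are hypothetical values chosen purely for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_common_cause(delta, sigma_a=0.04, sigma_v=0.02, sigma_s=0.3, p_common=0.5):
    """
    Posterior probability that auditory and visual events share a common
    cause, given a measured audiovisual asynchrony `delta` (in seconds).

    Illustrative assumptions (not from the paper): Gaussian sensory noise
    in each modality; under a common cause the true asynchrony is zero, so
    the measurement reflects noise alone; under independent causes the
    asynchrony is additionally spread by sigma_s.
    """
    noise = math.hypot(sigma_a, sigma_v)                # combined sensory noise
    like_common = gaussian_pdf(delta, 0.0, noise)       # C = 1: same source
    like_indep = gaussian_pdf(delta, 0.0, math.hypot(noise, sigma_s))  # C = 2
    post = p_common * like_common
    return post / (post + (1 - p_common) * like_indep)

# Small asynchronies favour integration; large ones favour segregation.
for d in (0.0, 0.05, 0.15, 0.4):
    print(f"asynchrony {d * 1000:4.0f} ms -> P(common cause) = {p_common_cause(d):.2f}")
```

With these illustrative parameters, asynchronies of a few tens of milliseconds still favour a common cause, while asynchronies of several hundred milliseconds favour segregation, which is the qualitative pattern a causal-inference account of integration versus segregation predicts.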
Original language: English
Journal: Annals of the New York Academy of Sciences
Early online date: 31 Mar 2018
DOIs
Publication status: E-pub ahead of print - 31 Mar 2018

