Multimodal language processing: How preceding discourse constrains gesture interpretation and affects gesture integration when gestures do not synchronise with semantic affiliates.
Research output: Contribution to journal › Article › peer-review
Previous studies have suggested that a co-speech gesture needs to be synchronous with semantically related speech (its semantic affiliate) for successful semantic integration into a discourse model, because co-speech gestures are often highly ambiguous on their own. But not all gestures synchronise with their semantic affiliates; some precede them. The current study tested whether the interpretation of a gesture that does not synchronise with its semantic affiliate can be constrained by preceding verbal discourse and integrated into a recipient’s discourse model. A behavioural experiment (Experiment 1) showed that related discourse information can indeed constrain recipients’ interpretations of such gestures. Results from an ERP experiment (Experiment 2) confirmed that synchronisation between gesture and semantic affiliate is not essential for the gesture to become part of a discourse model, but only if the preceding context constrains the gesture’s meaning. In this case, we found evidence for post-semantic integration (P600, time-locked to the gesture’s semantic affiliate).
Number of pages: 17
Journal: Journal of Memory and Language
Early online date: 24 Dec 2020
Publication status: Published - Apr 2021
Keywords: gesture, ERP, synchrony, discourse