Cross-modal visuo-tactile matching in a patient with a semantic disorder

Sara Forti, Glyn Humphreys

Research output: Contribution to journal › Article

9 Citations (Scopus)


This study is concerned with the nature of object representations coded within and between vision and touch, assessed through a study of perceptual matching abilities in a patient with impaired semantic knowledge for objects: JP. Prior work with JP has indicated that she has a category-specific deficit that is particularly severe for tools (see the Case report and Humphreys, G. W., Vernier, M.-P., & Riddoch, M. J., A semantic deficit for tools, in preparation). Here, we test whether, despite this semantic deficit, JP can perform object matching under various conditions. We demonstrate that JP could perform matching across sensory modalities (between touch and vision) when objects appeared in the same view, but this did not generalise across views. In addition, JP was able to match from 3D felt representations to 2D visual representations provided the stimuli were real (previously familiar) objects. The data support the idea that matching between touch and vision can be based on common view-specific, perceptual representations, sensitive to the familiarity of individual objects. © 2005 Elsevier Ltd. All rights reserved.
Original language: English
Pages (from-to): 1568-1579
Number of pages: 12
Issue number: 11
Publication status: Published - 1 Jan 2005


  • view-invariance
  • perceptual representation
  • tactile representation

