Brain processing of gloss information with 2D and 3D depth cues

Hua-Chun Sun, Massimiliano Di Luca, Roland Fleming, Alexander Muryy, Hiroshi Ban, Andrew Welchman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Surface gloss information conveyed by image cues (i.e., highlights) has been shown to be processed in ventral and dorsal areas. In this study we used fMRI to distinguish the brain areas that selectively process 2D and 3D cues about surface gloss. In a first experiment we used 2D images of random objects with glossy surfaces, where diffuse highlights could be presented rotated by 45 degrees to make the object look matte. In a second experiment we used binocular cues: the specular reflections of the environment on random shapes could have disparities coincident with the surface, so as to appear painted on and thus make the object look matte. The same twelve participants took part in both experiments, in which fMRI activations were measured over the whole brain with an Echo-Planar Imaging sequence (32 slices, TR 2000 ms, TE 35 ms, voxel size 2.5 × 2.5 × 3 mm). We performed Multi-Voxel Pattern Analysis to test whether a classifier trained to discriminate glossy vs. matte objects with 2D cues could still discriminate with 3D cues, and vice versa. We found transfer effects from 2D to 3D cues in early (V1, V2) and dorsal visual areas (V3d, V3A/B, V7, IPS). This transfer suggests the presence of circuits that process gloss independently of the type of cue in dorsal areas only. We did not find transfer from training with 3D cues to 2D cues, suggesting that stereoscopic information related to gloss has a pattern of activation that is additional to the representation of gloss. Meeting abstract presented at VSS 2015.
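The cross-cue transfer analysis described above can be illustrated with a minimal sketch: train a classifier on glossy-vs-matte activation patterns from one cue type (2D highlights), then test it on patterns from the other cue type (3D disparity). This is not the authors' pipeline; the data below are synthetic stand-ins for voxel patterns in a single region of interest, and the shared `gloss_pattern` component is an assumption used only to make transfer possible in the toy example.

```python
# Sketch of cross-cue MVPA transfer (synthetic data, hypothetical ROI):
# train on 2D-cue trials, test on 3D-cue trials.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Assumed shared "gloss" activation component across both cue types.
gloss_pattern = rng.normal(size=n_voxels)

def simulate(cue_noise):
    """Simulate matte (0) and glossy (1) trial patterns for one cue type."""
    X, y = [], []
    for label in (0, 1):
        base = label * gloss_pattern
        X.append(base + rng.normal(scale=cue_noise, size=(n_trials, n_voxels)))
        y.append(np.full(n_trials, label))
    return np.vstack(X), np.concatenate(y)

X_2d, y_2d = simulate(cue_noise=1.0)  # e.g. 2D highlight-rotation cue
X_3d, y_3d = simulate(cue_noise=1.0)  # e.g. 3D disparity cue

clf = LinearSVC().fit(X_2d, y_2d)     # train on 2D-cue trials
transfer_acc = clf.score(X_3d, y_3d)  # test on 3D-cue trials
print(f"2D-to-3D transfer accuracy: {transfer_acc:.2f}")
```

Above-chance transfer accuracy in a given region would indicate a gloss representation that generalizes across cue types, which is the logic behind the 2D-to-3D transfer reported in dorsal areas.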

Original language: English
Title of host publication: Journal of Vision
Publication status: Published - 1 Sept 2015


