Automatic guidance of visual attention from verbal working memory

David Soto Blanco, Glyn Humphreys

Research output: Contribution to journal › Article

131 Citations (Scopus)

Abstract

Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize, Experiment 2; or merely to attend, Experiment 3) and subsequently were required to search for a target among different distractors, each embedded within a colored shape. In half of the trials, an object in the search array matched the prime, but this object never contained the target. Despite this, search was impaired relative to a neutral baseline in which the prime and search displays did not match. An interesting finding is that verbal primes were effective in generating the effects, and verbalization of visual primes elicited effects similar to those observed when primes were held in WM. However, the effects were absent when primes were only attended. The data suggest that there is automatic encoding into WM when items are verbalized and that verbal as well as visual WM can guide visual attention.
Original language: English
Pages (from-to): 730-737
Number of pages: 8
Journal: Journal of Experimental Psychology: Human Perception and Performance
Volume: 33
Publication status: Published - 1 Jan 2007

Keywords

  • top-down processing
  • working memory
  • visual search
  • visual attention
