Selective use of gaze information to improve ASR performance in noisy environments by cache-based class language model adaptation

Ao Shen, Neil Cooke, Martin Russell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Using information from a person's gaze has the potential to improve ASR performance in acoustically noisy environments. However, previous work has yielded relatively minor improvements. A cache-based language model adaptation framework is presented in which the cache contains a sequence of gaze events, classes represent visual context and task, and the relative importance of gaze events is considered. An implementation in a full ASR system is described and evaluated on a set of gaze-speech data recorded in both a quiet and an acoustically noisy environment. Results demonstrate that selectively using gaze events based on measured characteristics significantly increases the WER improvement on speech recorded in the noisy environment, from 6.34% to 10.58%. This work highlights the need to selectively use information from gaze, to constrain the redistribution of probability mass between words during adaptation via classes, and to evaluate the system with gaze and speech collected in environments representative of real-world use. Copyright © 2013 ISCA.
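As a rough illustration of the scheme the abstract describes, the following is a minimal sketch of cache-based class language model adaptation with weighted, selectively filtered gaze events. It is not the authors' implementation: the names (GazeEvent, select_events, lam, min_weight) and the uniform within-class word distribution are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of cache-based class LM
# adaptation. Gaze events map to word classes; the cache induces a class
# distribution that is interpolated with a static language model.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class GazeEvent:
    word_class: str   # visual-context class of the fixated object (assumed)
    weight: float     # relative importance, e.g. fixation duration (assumed)


def select_events(events, min_weight=0.1):
    """Selective use of gaze: keep only events whose measured weight exceeds
    a threshold. The paper's selection criteria are richer; this threshold
    is purely illustrative."""
    return [ev for ev in events if ev.weight >= min_weight]


def cache_class_probs(events):
    """Weighted class distribution induced by the gaze-event cache."""
    totals = defaultdict(float)
    for ev in events:
        totals[ev.word_class] += ev.weight
    z = sum(totals.values()) or 1.0
    return {c: w / z for c, w in totals.items()}


def adapted_prob(word, static_prob, word_class, class_members, events, lam=0.2):
    """Interpolate the static LM with the cache-based class model:
    P(w) = (1 - lam) * P_static(w) + lam * P_cache(class(w)) * P(w | class(w)).
    P(w | class) is taken as uniform over class members for simplicity."""
    p_class = cache_class_probs(select_events(events)).get(word_class, 0.0)
    members = class_members.get(word_class, [word])
    p_word_given_class = 1.0 / max(len(members), 1)
    return (1.0 - lam) * static_prob + lam * p_class * p_word_given_class
```

Routing the cache probability through classes, as in the sketch, constrains how far probability mass can be redistributed between words during adaptation, which is one of the points the abstract emphasises.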

Original language: English
Title of host publication: INTERSPEECH'13
Pages: 1844-1848
Number of pages: 5
Publication status: Published - 2013

Keywords

  • Cache
  • Gaze
  • Language model adaptation
  • Multimodal
  • Speech recognition

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation
