Selective use of gaze information to improve ASR performance in noisy environments by cache-based class language model adaptation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

External organisations

  • University of Birmingham

Abstract

Using information from a person's gaze has the potential to improve ASR performance in acoustically noisy environments. However, previous work has resulted in relatively minor improvements. A cache-based language model adaptation framework is presented where the cache contains a sequence of gaze events, classes represent visual context and task, and the relative importance of gaze events is considered. An implementation in a full ASR system is described and evaluated on a set of gaze-speech data recorded in both a quiet and an acoustically noisy environment. Results demonstrate that selectively using gaze events based on measured characteristics significantly increases the WER improvement on speech recorded in the noisy environment from 6.34% to 10.58%. This work highlights the need to selectively use information from gaze, to constrain the redistribution of probability mass between words during adaptation via classes, and to evaluate the system with gaze and speech collected in environments that represent its real-world utility. Copyright © 2013 ISCA.
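To make the adaptation scheme in the abstract concrete, the following is a minimal sketch of how a cache of weighted gaze events could drive class-based language model interpolation. It is an illustrative assumption, not the paper's implementation: the gaze-event representation as (class_label, weight) pairs, the weight threshold, the word-given-class tables, and the interpolation weight interp_lambda are all hypothetical names introduced here.

from collections import defaultdict

class GazeCacheClassLM:
    """Sketch of cache-based class LM adaptation driven by gaze events.

    Assumptions (not from the paper): a gaze event is a (class_label,
    weight) pair, where the weight encodes the event's measured
    importance (e.g. fixation duration); P(word | class) tables are
    supplied externally.
    """

    def __init__(self, base_lm, word_given_class, interp_lambda=0.2):
        self.base_lm = base_lm                    # callable: base_lm(word, history) -> probability
        self.word_given_class = word_given_class  # dict: class -> {word: P(word | class)}
        self.interp_lambda = interp_lambda        # cache vs. base interpolation weight
        self.cache = []                           # sequence of (class_label, weight) gaze events

    def add_gaze_event(self, class_label, weight, threshold=0.0):
        """Append a gaze event; events at or below the threshold are
        dropped, i.e. gaze information is used selectively."""
        if weight > threshold:
            self.cache.append((class_label, weight))

    def _class_probs(self):
        """Normalise accumulated gaze-event weights into P(class | cache)."""
        totals = defaultdict(float)
        for label, weight in self.cache:
            totals[label] += weight
        z = sum(totals.values())
        return {c: w / z for c, w in totals.items()} if z else {}

    def prob(self, word, history):
        """Interpolate the class-constrained cache estimate with the base LM:
        P(w|h) = (1 - lam) * P_base(w|h) + lam * sum_c P(w|c) * P(c|cache)."""
        p_cache = sum(
            self.word_given_class.get(c, {}).get(word, 0.0) * p_c
            for c, p_c in self._class_probs().items()
        )
        return (1.0 - self.interp_lambda) * self.base_lm(word, history) + self.interp_lambda * p_cache

Routing the cache probability through classes, as in this sketch, is what constrains the redistribution of probability mass: only words belonging to recently gazed-at classes gain probability, rather than arbitrary words from the cache history.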

Details

Original language: English
Title of host publication: INTERSPEECH'13
Publication status: Published - 2013

Keywords

  • Cache, Gaze, Language model adaptation, Multimodal, Speech recognition