Selective use of gaze information to improve ASR performance in noisy environments by cache-based class language model adaptation
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
- Department of Electronic, Electrical and Computer Engineering, University of Birmingham, Birmingham, B15 2TT, U.K.
Using information from a person's gaze has the potential to improve ASR performance in acoustically noisy environments. However, previous work has resulted in relatively minor improvements. A cache-based language model adaptation framework is presented where the cache contains a sequence of gaze events, classes represent visual context and task, and the relative importance of gaze events is considered. An implementation in a full ASR system is described and evaluated on a set of gaze-speech data recorded in both a quiet and an acoustically noisy environment. Results demonstrate that selectively using gaze events based on measured characteristics significantly increases the WER improvement on speech recorded in the noisy environment from 6.34% to 10.58%. This work highlights the need to selectively use information from gaze, to constrain the redistribution of probability mass between words during adaptation via classes, and to evaluate the system with gaze and speech collected in environments that represent its real-world utility. Copyright © 2013 ISCA.
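The abstract describes a cache-based class language model adaptation scheme in which gaze events populate a cache and probability mass is redistributed only within word classes. The paper's exact equations are not given here, so the following is a minimal sketch of one standard formulation of that idea: a base unigram distribution is interpolated with a cache distribution conditioned on each word's class, so that mass shifts toward recently gazed-at words without leaking across class boundaries. The function name, interpolation weight `lam`, and the toy vocabulary are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def adapt_unigram(p_base, cache_words, word_class, lam=0.2):
    """Interpolate a base unigram LM with a cache distribution built from
    gaze events, redistributing probability mass only *within* each class.
    (Assumed formulation; not the paper's exact model.)"""
    # Per-class and per-word counts from the cache of recent gaze events
    class_counts = Counter(word_class[w] for w in cache_words if w in word_class)
    word_counts = Counter(w for w in cache_words if w in word_class)

    p_adapt = {}
    for w, p in p_base.items():
        c = word_class.get(w)
        if c is not None and class_counts[c] > 0:
            # P(w | class, cache): within-class cache distribution
            p_cache = word_counts[w] / class_counts[c]
            # Total base-model mass of this class stays fixed; only the
            # within-class distribution shifts toward cached words
            class_mass = sum(p_base[v] for v in p_base if word_class.get(v) == c)
            p_adapt[w] = (1 - lam) * p + lam * p_cache * class_mass
        else:
            # Words outside any cached class keep their base probability
            p_adapt[w] = p
    # Renormalise to guard against floating-point drift
    z = sum(p_adapt.values())
    return {w: p / z for w, p in p_adapt.items()}
```

Because the cache distribution is scaled by the class's own base-model mass, the adapted distribution still sums to one and words outside the gazed-at classes are untouched, which is the "constrained redistribution via classes" behaviour the abstract argues for.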
Title of host publication: INTERSPEECH'13
Publication status: Published - 2013