Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

External organisations

  • De Montfort University

Abstract

This paper presents a multimodal affect recognition system for interactive virtual gaming environments using eye tracking and speech signals, captured in gameplay scenarios designed to elicit controlled affective states along the arousal and valence dimensions. A Support Vector Machine is employed as the classifier to detect these affective states from both modalities. The recognition results reveal that eye tracking is superior to speech in affect detection and that the two modalities are complementary in interactive gaming applications. This suggests that it is feasible to design an accurate multimodal recognition system that detects players' affective states from the eye tracking and speech modalities in an interactive gaming environment. We emphasise the potential of integrating the proposed multimodal system into game interfaces to enhance interaction and provide an adaptive gaming experience.
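The classification step described in the abstract can be sketched as follows. This is a minimal illustration, assuming hypothetical feature vectors and label encoding: the paper's actual eye-tracking and speech features, the arousal/valence label scheme, and the SVM configuration are not specified here, so placeholder data and a common RBF-kernel default stand in for them.

```python
# Sketch of SVM-based affect classification with feature-level fusion of
# two modalities. All features, dimensions, and labels are placeholders,
# not the features used in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-sample features: e.g. pupil-diameter statistics for
# the eye-tracking modality, prosodic statistics for speech.
n_samples = 200
eye_features = rng.normal(size=(n_samples, 4))     # placeholder eye-tracking features
speech_features = rng.normal(size=(n_samples, 6))  # placeholder speech features

# Labels: four quadrants of the arousal-valence space, encoded 0..3.
labels = rng.integers(0, 4, size=n_samples)

# Feature-level fusion: concatenate both modalities into one vector.
fused = np.hstack([eye_features, speech_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")  # RBF kernel is a common default choice
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"fused-modality accuracy: {acc:.2f}")
```

Training one SVM per modality and comparing accuracies would mirror the paper's per-modality comparison, while the concatenation above corresponds to one simple way of combining the complementary modalities.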

Details

Original language: English
Title of host publication: ICMI 2017 - Proceedings of the 19th ACM International Conference on Multimodal Interaction
Publication status: Published - 3 Nov 2017
Event: 19th ACM International Conference on Multimodal Interaction, ICMI 2017 - Glasgow, United Kingdom
Duration: 13 Nov 2017 - 17 Nov 2017

Conference

Conference: 19th ACM International Conference on Multimodal Interaction, ICMI 2017
Country: United Kingdom
City: Glasgow
Period: 13/11/17 - 17/11/17