
Speech-Based Emotion Recognition and Classification Integrating a CNN and BiLSTM Network

Fatima Uroosa, Asim Abbas*, Muhammad Tayyab Zamir, Grigori Sidorov

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Speech emotion recognition (SER), which uses speech sounds to identify a speaker's emotional state, has gained significant interest in recent years. Accurately recognizing subtle emotional variations in speech, such as distinguishing closely related emotional states, remains a challenging problem due to the variability of speech signals and the acoustic similarity among emotion classes across different speakers and linguistic contexts. This paper proposes a hybrid deep learning model that integrates a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both spectral and temporal features of speech. Log Mel-frequency spectral coefficients (MFSC) are used as input features to provide discriminative spectral representations, while the BiLSTM layer models long-range temporal dependencies in the speech signal. The proposed framework is evaluated on the Toronto Emotional Speech Set (TESS), a publicly available dataset of acted emotional speech containing seven emotion classes. The experimental results show that the hybrid CNN-BiLSTM achieves an overall classification accuracy of 96.36%, significantly outperforming baseline models including a GRU (91.84%), a BiLSTM (93.12%), and a CNN-GRU (94.67%). These findings highlight the effectiveness of combining spectral and temporal modeling for improved SER performance. Furthermore, the CNN-BiLSTM approach offers a computationally efficient and data-efficient alternative to transformer-based models while still capturing both spatial and temporal emotional cues in speech, making it suitable for real-time and resource-constrained applications.
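The abstract does not specify layer configurations, so the following is a rough PyTorch sketch of the kind of hybrid architecture it describes: a CNN front-end over log-MFSC input followed by a BiLSTM and a seven-way classifier. All filter counts, the 40-band mel resolution, and the hidden size are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Hybrid CNN-BiLSTM for speech emotion recognition (sketch).

    Input: log Mel-frequency spectral coefficients (MFSC) shaped
    (batch, 1, n_mels, n_frames). Hyperparameters are assumptions.
    """

    def __init__(self, n_mels=40, n_classes=7, hidden=128):
        super().__init__()
        # CNN front-end: extracts local spectral patterns from the MFSC map
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),          # pool along the mel axis only
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        feat_dim = 64 * (n_mels // 4)      # channels x pooled mel bins
        # BiLSTM: models long-range temporal dependencies across frames
        self.lstm = nn.LSTM(feat_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        z = self.cnn(x)                        # (batch, 64, n_mels//4, n_frames)
        z = z.permute(0, 3, 1, 2).flatten(2)   # (batch, n_frames, feat_dim)
        out, _ = self.lstm(z)                  # (batch, n_frames, 2*hidden)
        return self.fc(out[:, -1])             # logits from the last frame

model = CNNBiLSTM()
logits = model(torch.randn(2, 1, 40, 100))     # 2 clips, 40 mel bins, 100 frames
```

The time axis is preserved through pooling so the BiLSTM sees one feature vector per frame; only the mel axis is downsampled by the CNN.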

Original language: English
Title of host publication: Language Resources and Evaluation Conference (LREC-2026)
Subtitle of host publication: Workshop on Computational Affective Science
Number of pages: 9
Publication status: Published - 3 Mar 2026
Event: 1st Workshop on Computational Affective Science - Palma de Mallorca, Spain
Duration: 16 May 2026 → 16 May 2026

Workshop

Workshop: 1st Workshop on Computational Affective Science
Abbreviated title: CAS 2026
Country/Territory: Spain
City: Palma de Mallorca
Period: 16/05/26 → 16/05/26

Bibliographical note

Not yet published as of 30/03/2026.

Keywords

  • Speech emotion recognition
  • deep learning
  • convolutional neural network (CNN)
  • Bidirectional long short-term memory (BiLSTM)
  • Hybrid CNN–BiLSTM model
  • Log Mel-Frequency Spectral Coefficients (MFSC)
