Abstract
Speech emotion recognition (SER), which identifies a speaker's emotional state from speech sounds, has attracted significant interest in recent years. Accurately recognizing subtle emotional variations in speech, such as distinguishing closely related emotional states, remains a challenging problem due to the variability of speech signals and the acoustic similarity among emotion classes across speakers and linguistic contexts. This paper proposes a hybrid deep learning model that integrates a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both the spectral and the temporal characteristics of speech. Log Mel-frequency spectral coefficients (MFSC) serve as input features, providing discriminative spectral representations, while the BiLSTM layers model long-range temporal dependencies in the speech signal. The proposed framework is evaluated on the Toronto Emotional Speech Set (TESS), a publicly available dataset of acted emotional speech covering seven emotion classes. Experimental results show that the hybrid CNN–BiLSTM achieves an overall classification accuracy of 96.36%, significantly outperforming baseline models including GRU (91.84%), BiLSTM (93.12%), and CNN–GRU (94.67%). These findings highlight the effectiveness of combining spectral and temporal modeling for improved SER performance. Furthermore, the CNN–BiLSTM approach offers a computationally and data-efficient alternative to transformer-based models while still capturing both spatial and temporal emotional cues in speech, making it suitable for real-time and resource-constrained applications.
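To make the pipeline described in the abstract concrete, the sketch below traces how a log-Mel (MFSC) spectrogram flows through a CNN front-end into a BiLSTM and a seven-way classifier. All layer sizes (input frames, Mel bands, number of conv blocks, channel and hidden widths) are illustrative assumptions, not the configuration reported in the paper; the point is the shape arithmetic, i.e. how convolution and pooling reduce the time–frequency grid and how the remaining time axis becomes the BiLSTM sequence dimension.

```python
# Hypothetical configuration (not the authors' exact model):
# a (frames x mels) log-Mel spectrogram passes through conv blocks,
# then the reduced time axis is fed to a BiLSTM, then a 7-class softmax.

def conv2d_out(size, kernel=3, stride=1, pad=1):
    """Output length of one spatial dimension after a 2-D convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output length of one spatial dimension after max pooling."""
    return (size - kernel) // stride + 1

def cnn_bilstm_shapes(n_frames=300, n_mels=40, n_conv_blocks=2,
                      channels=64, lstm_hidden=128, n_classes=7):
    t, f = n_frames, n_mels
    for _ in range(n_conv_blocks):
        # 3x3 "same"-padded conv keeps the size; 2x2 max-pool halves it.
        t, f = conv2d_out(t), conv2d_out(f)
        t, f = pool_out(t), pool_out(f)
    # Flatten frequency x channels into one feature vector per timestep,
    # keeping the (reduced) time axis as the BiLSTM sequence dimension.
    seq_len, feat_dim = t, f * channels
    bilstm_dim = 2 * lstm_hidden  # forward + backward hidden states
    return {"cnn_out": (seq_len, feat_dim),
            "bilstm_out": (seq_len, bilstm_dim),
            "logits": (n_classes,)}

shapes = cnn_bilstm_shapes()
print(shapes)
# With these assumed sizes, a 300-frame, 40-mel input yields 75 timesteps
# of 640-dim CNN features, 256-dim BiLSTM outputs, and 7 class logits.
```

The key design point this illustrates is the division of labor the abstract describes: the CNN compresses the spectral axis into per-frame feature vectors, and the BiLSTM then models long-range temporal dependencies over the remaining sequence in both directions.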
| Original language | English |
|---|---|
| Title of host publication | Language Resources and Evaluation Conference (LREC-2026) |
| Subtitle of host publication | Workshop on Computational Affective Science |
| Number of pages | 9 |
| Publication status | Published - 3 Mar 2026 |
| Event | 1st Workshop on Computational Affective Science, Palma de Mallorca, Spain (16 May 2026 → 16 May 2026) |
Workshop
| Workshop | 1st Workshop on Computational Affective Science |
|---|---|
| Abbreviated title | CAS 2026 |
| Country/Territory | Spain |
| City | Palma de Mallorca |
| Period | 16/05/26 → 16/05/26 |
Bibliographical note
Not yet published as of 30/03/2026.
Keywords
- Speech emotion recognition
- deep learning
- convolutional neural network (CNN)
- Bidirectional long short-term memory (BiLSTM)
- Hybrid CNN–BiLSTM Model
- Log Mel-Frequency Spectral Coefficients (MFSC)
Title
Speech-Based Emotion Recognition and Classification Integrating a CNN and BiLSTM Network