Emotion Recognition From Speech and Text using Long Short-Term Memory
Received: 2 May 2023 | Revised: 22 May 2023 | Accepted: 23 May 2023 | Online: 17 June 2023
Corresponding author: Siva Ramakrishna Jeevakala
Abstract
Everyday interactions depend on more than rational discourse; they also depend on emotional reactions. Recognizing these reactions is crucial for practical and even rational decision-making, as it helps people understand one another better and respond appropriately to how others feel. Several recent studies have focused on emotion detection and labeling, proposing different methods for categorizing feelings and detecting emotions in speech. Over the last decade, considerable emphasis has been placed on determining how emotions are conveyed through speech in social interactions. However, recognition accuracy still needs to improve because the primary temporal structure of the speech waveform is poorly exploited. This study proposes a new approach to speech emotion recognition that couples structured audio information with long short-term memory networks to fully exploit the shift in emotional content across speech segments. In addition to time-series characteristics, structural speech features extracted from the waveforms preserve the underlying relationships between layers of the actual speech. Several Long Short-Term Memory (LSTM)-based algorithms exist for identifying emotional focus over multiple blocks. The proposed method (i) reduces overhead by optimizing the standard forgetting gate, lowering the required processing time, (ii) applies an attention mechanism to both the time and feature dimensions of the LSTM's final output to extract task-related information, instead of using only the output of the previous iteration as in the standard technique, and (iii) employs a strategy to locate spatial characteristics in the final output of the LSTM, rather than relying on the findings of the previous phase of the standard method. The proposed method achieved an overall classification accuracy of 96.81%.
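As a rough illustration of the pipeline described above, the sketch below shows an LSTM classifier that applies attention over both the time and feature dimensions of the LSTM output, taking MFCC sequences as input. This is not the authors' exact architecture: the library choices (librosa, PyTorch), layer sizes, number of emotion classes, and file path are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model) of an
# LSTM-based speech emotion classifier with attention over the time and
# feature dimensions, fed with MFCC features.
import librosa
import torch
import torch.nn as nn

N_MFCC = 40       # MFCC coefficients per frame (assumption)
NUM_CLASSES = 7   # e.g. anger, disgust, fear, happiness, neutral, sadness, surprise

def extract_mfcc(path: str, sr: int = 16000) -> torch.Tensor:
    """Load a waveform and return a (time, N_MFCC) MFCC sequence."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)   # (N_MFCC, time)
    return torch.from_numpy(mfcc.T).float()                  # (time, N_MFCC)

class AttentiveLSTMClassifier(nn.Module):
    def __init__(self, n_features=N_MFCC, hidden=128, num_classes=NUM_CLASSES):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.time_attn = nn.Linear(hidden, 1)       # scores each time step
        self.feat_attn = nn.Linear(hidden, hidden)  # re-weights feature channels
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                    # x: (batch, time, n_features)
        h, _ = self.lstm(x)                  # (batch, time, hidden)
        # Attention over time: weighted sum of all frames instead of
        # keeping only the last hidden state.
        t_weights = torch.softmax(self.time_attn(h), dim=1)   # (batch, time, 1)
        context = (t_weights * h).sum(dim=1)                  # (batch, hidden)
        # Attention over features: gate each feature channel of the context.
        f_weights = torch.sigmoid(self.feat_attn(context))    # (batch, hidden)
        return self.classifier(f_weights * context)           # (batch, num_classes)

# Usage with a hypothetical audio file:
# model = AttentiveLSTMClassifier()
# mfcc = extract_mfcc("sample.wav").unsqueeze(0)   # (1, time, N_MFCC)
# emotion = model(mfcc).argmax(dim=-1)
```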
Keywords:
MFCC, LSTM, emotion recognition, speech recognition, deep learning
License
Copyright (c) 2023 Sonagiri China Venkateswarlu, Siva Ramakrishna Jeevakala, Naluguru Udaya Kumar, Pidugu Munaswamy, Dhanalaxmi Pendyala
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.