Emotion Recognition From Speech and Text using Long Short-Term Memory

Authors

  • Sonagiri China Venkateswarlu Dept. of Electronics and Communication Engineering, Institute of Aeronautical Engineering, India
  • Siva Ramakrishna Jeevakala Dept. of Electronics and Communication Engineering, Institute of Aeronautical Engineering, India
  • Naluguru Udaya Kumar Dept. of Electronics and Communication Engineering, Marri Laxman Reddy Institute of Technology and Management, India
  • Pidugu Munaswamy Dept. of Electronics and Communication Engineering, Institute of Aeronautical Engineering, India
  • Dhanalaxmi Pendyala Dept. of Electronics and Communication Engineering, Institute of Aeronautical Engineering, India
Volume: 13 | Issue: 4 | Pages: 11166-11169 | August 2023 | https://doi.org/10.48084/etasr.6004

Abstract

Everyday interactions depend not only on rational discourse but also on emotional reactions. Recognizing emotions is crucial to practical and even rational decision-making, as it helps people understand one another, share responses, and anticipate how others may feel. Several recent studies have focused on emotion detection and labeling, proposing different methods for organizing feelings and detecting emotions in speech. Over the last decade, major emphasis has been placed on how emotions are conveyed through speech in social interactions. However, recognition accuracy still needs improvement, because the primary temporal structure of the speech waveform is largely neglected. This paper proposes a speech emotion recognition approach that couples structured audio features with Long Short-Term Memory (LSTM) networks to fully exploit the shift in emotional content across phases. In addition to time-series characteristics, structural speech features extracted from the waveforms preserve the underlying connection between layers of the actual speech. Several LSTM-based algorithms identify the emotional focus over numerous blocks. The proposed method (i) reduces overhead by optimizing the standard forget gate, cutting the required processing time, (ii) applies an attention mechanism to both the time and feature dimensions of the LSTM's final output to extract task-related information, rather than using the output of the previous iteration as in the standard technique, and (iii) employs a strategy to locate spatial characteristics in the LSTM's final output, as opposed to reusing the findings of the previous phase of the regular method. The proposed method achieved an overall classification accuracy of 96.81%.
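The paper does not publish code; the following is a minimal NumPy sketch of the attention idea described in point (ii) — softmax attention applied over both the time dimension and the feature dimension of the LSTM's hidden-state outputs. All function and variable names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(H, w):
    # H: (T, D) LSTM hidden states over T frames; w: (D,) learned scoring vector.
    scores = H @ w                 # one relevance score per time step, shape (T,)
    alpha = softmax(scores)        # attention weights over time, sum to 1
    context = alpha @ H            # weighted sum of hidden states, shape (D,)
    return context, alpha

def feature_attention(H, v):
    # v: (T,) learned scoring vector over time; weights each feature dimension.
    scores = v @ H                 # one relevance score per feature, shape (D,)
    beta = softmax(scores)         # attention weights over features, sum to 1
    return H * beta, beta          # re-weighted hidden states, shape (T, D)

rng = np.random.default_rng(0)
T, D = 120, 64                     # e.g. 120 frames of MFCC-derived features
H = rng.standard_normal((T, D))    # stand-in for real LSTM outputs
ctx, alpha = temporal_attention(H, rng.standard_normal(D))
Hw, beta = feature_attention(H, rng.standard_normal(T))
```

In a full model, `ctx` (or a combination of both attended outputs) would feed a dense softmax classifier over the emotion labels; the attention weights `alpha` also make the model interpretable, showing which frames drive the prediction.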

Keywords:

MFCC, LSTM, emotion recognition, speech recognition, deep learning


How to Cite

Venkateswarlu, S.C., Jeevakala, S.R., Kumar, N.U., Munaswamy, P. and Pendyala, D. 2023. Emotion Recognition From Speech and Text using Long Short-Term Memory. Engineering, Technology & Applied Science Research. 13, 4 (Aug. 2023), 11166–11169. DOI: https://doi.org/10.48084/etasr.6004.
