An Ensemble Kernelized-based Approach for Precise Emotion Recognition in Depressed People
Received: 21 August 2024 | Revised: 21 September 2024, 10 October 2024, and 24 October 2024 | Accepted: 26 October 2024 | Online: 2 December 2024
Corresponding author: Bidyutlata Sahoo
Abstract
The COVID-19 pandemic created serious challenges for mental health worldwide, with a marked rise in depression cases, making rapid and accurate assessment of emotional states increasingly important. Because nonverbal cues such as facial expressions are central indicators of emotional state, facial expression recognition technology is a key tool for this task. To address this need, this study proposes a new approach to emotion recognition using the Ensemble Kernelized Learning System (EKLS). The study uses the Extended Cohn-Kanade (CK+) dataset, augmented with depression-related images and videos from the COVID-19 era. Each image and video was manually labeled with the corresponding emotion, creating a robust dataset for training and testing the proposed model. Facial feature detection techniques, combined with key facial measurements, were used to support emotion recognition. EKLS is a flexible machine-learning framework that combines several techniques, including Support Vector Machines (SVMs), Self-Organizing Maps (SOMs), kernel methods, Random Forest (RF), and Gradient Boosting (GB). The ensemble model was thoroughly trained and fine-tuned to ensure high accuracy and consistency. EKLS is a powerful tool for real-time emotion recognition in both images and videos, achieving an accuracy of 99.82%. This study offers a practical and effective approach to emotion recognition and makes a significant contribution to the field.
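To make the ensemble idea concrete, the sketch below shows one common way to combine a kernel SVM, Random Forest, and Gradient Boosting in a stacked ensemble using scikit-learn. This is only an illustrative sketch of the general technique: the toy dataset, hyperparameters, and logistic-regression meta-learner are assumptions, not the paper's actual EKLS configuration (which also incorporates Self-Organizing Maps and operates on extracted facial measurements).

```python
# Illustrative sketch of a stacked ensemble combining a kernel SVM, Random
# Forest, and Gradient Boosting -- loosely mirroring the EKLS idea, NOT the
# authors' actual implementation. All hyperparameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for extracted facial-measurement features across 7 emotion
# classes (the real study uses CK+ plus manually labeled COVID-era media).
X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                           n_classes=7, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    # Meta-learner that combines the base models' predictions.
    final_estimator=LogisticRegression(max_iter=1000),
)
ensemble.fit(X_train, y_train)
acc = ensemble.score(X_test, y_test)
print(f"held-out accuracy: {acc:.3f}")
```

Stacking lets the meta-learner weight each base model's strengths per class, which is one plausible route to the accuracy gains an ensemble like EKLS reports over any single classifier.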
Keywords:
COVID-19, depression, facial emotion recognition, ensemble learning, EKLS, machine learning, mental health
References
A. Gupta, V. Jain, and A. Singh, "Stacking Ensemble-Based Intelligent Machine Learning Model for Predicting Post-COVID-19 Complications," New Generation Computing, vol. 40, no. 4, pp. 987–1007, Dec. 2022.
H. M. Al-Dabbas, R. A. Azeez, and A. E. Ali, "Two Proposed Models for Face Recognition: Achieving High Accuracy and Speed with Artificial Intelligence," Engineering, Technology & Applied Science Research, vol. 14, no. 2, pp. 13706–13713, Apr. 2024.
R. Kumar, S. Mukherjee, T. M. Choi, and L. Dhamotharan, "Mining voices from self-expressed messages on social-media: Diagnostics of mental distress during COVID-19," Decision Support Systems, vol. 162, Nov. 2022, Art. no. 113792.
A. Khattak, M. Z. Asghar, M. Ali, and U. Batool, "An efficient deep learning technique for facial emotion recognition," Multimedia Tools and Applications, vol. 81, no. 2, pp. 1649–1683, Jan. 2022.
C. Zhang and L. Xue, "Autoencoder With Emotion Embedding for Speech Emotion Recognition," IEEE Access, vol. 9, pp. 51231–51241, 2021.
V. Ramachandra and H. Longacre, "Unmasking the psychology of recognizing emotions of people wearing masks: The role of empathizing, systemizing, and autistic traits," Personality and Individual Differences, vol. 185, Feb. 2022, Art. no. 111249.
B. Yang, J. Wu, and G. Hattori, "Facial expression recognition with the advent of human beings all behind face masks MUM2020," in Proceedings of the 2020 ACM International Conference on Multimedia (MUM2020), 2020.
A. Pise, H. Vadapalli, and I. Sanders, "Facial emotion recognition using temporal relational network: an application to E-learning," Multimedia Tools and Applications, vol. 81, no. 19, pp. 26633–26653, Aug. 2022.
S. Varma, M. Shinde, and S. S. Chavan, "Analysis of PCA and LDA Features for Facial Expression Recognition Using SVM and HMM Classifiers," in Techno-Societal 2018, 2020, pp. 109–119.
C. V. R. Reddy, U. S. Reddy, and K. V. K. Kishore, "Facial Emotion Recognition Using NLPCA and SVM," Traitement du Signal, vol. 36, no. 1, pp. 13–22, Apr. 2019.
M. Sajjad, M. Nasir, F. U. M. Ullah, K. Muhammad, A. K. Sangaiah, and S. W. Baik, "Raspberry Pi assisted facial expression recognition framework for smart security in law-enforcement services," Information Sciences, vol. 479, pp. 416–431, Apr. 2019.
P. V. Rouast, M. T. P. Adam, and R. Chiong, "Deep Learning for Human Affect Recognition: Insights and New Developments," IEEE Transactions on Affective Computing, vol. 12, no. 2, pp. 524–543, Apr. 2021.
D. K. Jain, P. Shamsolmoali, and P. Sehdev, "Extended deep neural network for facial emotion recognition," Pattern Recognition Letters, vol. 120, pp. 69–74, Apr. 2019.
Z. Yu, G. Liu, Q. Liu, and J. Deng, "Spatio-temporal convolutional features with nested LSTM for facial expression recognition," Neurocomputing, vol. 317, pp. 50–57, Nov. 2018.
J. Cai, O. Chang, X.-L. Tang, C. Xue, and C. Wei, "Facial Expression Recognition Method Based on Sparse Batch Normalization CNN," in 2018 37th Chinese Control Conference (CCC), Wuhan, China, Jul. 2018, pp. 9608–9613.
D. H. Kim, W. J. Baddar, J. Jang, and Y. M. Ro, "Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition," IEEE Transactions on Affective Computing, vol. 10, no. 2, pp. 223–236, Apr. 2019.
S. J. Park, B.-G. Kim, and N. Chilamkurti, "A Robust Facial Expression Recognition Algorithm Based on Multi-Rate Feature Fusion Scheme," Sensors, vol. 21, no. 21, Jan. 2021, Art. no. 6954.
S. A. Hussein, A. E. R. S. Bayoumi, and A. M. Soliman, "Automated detection of human mental disorder," Journal of Electrical Systems and Information Technology, vol. 10, no. 1, Feb. 2023, Art. no. 9.
T. D. Pham, M. T. Duong, Q. T. Ho, S. Lee, and M. C. Hong, "CNN-Based Facial Expression Recognition with Simultaneous Consideration of Inter-Class and Intra-Class Variations," Sensors, vol. 23, no. 24, Jan. 2023, Art. no. 9658.
S. Kanjanawattana, P. Kittichaiwatthana, K. Srivisut, and P. Praneetpholkrang, "Deep Learning-Based Emotion Recognition through Facial Expressions," Journal of Image and Graphics, pp. 140–145, Jun. 2023.
D. Hebri, R. Nuthakki, A. K. Digal, K. G. S. Venkatesan, S. Chawla, and C. R. Reddy, "Effective Facial Expression Recognition System Using Machine Learning," EAI Endorsed Transactions on Internet of Things, vol. 10, Mar. 2024.
A. B. Miled, M. A. Elhossiny, M. A. I. Elghazawy, A. F. A. Mahmoud, and F. A. Abdalla, "Enhanced Chaos Game Optimization for Multilevel Image Thresholding through Fitness Distance Balance Mechanism," Engineering, Technology & Applied Science Research, vol. 14, no. 4, pp. 14945–14955, Aug. 2024.
T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), Grenoble, France, 2000, pp. 46–53.
P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, San Francisco, CA, USA, Jun. 2010, pp. 94–101.
License
Copyright (c) 2024 Bidyutlata Sahoo, Arpita Gupta
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.