A Real-Time Deep Learning Framework for Classroom Facial Expression Recognition: Performance Optimization and Model Evaluation
Corresponding author: Shahrulniza Musa
Abstract
Facial expressions indicate a person’s affective state and can be a significant determinant of cognitive performance. This study proposes a Facial Expression Recognition System (FERS) to detect and analyze students’ emotions in real time, thereby providing teachers with actionable insights. The system was trained and evaluated using eight pretrained models on the CK+ dataset, and a comparative analysis showed that the Xception model achieved the highest emotion-classification accuracy. To improve performance, the grayscale images in the CK+ dataset were enhanced and used as input to an Xception-based Convolutional Neural Network (CNN) that employs 3×3 Conv2D filters with ReLU activation and same-padded convolutional layers for feature extraction. The model demonstrated excellent performance, achieving an accuracy of 99.34% and, under macro averaging, a precision of 90%, a recall of 87%, and an F1-score of 88%, confirming the system's reliability and efficiency. In conclusion, the proposed Xception-based CNN FERS can accurately recognize students’ emotions, allowing teachers to monitor students' moods in the classroom.
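As a note on the reported metrics, macro averaging computes precision, recall, and F1-score separately for each emotion class and then takes their unweighted mean, so that rare classes count as much as frequent ones. A minimal pure-Python sketch of this computation (the class labels and predictions below are illustrative only, not the paper's data):

```python
def macro_metrics(y_true, y_pred):
    """Macro-averaged precision, recall, and F1: per-class scores, unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Illustrative labels for three emotion classes (hypothetical data)
y_true = ["happy", "sad", "angry", "happy", "sad", "angry"]
y_pred = ["happy", "sad", "happy", "happy", "angry", "angry"]
p, r, f = macro_metrics(y_true, y_pred)
```

In practice the same values are obtained with `sklearn.metrics.precision_recall_fscore_support(..., average="macro")`; the hand-rolled version above only makes the averaging explicit.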
Keywords:
facial emotion recognition, pretrained models, deep learning, Xception, transfer learning
License
Copyright (c) 2026 Shardha Nand, Siti Haryani Shaikh Ali, Shahrulniza Musa, Mazliham Mohd Su’ud

This work is licensed under a Creative Commons Attribution 4.0 International License.
