Performance Evaluation of Learning Classifiers of Children Emotions using Feature Combinations in the Presence of Noise
Recognition of emotions from speech utterances has been studied in a number of languages and applied in various domains. This paper makes use of a corpus of spoken utterances recorded in Urdu, covering different emotions expressed by normal and special children. The performance of learning classifiers is evaluated, in terms of classification accuracy, with prosodic features, spectral features, and their combinations, treating the utterances of children with autism spectrum disorder (ASD) as noise. The experimental results reveal that prosodic features yield significantly higher classification accuracy than spectral features for ASD children across different classifiers, whereas combinations of prosodic features achieve substantial accuracy for ASD children with the J48 and rotation forest classifiers. Pitch and formant features, combined with MFCC and LPCC, show considerable classification accuracy for special (ASD) children with different classifiers.
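The evaluation protocol described above — comparing the accuracy of learning classifiers on prosodic features, spectral features, and their combination — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses synthetic feature vectors in place of the Urdu utterance corpus, and scikit-learn's `DecisionTreeClassifier` as a stand-in for Weka's J48 (both implement C4.5-style decision trees). The feature dimensions and class separations are hypothetical.

```python
# Hedged sketch: comparing cross-validated classification accuracy of
# prosodic vs. spectral feature sets and their combination, on synthetic
# data standing in for the emotion-labeled Urdu utterance corpus.
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # analogue of Weka's J48 (C4.5)
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n = 200
labels = rng.randint(0, 4, n)  # four hypothetical emotion classes

# Hypothetical feature matrices: prosodic (e.g. pitch, formants, energy)
# and spectral (e.g. MFCC / LPCC coefficients); the class-dependent shift
# makes the synthetic classes partially separable.
prosodic = rng.randn(n, 6) + labels[:, None] * 0.5
spectral = rng.randn(n, 24) + labels[:, None] * 0.2

for name, X in [("prosodic", prosodic), ("spectral", spectral),
                ("combined", np.hstack([prosodic, spectral]))]:
    acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, labels,
                          cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```

In a real experiment, the feature matrices would be extracted from the recorded utterances (e.g. with a speech-analysis toolkit), and additional classifiers such as rotation forest would be evaluated alongside the decision tree.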
Keywords: spoken utterances, special children, learning classifiers, noise, features