Mental Health State Classification Using Facial Emotion Recognition and Detection

Adel Aref Ali Al-zanam (1), Omer Hussein Abdou Elsayed Hussein Alhomery (2), Choo Peng Tan (3)
(1, 2, 3) Faculty of Information Science and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75450 Bukit Beruang, Melaka, Malaysia
How to cite (IJASEIT):
Al-zanam, Adel Aref Ali, et al. “Mental Health State Classification Using Facial Emotion Recognition and Detection”. International Journal on Advanced Science, Engineering and Information Technology, vol. 13, no. 6, Dec. 2023, pp. 2274-81, doi:10.18517/ijaseit.13.6.19055.
Analyzing and understanding emotion offers insight into a person's attitudes and behavior. From a person's emotions, their mental health state can be inferred, which can assist the medical field in classifying whether a person is mentally stable. Facial emotion recognition is a field of computer vision that relies on convolutional neural networks (ConvNets) for training and inference. ConvNets and other machine learning algorithms have evolved to scale to larger datasets, notably through the introduction of deeper architectures such as the Visual Geometry Group Network (VGGNet). This study therefore presents a mental health state classification approach based on facial emotion recognition. The methodology comprises several interconnected components: preprocessing, feature extraction using Principal Component Analysis (PCA) and VGGNet, and classification using Support Vector Machines (SVM) and a Multilayer Perceptron (MLP). Multiple model combinations are evaluated on the FER2013 dataset, and the best-performing model is employed for mental health state classification. The best model, which combines VGGNet feature extraction with SVM classification, achieved an accuracy of 66%, demonstrating the effectiveness of the proposed methodology. By leveraging facial emotion recognition and machine learning techniques, the study aims to provide an effective method for assessing mental health states.
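For illustration, the Python sketch below shows one plausible form of the best-performing combination reported in the abstract (VGGNet features fed to an SVM). It is a minimal sketch under stated assumptions, not the authors' exact implementation: it assumes Keras' ImageNet-pretrained VGG16 as the feature extractor, scikit-learn's SVC with an RBF kernel as the classifier, and FER2013 images already loaded as 48x48 grayscale arrays with seven emotion labels.

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

def extract_vgg_features(gray_images):
    # VGG16 expects 3-channel input, so the single grayscale plane is
    # repeated across three channels before ImageNet-style preprocessing.
    rgb = np.repeat(gray_images[..., np.newaxis], 3, axis=-1).astype("float32")
    # include_top=False with global average pooling yields one 512-dim
    # feature vector per face, which the SVM then classifies.
    extractor = VGG16(weights="imagenet", include_top=False, pooling="avg",
                      input_shape=(48, 48, 3))
    return extractor.predict(preprocess_input(rgb), verbose=0)

# X_train, X_test: hypothetical FER2013 splits as (N, 48, 48) uint8 arrays;
# y_train, y_test: the corresponding 7-class emotion labels.
# clf = SVC(kernel="rbf").fit(extract_vgg_features(X_train), y_train)
# print(accuracy_score(y_test, clf.predict(extract_vgg_features(X_test))))

Swapping extract_vgg_features for a PCA projection (sklearn.decomposition.PCA) or the SVC for an MLPClassifier reproduces the other feature-extractor/classifier combinations the study compares.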

This work is licensed under a Creative Commons Attribution 4.0 International License.
