Revista ELECTRO
Vol. 46 – Año 2024
Artículo
TÍTULO
Reconocimiento de Emociones Mediante Imágenes de Expresiones Faciales Basado en Arquitectura de Red Siamesa
AUTORES
Amparán-Ortega, N.C.; Corral-Sáenz, A.D.; Ramírez-Quintana, J.A.
RESUMEN
El interés por el estado emocional de una persona se ha incrementado en gran medida en el sector salud, debido a las consecuencias que puede tener un estado emocional deteriorado no atendido. Por lo tanto, en este trabajo se propone un modelo de red profunda con arquitectura siamesa para el reconocimiento de emociones humanas por medio de imágenes, capaz de funcionar sobre un sistema embebido de manera rápida y portable, sin complicaciones de incompatibilidad de paquetes ni una pobre optimización. Para desarrollar el modelo final de red siamesa se realizó una etapa previa de entrenamientos con pares de emociones que cumplieran estas características: que fueran similares entre sí, que fueran medianamente similares y que fueran distintas. El número de emociones se fue incrementando hasta que el modelo mantuviera una precisión general por arriba del 70%. Esta restricción se logró alcanzar con 5 emociones. El modelo final resultó tener un desempeño de 83.83% de precisión.
Palabras Clave: Clasificación de emociones, red siamesa, red convolucional.
ABSTRACT
The interest in a person's emotional state has increased significantly in the health sector due to the consequences that can arise from an unattended deteriorated emotional state. Therefore, this paper proposes a deep network model with a Siamese architecture for recognizing human emotions from face images, capable of running on an embedded system quickly and portably, without complications related to package incompatibility or poor optimization. The final Siamese network model was developed after a preliminary stage in which training was carried out with pairs of emotions selected according to these characteristics: similar to each other, moderately similar, and distinct. The number of emotions was gradually increased until the model maintained an overall accuracy above 70%. This threshold was reached with five emotions. The final model attained a performance of 83.83% accuracy.
Keywords: Emotion classification, Siamese network, convolutional network.
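NOTA DEL EDITOR / EDITOR'S NOTE
The abstract describes a Siamese architecture trained on pairs of face images, but this page does not give the network's exact configuration. As orientation only, the sketch below shows a minimal Siamese pair-comparison model in Keras; the 48x48 grayscale input size, layer widths, L1-distance head, and binary "same emotion / different emotion" objective are illustrative assumptions, not the authors' reported design.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_embedding_net(input_shape=(48, 48, 1)):
    # Small CNN branch shared (weight-tied) by both inputs of the Siamese pair.
    # Input size and layer widths are assumptions for illustration only.
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    return Model(inp, x, name="embedding")

def build_siamese(input_shape=(48, 48, 1)):
    # Both face images pass through the same embedding branch; the absolute
    # difference of the two embeddings feeds a sigmoid that scores whether
    # the images show the same emotion (1) or different emotions (0).
    embedding = build_embedding_net(input_shape)
    img_a = layers.Input(shape=input_shape)
    img_b = layers.Input(shape=input_shape)
    emb_a, emb_b = embedding(img_a), embedding(img_b)
    l1_dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
    same_emotion = layers.Dense(1, activation="sigmoid")(l1_dist)
    return Model([img_a, img_b], same_emotion, name="siamese")

model = build_siamese()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Under this assumed setup, training batches would be pairs labeled 1 (same emotion) or 0 (different emotions), e.g. model.fit([imgs_a, imgs_b], labels, ...), with pairs drawn from similar, moderately similar, and distinct emotion combinations as described in the abstract.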
REFERENCIAS
[1] S. Mo, J. Niu, Y. Su y S. Das, A novel feature set for video emotion recognition. Neurocomputing, vol. 291, pp. 11-20, 2018. https://doi.org/10.1016/j.neucom.2018.02.052
[2] J. Yan, W. Zheng, Z. Cui, C. Tang, T. Zhang y Y. Zong, Multi-cue fusion for emotion recognition in the wild. Neurocomputing, vol. 309, pp. 27-35, 2018. https://doi.org/10.1016/j.neucom.2018.03.068
[3] N. Jain, S. Kumar, A. Kumar, P. Shamsolmoali y M. Zareapoor, Hybrid deep neural networks for face emotion recognition. Pattern Recognition Letters, vol. 115, pp. 101-106, 2018. https://doi.org/10.1016/j.patrec.2018.04.010
[4] R. Favaretto, P. Knob, S. R. Musse, F. Vilanova y Â. Costa, Detecting personality and emotion traits in crowds from video sequences. Machine Vision and Applications, vol. 30, no. 5, pp. 999-1012, 2018. https://doi.org/10.1007/s00138-018-0979-y
[5] S. Kumar, M. Bhuyan, B. Lovell y Y. Iwahori, Hierarchical uncorrelated multiview discriminant locality preserving projection for multiview facial expression recognition. Journal of Visual Communication and Image Representation, vol. 54, pp. 171-181, 2018. https://doi.org/10.1016/j.jvcir.2018.04.013
[6] M. Ali, A. Mosa, F. Al Machot y K. Kyamakya, Emotion Recognition Involving Physiological and Speech Signals: A Comprehensive Review. Studies in Systems, Decision and Control, vol. 109, pp. 287-302, 2017. https://doi.org/10.1007/978-3-319-58996-1_13
[7] P. Tarnowski, M. Kołodziej, A. Majkowski y R. Rak, Emotion recognition using facial expressions. Procedia Computer Science, vol. 108, pp. 1175-1184, 2017. https://doi.org/10.1016/j.procs.2017.05.025
[8] J. Yan, W. Zheng, Z. Cui, C. Tang, T. Zhang y Y. Zong, Multi-cue fusion for emotion recognition in the wild. Neurocomputing, vol. 309, pp. 27-35, 2018. https://doi.org/10.1016/j.neucom.2018.03.068
[9] Z. Yu, G. Liu, Q. Liu y J. Deng, Spatio-temporal convolutional features with nested LSTM for facial expression recognition. Neurocomputing, vol. 317, pp. 50-57, 2018. https://doi.org/10.1016/j.neucom.2018.07.028
[10] N. Jain, S. Kumar, A. Kumar, P. Shamsolmoali y M. Zareapoor, Hybrid deep neural networks for face emotion recognition. Pattern Recognition Letters, vol. 115, pp. 101-106, 2018. https://doi.org/10.1016/j.patrec.2018.04.010
[11] I. Goodfellow et al., Challenges in Representation Learning: A Report on Three Machine Learning Contests. de Lecture Notes in Computer Science, Berlin, Heidelberg: Springer, 2013, pp. 117-124. https://doi.org/10.1007/978-3-642-42051-1_16
[12] D. Lundqvist, A. Flykt y A. Öhman, The Karolinska Directed Emotional Faces – KDEF. Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet, Suecia, 1998.
[13] Y. Gao et al., Calculating Color Differences of Images via Siamese Neural Network. de 2024 IEEE International Symposium on Circuits and Systems (ISCAS), Singapore, Singapore, 2024.
[14] R. Vachhani, S. Mandal y B. Gohel, Low-Resolution Face Recognition Using Multi-Stream CNN in Siamese Framework. de 2023 Seventh International Conference on Image Information Processing (ICIIP), Solan, India, 2023. https://doi.org/10.1109/ICIIP61524.2023.10537740
[15] S. Ravichandiran, Hands-On Deep Learning Algorithms with Python. 1st ed. Birmingham: Packt, 2019, pp. 437-456.
CITAR COMO:
Amparán-Ortega, N.C.; Corral-Sáenz, A.D.; Ramírez-Quintana, J.A., "Reconocimiento de Emociones Mediante Imágenes de Expresiones Faciales Basado en Arquitectura de Red Siamesa", Revista ELECTRO, Vol. 46, 2024, pp. 351-356.