Human Brain Judgment and Automated Classification of Masked Facial Expressions

  • Koji Kashihara, Ritsumeikan University
  • Mizuki Shinguu, Ritsumeikan University
Keywords: facial expressions, face masks, brain activity, classifiers

Abstract

We investigated brain activity in response to facial expressions on faces wearing masks. N170 responses at the T5 and T6 sites were synchronized with the vertex positive potential (VPP) at the Cz site, and the N170 responses increased under the masked-face condition, which may be associated with amodal completion. We then tested a facial emotion recognizer (FER) as a general classifier, as well as classifiers specifically built on convolutional neural networks (CNNs), for predicting masked facial expressions. Although the FER accuracies were substantially lower for Japanese faces with masks than for those without, the specifically trained CNN classifier improved accuracy under the masked conditions.
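As an illustration of the kind of general-purpose classifier described above, the sketch below runs a pretrained facial emotion recognizer on a masked-face image. It assumes the open-source Python "fer" package (which bundles a pretrained emotion CNN and an optional MTCNN face detector) rather than the authors' exact pipeline, and the image path is a placeholder.

# A minimal sketch, not the authors' exact pipeline: it assumes the open-source
# Python "fer" package, which wraps a pretrained emotion CNN and an optional
# MTCNN face detector. The image path below is a placeholder.
import cv2
from fer import FER

detector = FER(mtcnn=True)               # MTCNN for face detection, CNN for emotion scores
image = cv2.imread("masked_face.jpg")    # placeholder path; OpenCV loads a BGR image
faces = detector.detect_emotions(image)  # list of detected faces with per-emotion probabilities

for face in faces:
    # Each entry holds a bounding box and a probability for each basic emotion
    # (angry, disgust, fear, happy, sad, surprise, neutral).
    print(face["box"], face["emotions"])

Running such a general recognizer on matched masked and unmasked images of the same expression is one way to quantify the accuracy drop reported for the general classifier.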


Published: 2024-06-22
Section: Technical Papers