Facial expression recognition is a topic that has been frequently addressed
since the rise of Artificial Intelligence. From the point of view of
human-computer interaction (HCI), there is also increasing
interest, as the emotions conveyed through facial expressions carry
a great deal of information in non-verbal communication.
Nowadays, neural networks are among the most widely used
computational learning systems for recognizing and analyzing emotions.
Generally, the main efforts are directed at increasing the performance
of machine learning (ML) models in terms of accuracy. Humans,
however, are not particularly good at distinguishing emotions. In this
work, we raise the question of whether the validation of such models
should rely only on performance measures or whether we should also
focus on trying to emulate human behavior. We attempt to give a fair answer
by performing two experiments involving both human participants
and machine learning techniques.