It is commonly assumed that there is a reliable one-to-one mapping between a certain configuration of facial movements and the specific emotional state that it supposedly signals. One common way to test this one-to-one hypothesis is to ask people to deliberately pose the facial configurations that they believe they use to express emotions. Participants are randomly sampled, without regard for their emotional expertise, and are given a single emotion word or a single, brief statement to describe each emotion category. They then deliberately pose the facial configuration that they believe they make when expressing instances of that category. Such studies routinely find that participants from different countries show moderate to strong evidence for a one-to-one mapping between an emotion category and a single facial configuration (its presumed facial expression). In Study 1, we examined the facial configurations posed by emotion experts (famous actors) who were provided with a diverse sample of richly described, context-rich scenarios. Participants inferred the emotional meaning of the scenarios, which were then grouped into categories. Systematic coding of the facial poses for each emotion category revealed little evidence for the hypothesis that each category has a diagnostic facial expression. Instead, we observed a high degree of variability among the experts' facial poses for any given emotion category, and little specificity for any pose. Furthermore, an unsupervised statistical analysis discovered 29 novel emotion categories with moderately consistent facial poses. In Study 2, participants were asked to infer the emotional meaning of each facial pose when presented alone or in the context of its eliciting scenario.
In fact, the majority of studies designed to test the one-to-one hypothesis ask people from various cultures to judge posed configurations of facial movements, such as a scowl (the proposed facial expression for anger) or a frown (the proposed expression for sadness), on the assumption that these facial configurations, as universal expressions of emotional states, co-evolved with the ability to recognize and read them. These studies routinely show participants one facial configuration, posed by multiple posers, for each emotion category, and their findings vary with the experimental method used. Our analyses indicated that participants' inferences about the emotional meaning of the facial poses were influenced more by the eliciting scenarios than by the physical morphology of the facial configurations. These findings strongly replicate emerging evidence that the emotional meaning of any set of facial movements may be much more variable and context-dependent than the common one-to-one view hypothesizes, a view that continues to shape the public understanding of emotion, and hence education, clinical practice, and applications in government and industry.
Although more ecologically valid research on how people actually move their faces to express emotion is urgently needed, such research has been immensely difficult without the right tools for capturing facial data in real life, automatically processing those data, and supporting their verification and analysis. We developed a system of technological tools to support the investigation of facial movements during emotional episodes in naturalistic settings, using dynamic and longitudinal facial data. We then collected, pre-processed, verified, and analyzed data from YouTube using our newly developed tools. In particular, we examined two talk show hosts and presented preliminary answers to questions that were previously very difficult to investigate.
This thesis can be read here: https://www.dropbox.com/sh/7j2ly0wlhkyqd71/AAAoN-x77FPTWRrWWnUq4u4oa?dl=0