Emotion Recognition for Human Machine Interaction
Published in Puneet Kumar, Vinod Kumar Jain, Dharminder Kumar, Artificial Intelligence and Global Society, 2021
The face is the part of the body most capable of expressing human emotion, and it plays a significant role in reflecting and predicting emotional states. The Facial Action Coding System (FACS) is the most comprehensive system for measuring the changes in facial expression that accompany emotions. These changes are tracked through atomic facial actions called Action Units (AUs). AUs help identify higher decision-making processes, including recognition of the basic emotions [17]. One such system uses a 3D camera and a Kinect device to capture facial images and form feature vectors from spatial coordinates of the face. A Multilayer Perceptron neural network then recognizes six basic emotions, achieving a 96% success rate [1]. The Viola-Jones face detection method supports an efficient emotion recognition system built on a k-NN classifier, reaching 97% accuracy, and the AdaBoost algorithm likewise contributes to building efficient facial expression classifiers for emotion recognition [16].
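As a rough illustration of the Viola-Jones plus k-NN pipeline described above, the following Python sketch detects a face with OpenCV's bundled Haar cascade and classifies a flattened face crop with scikit-learn's k-NN. It is a minimal sketch, not the authors' implementation; the training arrays `train_X` and `train_y` are hypothetical placeholders.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Viola-Jones face detection via OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_feature_vector(gray_image, size=(48, 48)):
    """Detect the largest face and return its pixels as a flat feature vector."""
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest detection
    crop = cv2.resize(gray_image[y:y + h, x:x + w], size)
    return crop.flatten().astype(np.float32) / 255.0    # normalized vector

# Hypothetical training data: one feature vector per image, labels drawn
# from the six basic emotions (e.g. "happiness", "anger", ...).
knn = KNeighborsClassifier(n_neighbors=5)
# knn.fit(train_X, train_y)
# prediction = knn.predict([face_feature_vector(test_gray_image)])
```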
Face Expression Recognition using Side Length Features Induced by Landmark Triangulation
Published in Sourav De, Paramartha Dutta, Computational Intelligence for Human Action Recognition, 2020
Avishek Nandi, Paramartha Dutta, Md Nasir
Affective analysis is one of the important aspects of Human-Computer Interaction (HCI) in the field of emotion analysis [1]. Automatic analysis of human emotions has many real-life applications in medicine, security and surveillance, monitoring systems, companion robotics, and mood-based recommendation systems. Some studies have found that, among the many channels through which humans express emotion, the face has the highest power to differentiate one emotion from another. Facial expression therefore contributes a significant amount of characteristic information for automated recognition of human emotions.

To classify the different types of emotions, some grouping of emotions into distinct classes is necessary. For this purpose, Ekman et al. proposed six classes of atomic emotions with substantial differences in appearance and emotional quality: Anger (AN), Fear (FE), Disgust (DI), Sadness (SA), Happiness (HA), and Surprise (SU) [2]. They later found that these six expression classes have almost the same arousal and variance characteristics across groups of people from different geographical locations [3]. The six basic expressions are uniquely identifiable because each class of emotion involves a unique combination of muscular movements, generating classifiable feature cues specific to that class.

Another important consideration in automatic facial expression recognition is that the quality of the detected features plays a major role in separating facial expressions into different classes. The quality of facial feature descriptors is mainly degraded by variation in camera angle, lighting conditions, low image resolution, low visibility, head rotation, face occlusion, and similar factors. A primary challenge for the affective computing community is to design an efficient and effective affect descriptor that is independent of such conditions; such a descriptor is an essential criterion for affect classification in a dynamically changing environment. The Facial Action Coding System (FACS) is one such system: it describes the movement of facial muscles to classify each facial expression and effectively associate it with its respective expression class [4]. FACS is a powerful feature descriptor, but its Action Unit (AU) detectors suffer from misclassification under occlusion and pose variation. In contrast to FACS, geometric methods directly use the location and shape of relevant facial components such as the eyes, eyebrows, nose, and mouth: the emotion-related features induced by these location points are fed into a classifier without any intermediate stage of AU detection, as illustrated in the sketch below.
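The chapter's title names side-length features induced by landmark triangulation; a minimal sketch of that idea, assuming a 2-D landmark array such as the 68 points produced by a dlib shape predictor (the landmark source itself is an assumption, not specified here), is:

```python
import numpy as np
from scipy.spatial import Delaunay

def side_length_features(landmarks):
    """Geometric features: normalized side lengths of the Delaunay
    triangulation induced by facial landmark points.

    landmarks: (n, 2) array of (x, y) landmark coordinates.
    """
    tri = Delaunay(landmarks)
    lengths = []
    for a, b, c in tri.simplices:              # vertex indices of each triangle
        for i, j in ((a, b), (b, c), (c, a)):
            lengths.append(np.linalg.norm(landmarks[i] - landmarks[j]))
    lengths = np.asarray(lengths)
    return lengths / lengths.max()             # scale-invariant normalization

# Usage (landmarks_68 is a hypothetical (68, 2) array from a detector):
# feats = side_length_features(landmarks_68)
# feats can then be fed directly to a classifier (SVM, MLP, k-NN, ...).
```

Normalizing by the maximum side length makes the feature vector invariant to face scale, which is one simple way to reduce sensitivity to camera distance among the nuisance factors listed above.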
The frequency of facial muscles engaged in expressing emotions in people with visual disabilities via cloud-based video communication
Published in Theoretical Issues in Ergonomics Science, 2023
The facial expressions were coded using the Facial Action Coding System (FACS), a comprehensive system for distinguishing all possible visible, anatomically based facial actions (Ekman and Friesen 1978; Ekman and Friesen 1976). FACS breaks down all visually discernible facial expressions into individual components of muscle movement, called Action Units. The Action Units cover various facial components (e.g., brows, eyelids, lips, head, nose, and cheeks). Statistical analyses were performed using IBM SPSS Statistics for Macintosh, version 24 (IBM Corp 2016). The coding was carried out by FACS coders who were trained following the instructions of Cohn, Ambadar, and Ekman (2007) and who had demonstrated competence in FACS coding in previous research (Kim and Sutharson 2022). Interrater reliability was assessed on a subset (10%) of the total data (Geisler and Swarts 2019; Leclerc and Dassa 2009): a coder watched the sample of facial expression videos and coded them for the presence of FACS Action Units. The interrater reliability for the coders was κ = 0.66 (95% CI: 0.32 to 1.00), p < .05, which indicates substantial agreement according to Landis and Koch (1977).
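The agreement statistic reported above is Cohen's kappa. A minimal sketch of how such a value can be computed from two coders' presence/absence judgments (the label sequences below are hypothetical, purely for illustration) is:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical judgments (1 = Action Unit present) from two FACS coders
# over the same 10% reliability subset of video clips.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above ~0.61 are conventionally
                                      # read as "substantial" (Landis & Koch 1977)
```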
Micro Expression Recognition Using Delaunay Triangulation and Voronoi Tessellation
Published in IETE Journal of Research, 2022
Rashmi Adyapady R., B. Annappa
The FACS system designed by Paul Ekman describes facial expressions by defining a set of atomic facial muscle actions known as AUs [39]. AUs play an essential role in capturing minute changes in facial movements and extracting meaningful feature information [14]. The presence of AUs, and their combinations, helps discover differences in muscle movements and map them onto the respective emotion classes. The facial actions describe local variations on the face [10]. Each AU is coded based on the contraction of a single muscle or a group of muscles, and under distinct facial expressions certain AUs show strong relationships [14]. FACS is an anatomical system that encodes various facial movements by combining distinct basic AUs, making the categories of emotions much wider; however, more attention needs to be paid to research addressing the relationship between facial actions and emotion labels [8]. This work aims to examine the subtle feature differences that exist among facial AUs, which could aid in classifying expressions efficiently. Each AU contributes differently to the classification of facial expressions, and since it is tedious and challenging to generate facial representations for vast quantities of AUs [8], this work focuses on analyzing the AUs and finding representations for only the important facial regions that most strongly support classification.
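To make the title's pairing of Delaunay triangulation and Voronoi tessellation concrete, the sketch below builds both partitions over the same facial landmark set with SciPy. This is a simplified illustration, not the paper's method; the landmark array and the region-selection indices (eyebrow and mouth points of a 68-point scheme) are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def facial_partitions(landmarks):
    """Partition the face two ways from one landmark set:
    Delaunay triangles (local shape cues between neighboring points)
    and Voronoi cells (one region of influence per landmark)."""
    tri = Delaunay(landmarks)
    vor = Voronoi(landmarks)
    return tri, vor

# Hypothetical: restrict analysis to AU-relevant regions, e.g. the
# eyebrow (17-26) and mouth (48-67) landmarks of a 68-point scheme.
# brow_mouth = landmarks_68[np.r_[17:27, 48:68]]
# tri, vor = facial_partitions(brow_mouth)
# tri.simplices -> triangle vertex indices; vor.regions -> cell boundaries
```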
A survey on computer vision techniques for detecting facial features towards the early diagnosis of mild cognitive impairment in the elderly
Published in Systems Science & Control Engineering, 2019
Zixiang Fei, Erfu Yang, David Day-Uei Li, Stephen Butler, Winifred Ijomah, Huiyu Zhou
Careful selection and extraction of facial features is extremely important: if inadequate features are selected, then even the best classifier cannot deliver satisfactory recognition results (Shan, Gong, & McOwan, 2009). Facial feature selection is related to facial muscle action units (AUs), and combinations of action units make up different emotions. Although it could be argued that there are potentially thousands of facial expressions, most of them differ only slightly. In terms of categorization, Ekman and Friesen proposed the Facial Action Coding System (FACS), which is the most widely used system for facial expression analysis (Ekman & Rosenberg, 2005). Within FACS, facial expressions are composed of 46 action units (AUs), each related to a specific set of facial muscles.
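One simple way to operationalize the idea that AU combinations make up emotions is a prototype lookup. The sketch below uses AU combinations commonly cited in the EMFACS literature for the six basic emotions; the exact sets vary between sources, so treat the table and the naive matching rule as illustrative rather than definitive.

```python
# Prototypical AU combinations for the six basic emotions, as commonly
# cited in the EMFACS literature (exact sets vary between sources).
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def match_emotion(active_aus):
    """Return the emotion whose prototype overlaps most with the
    detected AUs (a naive, illustrative matching rule)."""
    active = set(active_aus)
    return max(EMOTION_PROTOTYPES,
               key=lambda e: len(EMOTION_PROTOTYPES[e] & active))

# Example: AU6 (cheek raiser) + AU12 (lip corner puller) -> "happiness"
print(match_emotion([6, 12]))
```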