Face Recognition
Published in Yu-Jin Zhang, A Selection of Image Analysis Techniques, 2023
Face detection/tracking is the first step in face recognition. Its purpose is to determine accurately, efficiently, and reliably whether a face is present in the input image and, if so, to give its position. Face detection/tracking is also called face positioning, although in a narrower sense face positioning refers to the specific case of a single face: it assumes there is only one face in the image, even though the background may be complicated, a situation often encountered in human-computer interaction. Face detection has also been defined as follows: given an image, the goal is to determine all image regions that contain a face, regardless of the person's position, orientation, and imaging lighting conditions in 3-D space (Yang et al. 2002).
Thermal Imaging in Detection of Fever for Infectious Diseases
Published in U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer, Artificial Intelligence-Based Infrared Thermal Image Processing and Its Applications, 2023
U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer
Face detection from a video feed or a stream of images is an increasingly used technique in many fields such as crowd monitoring, security surveillance, crowd flow prediction, and face recognition. Very recently, face detection from thermal imagery has been used to identify febrile subjects based on EBT from the eye regions. In this section, we discuss one of the most widely used methods of face detection, originally proposed by Viola and Jones in 2001 (Viola and Jones, n.d.) and further researched by authors such as Wang (2014) and Huang, Shang, and Chen (2019). Owing to its computational efficiency and speed of operation, it is one of the most widely used methods for face detection in images and is also applied to face detection in video signals. Figure 7.7 shows the key components that make up the algorithm for face detection from input images.
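The computational efficiency noted above rests largely on the integral image, which lets any rectangular Haar-feature sum be evaluated with four array lookups regardless of rectangle size. A minimal sketch in Python/NumPy (the function names are illustrative, not taken from the cited works):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows then columns; ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y),
    computed from the integral image with at most four lookups."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y + h - 1, x + w - 1]
    return d - b - c + a

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle Haar feature: difference between adjacent left/right halves."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every feature evaluation is constant-time, thousands of Haar features can be tested per sub-window, which is what makes the detector fast enough for video.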
High Security for Bank Locker using Facial Recognition
Published in P. C. Thomas, Vishal John Mathai, Geevarghese Titus, Emerging Technologies for Sustainability, 2020
Therese Yamuna Mahesh, Diya Achu Pradeep, Renuka Pradeep, K. S. Silpa, Sreyas Dileep
The literature survey covers various methods for providing authentication in various areas. [1] Finding facial features in images is an important step in applications such as eye tracking, face recognition, and facial expression recognition. In the paper by Bhavana K Pancholi, a method is developed for detecting a face in a live image. Face detection is one of the most complex and challenging problems in computer vision, owing to the large variations caused by changes in facial appearance, lighting, and expression. The face is detected in the whole image using the Viola–Jones algorithm, proposed by Paul Viola and Michael Jones. A Haar-feature-based AdaBoost algorithm is then used to extract the facial region from the image, and cascading of stages is used to make the process faster. In real-time applications such as surveillance and biometrics, camera limitations and pose variations make the distribution of human faces in feature space more complicated than that of frontal faces, which further complicates robust face detection. For detection, the Viola–Jones algorithm is used. Since all human faces share certain similarities, Haar features are used to detect a face in the image: the algorithm looks for specific Haar features of a face, and if a feature is found, the candidate is passed to the next stage. The candidate is not the whole image; rather, a rectangular part of the image known as a sub-window, with a size of 24×24 pixels, is taken, and with this window the algorithm scans the whole image [1].
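The scanning-and-cascading scheme described above can be sketched as follows. The two stage classifiers here are hypothetical stand-ins for illustration only (a real detector learns its stages with AdaBoost over Haar features), but the control flow, scanning 24×24 sub-windows and rejecting a candidate at the first failing stage, matches the description:

```python
import numpy as np

WINDOW = 24  # Viola-Jones base sub-window size in pixels

def scan_image(gray, stages, step=4):
    """Slide a 24x24 sub-window over the image; a window is kept as a
    face candidate only if every cascade stage accepts it."""
    h, w = gray.shape
    detections = []
    for y in range(0, h - WINDOW + 1, step):
        for x in range(0, w - WINDOW + 1, step):
            window = gray[y:y + WINDOW, x:x + WINDOW]
            # Early rejection: most windows fail one of the first cheap stages.
            if all(stage(window) for stage in stages):
                detections.append((x, y, WINDOW, WINDOW))
    return detections

# Hypothetical stages for illustration; real stages threshold sums of
# Haar-feature responses chosen by AdaBoost.
stages = [
    lambda win: win.std() > 10,   # reject near-uniform regions
    lambda win: win.mean() > 40,  # reject very dark regions
]
```

The cascade's efficiency comes from ordering the stages so that the cheapest tests discard the vast majority of sub-windows before the expensive ones run.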
Enhanced Facial Emotion Recognition by Optimal Descriptor Selection with Neural Network
Published in IETE Journal of Research, 2023
P. M. Ashok Kumar, Jeevan Babu Maddala, K. Martin Sagayam
The developed model includes the following processes: “(a) Image pre-processing, (b) Extraction of facial components, (c) optimal descriptor selection, and (d) classification”. Initially, pre-processing is performed on the input facial images using two steps: face detection and noise filtering. The Viola–Jones algorithm, a popular face detection mechanism chosen for its effective detection rate and flexibility, is used for face detection, and noise removal is performed by Gabor filtering, which is also widely used for image pattern recognition and edge detection. Once pre-processing of the facial images is complete, feature extraction is done using ASIFT, an improved version of SIFT. ASIFT generates a large number of descriptors, so their number is reduced by an optimal descriptor selection process. Here, two optimization algorithms, WOA and MVO, are combined, and the resulting algorithm, termed MV-WOA, performs the optimal descriptor selection. The selected descriptors are taken as features and fed to a neural network (NN); as an improvement, the number of hidden neurons of the NN is also optimized by the same MV-WOA. Optimal descriptor selection and classification are carried out to maximize classification accuracy while performing FER. Thus, the seven facial emotions, “fear, anger, disgust, happy, sad, neutral and surprise”, are categorized for the input test image.
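As a rough illustration of the Gabor filtering step, a Gabor kernel (a Gaussian envelope modulating a sinusoid) can be constructed and then convolved with the image. The parameter values below are arbitrary illustrative choices, not those used in the cited paper:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope modulating a cosine
    wave of wavelength `lambd`, oriented at angle `theta` (radians).
    `gamma` is the spatial aspect ratio of the envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / lambd)
```

Filtering the image then amounts to a 2-D convolution of the image with this kernel (e.g. via `scipy.signal.convolve2d`); a bank of such kernels at several orientations is typically used.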
Improved deep generative adversarial network with illuminant invariant local binary pattern features for facial expression recognition
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Priyanka A. Gavade, Vandana S. Bhat, Jagadeesh Pujari
Facial expression is a common non-verbal channel through which humans convey emotions. This paper concentrates on modelling the FER approach by exploiting the Taylor-CSO-based Deep GAN. The adopted model comprises video frame extraction, pre-processing, face detection, feature extraction, and the FER phase. The input video sequences are gathered from the dataset and passed to the frame extraction step, in which video frames are extracted from the sequences. Pre-processing is performed using ROI-based extraction, and the Viola–Jones algorithm is employed for face detection. Feature extraction is performed using the IILBP feature, which is derived by modifying the LBP descriptor. For recognition, the Deep GAN (Usman et al. 2020) is used, tuned with the adopted Taylor-CSO model; Taylor-CSO is a combination of the Taylor series (Mangai et al. 2014) and CSO (Meng et al. 2014; Subramanyam et al. 2019; Kumar and Vimala 2020). Figure 1 exhibits the system model of the newly adopted FER system.
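The IILBP feature is a modification of the standard LBP descriptor; a minimal version of the basic 3×3 LBP, from which such variants are built, looks like this (a generic sketch, not the cited authors' implementation):

```python
import numpy as np

def lbp_3x3(gray):
    """Basic local binary pattern: each interior pixel is encoded by comparing
    its 8 neighbours to the centre, producing an 8-bit code per pixel."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = gray[1:-1, 1:-1]
    # Neighbour offsets in clockwise order, starting at the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[dy:dy + h - 2, dx:dx + w - 2]
        # Set this bit wherever the neighbour is at least as bright as the centre.
        out |= ((neighbour >= centre) << bit).astype(np.uint8)
    return out
```

A histogram of these 8-bit codes over a face region then serves as a texture feature; LBP variants such as IILBP modify the comparison rule to gain robustness (here, to illumination).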
A practical implementation of mask detection for COVID-19 using face detection and histogram of oriented gradients
Published in Australian Journal of Electrical and Electronics Engineering, 2022
Salim Chelbi, Abdenour Mekhmoukh
Numerous implementations of face mask detection systems came forth after researchers and scientists proved that wearing face masks helps minimise the spreading rate of COVID-19. In Qin and Li 2020, the authors classified three categories of face-mask-wearing conditions: no face-mask wearing, correct face-mask wearing, and incorrect face-mask wearing. The result achieved an accuracy of 98.70% in the face detection phase. Sabir et al. (Ejaz et al. 2019) performed facial recognition on masked and unmasked faces by applying Principal Component Analysis (PCA). However, the recognition accuracy drops to less than 70% when the recognised face is masked. In Park et al. 2005, the authors also used PCA, proposing a method for removing glasses from a human frontal facial image.
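The PCA-based recognition mentioned above (the eigenfaces approach) can be sketched as follows; this is a generic illustration of the technique, not the cited authors' implementation:

```python
import numpy as np

def fit_pca(faces, n_components):
    """faces: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top principal axes (eigenfaces)."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # SVD of the centred data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Represent a face by its coordinates in eigenface space."""
    return components @ (face - mean)

def nearest_identity(face, gallery_coords, mean, components):
    """Classify by nearest neighbour among the projected gallery faces."""
    q = project(face, mean, components)
    dists = [np.linalg.norm(q - g) for g in gallery_coords]
    return int(np.argmin(dists))
```

The accuracy drop on masked faces follows naturally from this formulation: the mask occludes roughly half the pixels that the eigenfaces were computed from, so projected coordinates shift far from the enrolled gallery points.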