Deep Face Recognition
Published in Hassan Ugail, Deep Learning in Visual Computing, 2022
Principal Component Analysis (PCA) was introduced in 1901 and is an unsupervised machine learning model widely used for dimensionality reduction and image compression. It is a technique for encoding large face images compactly in the “face space”. In 1991, the eigenface method was applied to construct face recognition algorithms. Eigenfaces extract the most important details, known as “eigenvectors”, from the faces; those corresponding to the largest eigenvalues represent the greatest variations. Essentially, when it comes to face recognition, the concept of an average face is interesting, and in some ways it appears to “mimic” human face recognition. Attempts to use the average face as a tool for computer-based face recognition are plentiful in the published literature. Another commonly used technique is Local Binary Patterns (LBP), a simple yet powerful feature for texture classification. This approach divides an image into regions, extracts features from each region separately, and those features are subsequently used for classification.
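The region-wise LBP feature extraction described above can be sketched in a few lines of NumPy. This is a minimal illustration (not code from the chapter): it uses the basic 3×3 LBP operator, and the function names, grid size, and normalization choices are ours.

```python
import numpy as np

def lbp_image(img):
    """Compute the basic 3x3 Local Binary Pattern code for each interior pixel.

    Each pixel's 8 neighbours are compared to the centre; a neighbour that is
    >= the centre contributes a 1-bit, yielding an 8-bit code per pixel.
    """
    c = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order; the loop index is the bit weight.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, grid=(2, 2)):
    """Divide the LBP code image into regions and concatenate the
    per-region normalized histograms into one feature vector."""
    codes = lbp_image(img)
    feats = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)
```

The concatenated per-region histograms are what would then be fed to a classifier, so that the feature vector preserves some spatial layout rather than pooling texture statistics over the whole image.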
Applications of Computer Vision
Published in Manas Kamal Bhuyan, Computer Vision and Image Processing, 2019
The eigenface representation technique for face recognition is based on PCA. The concept of PCA has already been discussed in Chapters 2 and 4. The principal components of the distribution of faces, i.e., the eigenvectors of the covariance matrix of a set of face images, can be found. Each of these eigenvectors can be displayed as a face image, and they are called eigenfaces. Each face image can be decomposed into a set of eigenfaces. These eigenfaces are the principal components of the original face image, and they can be regarded as the orthogonal basis vectors of a linear subspace called the face space. Hence, each face image can be represented exactly as a linear combination of all the eigenfaces, and any face can be approximated well using only the best eigenfaces. The best eigenfaces have the largest eigenvalues, and hence they account for most of the variance among the set of face images. During training, each of the K training faces of size M × N is converted into a vector of length M × N, giving K such vectors. For face recognition, a new face image is projected into the face space and its position in the face space is compared with those of the known training faces. Finally, a classification decision can be taken on the basis of this comparison.
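The training and recognition steps above can be sketched with NumPy. This is an illustration, not code from the chapter: it computes the eigenfaces via an SVD of the centred data (equivalent to eigendecomposing the covariance matrix), and all function and variable names are ours.

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """faces: array of shape (K, M, N) -- K training images of size M x N."""
    K = faces.shape[0]
    X = faces.reshape(K, -1).astype(float)   # each face flattened to length M*N
    mean_face = X.mean(axis=0)
    A = X - mean_face                        # centre the data
    # The right singular vectors of A are the eigenvectors of the covariance
    # matrix A^T A / K, i.e. the eigenfaces, ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    eigenfaces = Vt[:n_components]           # keep the "best" eigenfaces
    weights = A @ eigenfaces.T               # training faces in face space
    return mean_face, eigenfaces, weights

def recognize(face, mean_face, eigenfaces, weights):
    """Project a new face into the face space and return the index of the
    nearest training face (Euclidean distance between weight vectors)."""
    w = (face.ravel().astype(float) - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

Working with the SVD of the K × (M·N) data matrix avoids ever forming the (M·N) × (M·N) covariance matrix explicitly, which is the standard trick when K is much smaller than the number of pixels.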
Human-robot interaction and robots for human society
Published in Arkapravo Bhaumik, From AI to Robotics, 2018
The method of Haar classifiers proposed by Viola and Jones for real-time detection has become the standard for face detection algorithms and is the default in various image-processing software suites. Though easy to implement, Haar classifiers are not exhaustive and lack robustness, with the best results obtained for frontal views of faces. The eigenface method, by contrast, uses the image matrix of faces and compares it against a calibration, and it relies on a pre-existing large database.
A comprehensive study on face recognition: methods and challenges
Published in The Imaging Science Journal, 2020
Parekh Payal, Mahesh M. Goyani
The most intuitive way to carry out face recognition is to look at the significant features of a face and compare them with the corresponding features of other faces. The first real attempts to develop semi-automated facial recognition systems began in the late 1960s and early 1970s and depended on geometric information [8]. Darwin's earlier work includes an analysis of the different facial expressions arising from different emotional states. During the 1960s, Woodrow Wilson Bledsoe developed a system that could classify photographs of faces by hand, using what is known as a RAND tablet. Relative distances from facial landmarks to a common reference point were computed and compared with reference data. During the 1970s, Goldstein improved accuracy by using 21 fiducial points, including lip thickness and hair colour, to identify faces automatically [8]. Sirovich and Kirby [9] applied the eigenface approach, built on linear algebra, to the problem of face recognition.
Automated Defect Detection in Spent Nuclear Fuel Using Combined Cerenkov Radiation and Gamma Emission Tomography Data
Published in Nuclear Technology, 2018
Eva Brayfindley, Ralph C. Smith, John Mattingly, Robert Brigantic
Eigenfaces is PCA applied to the problem of facial recognition.15 The principal components are features across the data set. The “faces” are then reconstructed as combinations of features and subsequently classified using the coefficients in their feature combination.15 In the work presented here, the classifier is k-Nearest Neighbor (kNN), but this choice of method is a subject of future research. In both our image data sets, fuel rods are similar to faces in that they are round with regions of dark and light, a “face” here being simply an image with internal contrast. This implies a high degree of applicability of facial recognition techniques to the problem of defect detection in spent fuel. Sinograms present a side view of the fuel rods, still with high contrast, and future research will focus on using the sinogram directly rather than the reconstruction.
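The pipeline described here, PCA coefficients followed by a kNN vote, can be sketched with NumPy. This is a generic illustration of the technique, not the authors' implementation; the function names, distance metric (plain Euclidean), and default k are our assumptions.

```python
import numpy as np

def pca_coefficients(images, n_components):
    """Flatten the images, centre them, and return their coefficients in the
    top principal-component basis (plus the mean and basis for reuse)."""
    X = images.reshape(images.shape[0], -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]
    return (X - mean) @ basis.T, mean, basis

def knn_classify(query, train_coeffs, train_labels, mean, basis, k=3):
    """Classify one image by majority vote among its k nearest neighbours
    in PCA-coefficient space."""
    q = (query.ravel().astype(float) - mean) @ basis.T
    nearest = np.argsort(np.linalg.norm(train_coeffs - q, axis=1))[:k]
    vals, counts = np.unique(train_labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]
```

Because the classification happens entirely in the low-dimensional coefficient space, swapping kNN for another classifier (as the authors leave to future research) only requires replacing `knn_classify`.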