Real Time Face Detection using Raspberry Pi 3B+
Published in Rajesh Singh, Anita Gehlot, P.S. Ranjit, Dolly Sharma, Futuristic Sustainable Energy and Technology, 2022
V Sai Ganesh Reddy, Navjot Rathour, G Siddhu Ganesh, Yaswanth, Satish Kumar
In facial detection and recognition, the important training step is fitting a standard machine learning model. A machine learning model is trained to recognise certain types of patterns; in this project, it is trained to recognise faces, so that the trained model can identify the actual person via 128-d embeddings, which are unique for every face. Several options are available, such as Random Forest (a versatile and powerful supervised machine learning algorithm that builds a forest by combining multiple decision trees), the SVM model, and the k-NN model. When it comes to training a smaller dataset, the k-NN model is useful via the face_recognition library and dlib. In this work, a more robust classifier known as the support vector machine (SVM) is used, implemented with the help of Scikit-learn. The kernel used in this process is the radial basis function kernel, which is trickier to tune than the linear kernel, so a process known as grid searching is used; this kind of searching helps to find the optimal parameters for a particular model during training. Fig 8 shows the training flow chart for unique images.
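The k-NN option mentioned above can be sketched in a few lines. This is a minimal, hedged illustration: the 128-d vectors below are synthetic stand-ins drawn around two hypothetical "identity" centroids, whereas in the real pipeline they would come from the dlib / face_recognition encoder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins for 128-d face embeddings (assumption: real ones
# would be produced by dlib / face_recognition from face images).
rng = np.random.default_rng(0)
alice = rng.normal(loc=0.0, scale=0.05, size=(20, 128))
bob = rng.normal(loc=0.5, scale=0.05, size=(20, 128))
X = np.vstack([alice, bob])
y = np.array(["alice"] * 20 + ["bob"] * 20)

# k-NN suits small embedding sets: classification is just a
# nearest-neighbour lookup in the 128-d embedding space.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# A probe embedding near the second centroid is classified accordingly.
probe = rng.normal(loc=0.5, scale=0.05, size=(1, 128))
pred = knn.predict(probe)[0]
print(pred)  # → bob
```

Because every prediction is a distance lookup, k-NN needs no real training phase, which is why it is convenient when the gallery of known faces is small.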
Fast method of face recognition in real time using Raspberry-Pi and Intel Movidius NCS
Published in Rajesh Singh, Anita Gehlot, Intelligent Circuits and Systems, 2021
Navjot Rathour, Rajesh Singh, Anita Gehlot
The next important step is training a standard machine learning model so that the trained model can identify an actual person via those extracted 128-d embeddings, which are unique for each face. Plenty of options are available, like Random Forest, SVM, and k-NN. When it comes to training a smaller dataset, k-nearest neighbour is useful via the face_recognition [18] library and dlib [17]. In this work, a more robust classifier known as the support vector machine (SVM) is used. It has been achieved with the help of Scikit-learn. The kernel used in this process is the Radial Basis kernel [19]. This kernel is quite tricky to tune when compared with the linear kernel, so a process known as 'grid searching' has been used. This process helps to find the optimal parameters for a particular model during machine learning.
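The grid-search step described above can be sketched with Scikit-learn's `GridSearchCV` over the two RBF hyper-parameters, `C` and `gamma`. The embeddings here are synthetic stand-ins (three well-separated 128-d clusters, one per hypothetical person); in the actual system they would come from the face_recognition / dlib encoder.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-ins for per-person 128-d embeddings (assumption: the
# real pipeline extracts these with the face_recognition / dlib encoder).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(i, 0.1, size=(15, 128)) for i in range(3)])
y = np.repeat(["person_a", "person_b", "person_c"], 15)

# Grid search over C and gamma, the two RBF-kernel hyper-parameters the
# text calls "tricky to tune"; 3-fold cross-validation scores each combo.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)

# Classify a probe embedding near the second person's cluster.
probe = rng.normal(1, 0.1, size=(1, 128))
pred = search.predict(probe)[0]
print(search.best_params_, pred)
```

`GridSearchCV` refits the best-scoring model on the full training set, so `search` can be used directly as the final classifier.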
Prescriptive and Predictive Analytics Techniques for Enabling Cybersecurity
Published in Kuan-Ching Li, Beniamino DiMartino, Laurence T. Yang, Qingchen Zhang, Smart Data, 2019
Nitin Sukhija, Sonny Sevin, Elizabeth Bautista, David Dampier
In recent years, many machine learning libraries and frameworks have emerged, many of them free and open-source, that can be used to implement the algorithms described in this chapter. For instance, TensorFlow is a popular library developed by Google with APIs for Python, C++, Java, Go, and even JavaScript [78]; it includes both high-level APIs, like Keras, and low-level APIs for advanced uses. Moreover, Python is one of the most popular languages for writing machine learning systems, with many mature libraries available [72]. Some of the more popular libraries are Scikit-learn [79], PyBrain [80], and Pylearn2 [81]. The C++ language, which offers higher performance than Python at the trade-off of longer development time, has libraries like Shark and dlib-ml for writing machine-learning code [82]. For Java users, there also exist libraries such as Java-ML [7], Encog [83], and MULAN [84]. Furthermore, Apache Spark is a widely used data analytics platform that includes the MLlib library for distributed machine learning [85]. MLlib provides APIs for several popular languages, like Java, Python, and Scala, and provides excellent performance for analyzing large datasets on distributed systems.
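As a small illustration of the estimator API shared by several of the libraries listed above, here is a minimal Scikit-learn sketch; the bundled Iris dataset is used purely so the example is self-contained, not because it relates to any particular security workload.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Every Scikit-learn model follows the same fit / predict / score pattern,
# which is what makes swapping algorithms in and out straightforward.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

The uniform interface means the random forest above could be replaced by an SVM or k-NN classifier by changing only the constructor call.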
Deep-fake video detection approaches using convolutional – recurrent neural networks
Published in Journal of Control and Decision, 2023
Shraddha Suratkar, Sayali Bhiungade, Jui Pitale, Komal Soni, Tushar Badgujar, Faruk Kazi
For detecting deep-fakes in a video, the method focuses on the faces in the video frames. The Dlib library (by Davis King) is used for detecting the faces in the frames of the video dataset. After detecting the faces in the frames, the next important step is to extract the facial features. The process of extracting facial features involves extracting facial landmarks such as the eyes, nose, and lips. In this process, the large number of pixels present in a face image is represented in such a way that only the image's interesting parts are captured and analysed efficiently. The proposed method uses a CNN (Figure 2) as the facial feature extractor. The CNN comprises various types of layers, viz. a convolutional layer and a max-pooling layer, in addition to a fully connected layer.
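The three layer types named above can be sketched in plain NumPy. This is only a didactic sketch of the operations (convolution, ReLU, max-pooling, flattening into a fully connected layer's input), not the trained CNN from the paper; the 6×6 "face patch" and the edge filter are invented for illustration.

```python
import numpy as np

# Didactic numpy sketch of the CNN layer types named in the text.

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as CNN frameworks do it)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(x, size=2):
    """Non-overlapping max-pooling; trailing rows/cols are cropped."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

face_patch = np.arange(36, dtype=float).reshape(6, 6)  # stand-in "face" crop
edge_kernel = np.array([[-1., 1.], [-1., 1.]])         # simple vertical-edge filter

# conv -> ReLU -> max-pool, then flatten for the fully connected layer.
features = maxpool2d(np.maximum(conv2d(face_patch, edge_kernel), 0))
flat = features.ravel()
print(flat.shape)  # → (4,)
```

Each stage shrinks the representation, which is exactly the "only the interesting parts are captured" behaviour the passage describes.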
Multimodal face shape detection based on human temperament with hybrid feature fusion and Inception V3 extraction model
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Srinivas Adapa, Vamsidhar Enireddy
In this research, multimodal data-based face shape detection based on human temperament was performed using hybrid feature fusion analysis and the Inception V3 extraction model. Recently, the automatic detection of face shape from multimodal face image representations has received much attention and remains one of the most important challenges in the field of affective computing. Here, video is captured by a Logitech C270 HD web camera via a live recording session. The major aim of the research is the detection of face shapes such as oval, round, triangular and square based on the four basic human temperaments: sanguine (air), phlegmatic (water), choleric (fire) and melancholic (earth). Therefore, a deep-learning-based face shape detection using human temperament is proposed, which shows the best performance over other DL models. The Dlib library was used to detect facial landmarks by extracting image values. In the feature extraction stage, facial features such as eye distance, nose length, forehead length and width, and lip thickness are extracted from the image. The Inception V3 model is utilised to extract the intrinsic features from the video. The proposed model shows high detection performance and achieved accuracy (98.51%, 98.86%), precision (96.14%, 97.89%), recall (96.34%, 97.95%), F-measures (96.24%, 97.94%), FPR (0.0093, 0.0085) and FNR (0.0365, 0.0352) on two datasets. In addition, a statistical test was conducted to show the efficiency of the proposed method.
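The landmark-to-feature step can be sketched as simple Euclidean distances between landmark points. The coordinates below are hypothetical hard-coded values; in the actual system they would come from Dlib's landmark detector, and the landmark names are invented for readability.

```python
import numpy as np

# Hypothetical (x, y) landmark positions; in the real pipeline these
# would be returned by Dlib's facial landmark detector for each frame.
landmarks = {
    "left_eye_outer":  np.array([36.0, 50.0]),
    "right_eye_outer": np.array([96.0, 50.0]),
    "nose_bridge_top": np.array([66.0, 45.0]),
    "nose_tip":        np.array([66.0, 80.0]),
    "upper_lip":       np.array([66.0, 95.0]),
    "lower_lip":       np.array([66.0, 103.0]),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# Geometric features of the kind listed in the text.
eye_distance  = dist("left_eye_outer", "right_eye_outer")
nose_length   = dist("nose_bridge_top", "nose_tip")
lip_thickness = dist("upper_lip", "lower_lip")
feature_vector = [eye_distance, nose_length, lip_thickness]
print(feature_vector)  # → [60.0, 35.0, 8.0]
```

Vectors of such distances, fused with the Inception V3 features, would then feed the downstream classifier.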
Multi-Index Measurement of Fatigue Degree Under the Simulated Monitoring Task of a Nuclear Power Plant
Published in Nuclear Technology, 2021
Gang Zhang, Shibo Mei, Kaijie Xie, Zhen Yang
Face videos of the subjects were recorded during the experiment, and P80 was calculated by clipping the last 2-min eye-blinking video of each experimental subject. After the streaming video was read, the dlib library (used together with OpenCV) was used to acquire the eye position. The dlib library can directly acquire the eyelid coordinates to calculate the opening and closing degrees. However, the eye movement of each subject fluctuated severely during the operation. The upper-to-lower eyelid distance was obtained in each frame, but this value was still unstable. On this basis, Junaedi and Akbar's21 PERCLOS calculation method was taken as a reference. First, the eye position in each image was determined via the dlib library. The eye image was converted into a grayscale image and binarized. Thereafter, the pupil boundary size was determined. Finally, the number of pupil pixels along the y-coordinate in each video frame was counted. The difference from Junaedi and Akbar's method was that the morphological opening operation (erosion followed by dilation) was not carried out on the acquired pupil image, because the light source was stable in our preliminary experiment and the acquired pupil image had few missing pixel points.
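The binarize-and-count step can be sketched as follows. The 10×10 array is a synthetic stand-in for a grayscale eye crop (in the real pipeline the crop would be located by dlib from a video frame), and the threshold of 128 is an assumed value.

```python
import numpy as np

# Synthetic stand-in for a grayscale eye crop: bright sclera/skin with a
# dark rectangular "pupil" region (assumption: real crops come from dlib).
eye = np.full((10, 10), 200, dtype=np.uint8)
eye[3:8, 4:7] = 30  # dark pupil pixels

# Binarize: pixels darker than the (assumed) threshold become the pupil mask.
binary = (eye < 128).astype(np.uint8)

# Count pupil pixels along the y axis: rows containing any pupil pixel
# give the vertical pupil extent, which shrinks as the eyelid closes.
pupil_rows = binary.any(axis=1)
pupil_height = int(pupil_rows.sum())
print(pupil_height)  # → 5
```

Tracking `pupil_height` across frames and thresholding it at 20% of its open-eye value is the essence of the P80 (PERCLOS) criterion described above.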