Convolutional Neural Network-Based Pattern Recognition Technique for Diabetic Retinopathy Detection
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
Rahul K Chaurasiya, Astha, Anmol Sarna
There has been speculation that low variance and high bias are good properties for image classifiers, and studies have proved this to be true. However, these are not the only two major factors: the algorithms must also possess advanced feature detection capability (Seoud, L., 2015). ML techniques based on k-nearest neighbors (KNN) and linear support vector machines (SVM) have also been employed (Choudhary, S. S., 2016). A comparison of various traditional feature-extraction-based pattern recognition techniques shows that SVM, subspace discriminant analysis, and boosted trees provided comparable performance, with average classification accuracies of 81%, 82%, and 82%, respectively (Karanjgaokar, D., 2017). A BPSO-based feature reduction method has also been employed to reduce the feature dimension and improve classification accuracy (Karanjgaokar, D., 2017). However, a properly tuned multi-class kernel-based SVM method has produced the best accuracy, 95%, in the detection of DR (Adarsh, P., 2013).
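The KNN approach mentioned above can be sketched minimally. The toy 2-D feature vectors and labels below are illustrative placeholders, not data from the cited studies:

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    # Euclidean distance from the query to every training feature vector
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g. lesion count, vessel tortuosity)
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["healthy", "healthy", "DR", "DR"]
print(knn_classify(train, labels, (0.85, 0.85)))  # -> DR
```

An SVM replaces the majority vote with a learned maximum-margin decision boundary, but the pipeline (feature vector in, class label out) is the same.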
Perception
Published in Hanky Sjafrie, Introduction to Self-Driving Vehicle Technology, 2019
By contrast, object detection is about identifying objects in your vicinity that may pose a hazard, in particular moving objects (DATMO, detection and tracking of moving objects). This is about answering the question, 'How can I avoid collision with another object?'. As explained in Section 3.4, object detection consists of three main sub-problems: object localization, object classification and semantic segmentation. The common approach requires a combination of feature extraction and classification. Feature detection can be done using techniques such as the histogram of oriented gradients, the scale-invariant feature transform or maximally stable extremal regions. All of these approaches try to extract the intrinsic characteristics that allow a given feature to be distinguished from other features. Classification then compares these features to known features in order to assign them to one of several categories, e.g., pedestrian, vehicle, footpath, etc. The main techniques here are support vector machines, random forests and artificial neural networks.
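The gradient-histogram idea behind HOG can be sketched in a few lines. This is a toy single-patch descriptor under simplifying assumptions; a real HOG implementation adds cells, overlapping blocks and contrast normalisation:

```python
import math

def orientation_histogram(img, bins=9):
    """Toy HOG-style descriptor for one grey-level patch (2-D list of numbers):
    accumulate gradient magnitude into unsigned-orientation bins."""
    hist = [0.0] * bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang // (180 / bins)) % bins] += mag
    return hist

# A patch with a vertical edge: all gradient energy lands in the 0-degree bin
patch = [[0, 0, 10, 10]] * 4
print(orientation_histogram(patch))
```

The resulting fixed-length vector is what a classifier such as an SVM would then compare against known pedestrian or vehicle descriptors.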
Feature Extraction and Classification for Environmental Remote Sensing
Published in Ni-Bin Chang, Kaixu Bai, Multisensor Data Fusion and Machine Learning for Environmental Remote Sensing, 2018
The concept of feature extraction refers to the construction of a set of feature vectors from the input data to facilitate the complex decision-making process. Generally, a process of feature extraction should consist of the following three essential steps: (1) feature detection, (2) feature construction, and (3) feature selection. Feature detection aims at computing abstractions of the input image by finding the inherent unique objects, such as edges or blobs. More specifically, it refers to a discriminative process involving making local decisions at every pixel (i.e., image point) to examine whether a given type of feature is present at that point. The rationale is to find targets of interest that can be used for feature construction and to finally choose an appropriate feature detector. Based on the types of image features, common feature detectors can be grouped into three categories: (1) edge, (2) corner, and (3) blob. The desirable property of a feature detector is repeatability, which means that the detected features should be locally invariant, even in the presence of temporal variations or rotations. The same feature should be detectable in two or more different images of the same scene, regardless of changes in orientation or scale.
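The per-pixel local decision described above can be sketched for the simplest detector category, an edge detector. This is a minimal illustration using central differences; practical detectors such as Sobel or Canny add smoothing and non-maximum suppression:

```python
import math

def detect_edges(img, thresh=5.0):
    """Per-pixel decision: mark a pixel as an edge when its gradient
    magnitude exceeds `thresh`. `img` is a 2-D list of grey values."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2
            gy = (img[y + 1][x] - img[y - 1][x]) / 2
            edges[y][x] = math.hypot(gx, gy) > thresh
    return edges

img = [[0, 0, 0, 20, 20]] * 5   # vertical step edge between columns 2 and 3
e = detect_edges(img)
```

Because the decision depends only on local intensity differences, the same edge is detected after a global brightness shift, which is a small instance of the repeatability property described above.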
Image decomposition-based sparse extreme pixel-level feature detection model with application to medical images
Published in IISE Transactions on Healthcare Systems Engineering, 2021
Geet Lahoti, Jialei Chen, Xiaowei Yue, Hao Yan, Chitta Ranjan, Zhen Qian, Chuck Zhang, Ben Wang
Image segmentation is an important task in computer vision. It involves partitioning an image into multiple segments. Each segment can be regarded as consisting of a group of pixel-level features (Sanjay-Gopal & Hebert, 1998). Therefore, a segmentation task can also be regarded as pixel-level feature detection, which is encountered in domains such as manufacturing and medicine. For example, (a) in manufacturing systems, an image of a product is sensed for product inspection; (b) in medicine, an image of a body part is acquired for disease diagnosis and treatment planning. Both product inspection and disease diagnosis require detecting relevant pixel-level features from the image. In the manufacturing example, feature detection involves extracting defects or anomalies (Yan et al., 2017), whereas in the case of medical imaging, it involves segmenting pathological structures (Chen et al., 2019). Since images are high-dimensional and exhibit complex spatial structure, feature detection is challenging. Moreover, image sensing and acquisition systems introduce measurement noise that reduces the contrast between the background and features, thus further complicating the feature detection task (Yan et al., 2017).
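The interplay of noise and pixel-level feature detection can be illustrated with the simplest possible segmenter, a threshold preceded by mean filtering to suppress acquisition noise. This is only a toy baseline; the sparse decomposition model the paper itself proposes is far more sophisticated:

```python
def segment(img, thresh):
    """Pixel-level feature detection by thresholding: label a pixel as
    foreground (1) when its 3x3 mean-filtered value exceeds `thresh`.
    The mean filter (with border clamping) suppresses measurement noise."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = 1 if sum(vals) / 9 > thresh else 0
    return out

# Noisy background (values near 0) with a bright 2x2 "lesion" in the centre
img = [[1, 0, 2, 1],
       [0, 9, 9, 0],
       [2, 9, 9, 1],
       [1, 0, 2, 0]]
mask = segment(img, thresh=3)
```

The returned binary mask is exactly the "group of pixel-level features" view of a segment: each foreground pixel is an individual detection decision.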
Personalized teleoperation via intention recognition
Published in Advanced Robotics, 2018
Serge Mghabghab, Imad H. Elhajj, Daniel Asmar
In 1998, the CNN was introduced in order to solve the training duration problem using a more efficient interconnection between its neurons [13]. A CNN mainly relies on four layer types: convolution layers, subsampling layers, activation layers and fully connected layers. Using these layers with the appropriate configuration and kernel sizes gives the CNN the following characteristics:
- Ability to detect non-linear, complex relationships between variables. Feature detection is done by the convolution layers, which learn edges, curvatures, shapes and many other complex features.
- Ability to detect a feature in an image regardless of its location, due to the shared weights and shift-invariant architecture.
- A reduced number of parameters in the training process, due to the sharing of weights and biases in each feature map.
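The shared-weight and shift-tolerance properties above can be demonstrated with a minimal convolution and subsampling pair. This is a hand-rolled sketch, not a training framework; the kernel is fixed rather than learned:

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries):
    the same kernel weights are shared across every image position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(w)] for y in range(h)]

def max_pool(fmap, size=2):
    """Subsampling layer: keep the maximum in each size x size window,
    which makes the detected feature tolerant to small shifts."""
    return [[max(fmap[y + i][x + j] for i in range(size) for j in range(size))
             for x in range(0, len(fmap[0]) - size + 1, size)]
            for y in range(0, len(fmap) - size + 1, size)]

edge_kernel = [[-1, 1]]          # responds to a left-to-right intensity rise
img     = [[0, 0, 1, 1]] * 4     # vertical edge at column 2
shifted = [[0, 1, 1, 1]] * 4     # same edge shifted one pixel left
p1 = max_pool(conv2d(img, edge_kernel))
p2 = max_pool(conv2d(shifted, edge_kernel))
print(p1 == p2)  # True: the pooled response ignores the small shift
```

One kernel (here two weights) covers the whole image, which is the parameter saving described above, and pooling is what makes the detection location-tolerant.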
A new artificial intelligent approach to buoy detection for mussel farming
Published in Journal of the Royal Society of New Zealand, 2023
Ying Bi, Bing Xue, Dana Briscoe, Ross Vennell, Mengjie Zhang
Feature detection and description are important for solving object recognition and detection tasks. Feature detection aims to detect representative information by finding or locating keypoints, e.g. corners, blobs, lines/edges, and morphological regions (Ma et al. 2021). Typical keypoint detection and description methods include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and oriented FAST and rotated BRIEF (ORB) (Tareen and Saleem 2018).
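The core keypoint-location step shared by these methods can be sketched as a local-maximum search over a response map. This toy version works on raw intensities in a single image; SIFT, SURF and ORB apply the same idea to multi-scale response maps (difference-of-Gaussians, Hessian determinant, FAST corner scores respectively):

```python
def local_maxima(img, thresh=0):
    """Toy keypoint detector: a pixel is a keypoint when it strictly
    exceeds all 8 neighbours and the threshold. Returns (x, y) tuples."""
    h, w = len(img), len(img[0])
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            if v > thresh and all(
                v > img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            ):
                pts.append((x, y))
    return pts

img = [[0, 0, 0, 0, 0],
       [0, 5, 0, 0, 0],
       [0, 0, 0, 9, 0],
       [0, 0, 0, 0, 0]]
print(local_maxima(img))  # -> [(1, 1), (3, 2)]
```

The description step then summarises the neighbourhood of each such keypoint (e.g. as a gradient histogram in SIFT or a binary pattern in ORB) so that keypoints can be matched between images.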