Supervised Learning
Published in Subasish Das, Artificial Intelligence in Highway Safety, 2023
To minimise this loss, a classifier is trained to make strongly positive or negative predictions for the examples it classifies correctly and, for the examples it gets wrong, to make predictions of the smallest possible magnitude. A support vector machine (SVM) is a linear classifier trained with the hinge loss.
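To illustrate this behaviour, here is a minimal NumPy sketch of the binary hinge loss; the scores and labels are hypothetical toy values, not taken from the chapter:

```python
import numpy as np

def hinge_loss(scores, labels):
    """Binary hinge loss; labels are expected to be -1 or +1."""
    return np.maximum(0.0, 1.0 - labels * scores)

# A confidently correct prediction incurs zero loss, a confidently
# wrong prediction incurs a large loss, and a wrong prediction of
# small magnitude incurs a loss only slightly above 1.
labels = np.array([+1, +1, +1])
scores = np.array([+2.5, -2.5, -0.1])   # classifier outputs
print(hinge_loss(scores, labels))       # [0.  3.5 1.1]
```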
Machine Learning Applications In Medical Image Processing
Published in Sanjay Saxena, Sudip Paul, High-Performance Medical Image Processing, 2022
Tanmay Nath, Martin A. Lindquist
The most common category of ML algorithms is supervised learning, where the algorithm uses a training dataset to learn a mapping between input variables (or features) $X$ and an output label $y$ [17]. The algorithm uses this mapping to predict the output labels of an unseen new dataset (also known as the testing dataset). If $D_{\text{training}} = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$ represents a training dataset, where $x_i \in X$ is the $i$-th input data sample and $y_i$ represents its corresponding output label, then the supervised learning algorithm estimates the parameters $\theta$ of the mapping function $f : X \to Y$. In terms of medical image processing, $x_i$ could be measurements from an image for subject $i$ and $y_i$ could be the potential outcome (e.g., benign or non-benign cancer). Generally, there are two ways for the algorithm to learn the parameters $\theta$ of the mapping function [18]:

1. Maximize the score $\ell$ between the ground-truth labels $y_{\text{true}}$ and the predicted labels $y_{\text{pred}}$:

$\max_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(y_i, f(x_i, \theta))$

One such score function is the hinge loss, which maximizes the margin between the classes. The hinge loss can be expressed as $\text{loss} = \max(1 - y_{\text{true}} \cdot y_{\text{pred}},\, 0)$, where the values of $y_{\text{true}}$ are expected to be $-1$ or $1$. It is commonly used in support vector machines (SVMs) [19]. Briefly, the SVM is a supervised ML algorithm that makes the decision boundary separating two classes as wide as possible.

2. Minimize the loss $\mathcal{L}$ over $D_{\text{training}}$:

$\operatorname*{arg\,min}_{\theta} \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}(y_i, f(x_i, \theta))$
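As a concrete sketch of the minimisation formulation, the NumPy snippet below fits a linear model $f(x, \theta) = \theta^{T} x$ by subgradient descent on the average hinge loss. The synthetic data, learning rate, and iteration count are illustrative assumptions, not details from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                        # n = 100 samples, 5 features
y = np.sign(X @ np.array([1., -2., 0.5, 0., 1.]))    # labels in {-1, +1}

theta = np.zeros(5)
lr = 0.1
for _ in range(200):
    margins = y * (X @ theta)
    # Subgradient of (1/n) * sum_i max(0, 1 - y_i * theta^T x_i):
    # only samples with margin below 1 contribute.
    active = margins < 1
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(X)
    theta -= lr * grad

print("training hinge loss:",
      np.maximum(0, 1 - y * (X @ theta)).mean())
```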
A global context and pyramidal scale guided convolutional neural network for pavement crack detection
Published in International Journal of Pavement Engineering, 2023
We have performed an ablation study on the Crack500 dataset to demonstrate the impact of the loss function on the model's performance. We have trained four networks under otherwise identical conditions: one with the BCE loss alone, one with BCE combined with TL, one with BCE combined with LHL, and one with BCE, LHL, and TL combined; the results are shown in Table 6. BCE loss is well suited to binary classification when the classes are evenly distributed. In practice, however, cracks cover only a small area of the whole image, so the background (non-crack) class dominates training and the model tends to predict the background class more frequently. To counteract this effect, we combine the TL function with BCE to control the penalties for false positives (FP) and false negatives (FN). As Table 6 shows, adding TL yields a better balance between precision and recall. We additionally use the Lovász hinge loss function, which can be optimised with gradient descent. Combining all three loss functions gives our model its best results.
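A sketch of how such a combined loss might be assembled in PyTorch is given below. The weighting coefficients and the Tversky parameters alpha and beta are hypothetical, and the Lovász hinge follows the commonly published formulation; the paper's exact definitions may differ:

```python
import torch
import torch.nn.functional as F

def tversky_loss(probs, target, alpha=0.7, beta=0.3, eps=1e-6):
    # alpha weights false negatives, beta weights false positives
    # (alpha/beta values here are illustrative assumptions).
    tp = (probs * target).sum()
    fn = ((1 - probs) * target).sum()
    fp = (probs * (1 - target)).sum()
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def lovasz_hinge(logits, labels):
    # Lovasz extension of the Jaccard loss for binary segmentation:
    # sort hinge errors in decreasing order, then weight them by the
    # discrete gradient of the Jaccard index.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    gt_sorted = labels[perm]
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1 - gt_sorted).cumsum(0)
    grad = 1.0 - intersection / union
    if len(grad) > 1:
        grad[1:] = grad[1:] - grad[:-1]
    return torch.dot(F.relu(errors_sorted), grad)

def combined_loss(logits, target, w_tl=1.0, w_lhl=1.0):
    # Weighted sum of BCE, Tversky, and Lovasz hinge terms
    # (the weights are placeholders, not the paper's values).
    logits, target = logits.view(-1), target.view(-1).float()
    bce = F.binary_cross_entropy_with_logits(logits, target)
    tl = tversky_loss(torch.sigmoid(logits), target)
    lhl = lovasz_hinge(logits, target)
    return bce + w_tl * tl + w_lhl * lhl
```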
Effective Processing of Convolutional Neural Networks for Computer Vision: A Tutorial and Survey
Published in IETE Technical Review, 2022
Ronald Tombe, Serestina Viriri
The hinge loss function is normally applied when training classifiers with large margins, e.g. support vector machines (SVMs). The hinge loss function for a multi-class SVM is mathematically defined as

$\mathcal{L}_{\text{hinge}} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} \left[ \max\left(0,\, 1 - \delta(y_i, j)\, \mathbf{w}^{T} \mathbf{x}_i \right) \right]^{p}$

where $\delta(y_i, j) = 1$ when $j = y_i$, and $\delta(y_i, j) = -1$ otherwise. Observe that when $p = 1$ this is the hinge loss function ($L_1$-loss), whereas if $p = 2$ it is the squared hinge loss function ($L_2$-loss) [39]. The investigation in [40] compares the performance of SVMs with the softmax function in deep networks. The performance results on MNIST [41] show that the SVM is superior to the softmax function.
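A NumPy sketch of this multi-class hinge loss, covering both the $p = 1$ and $p = 2$ cases, is shown below; the class scores and labels are hypothetical toy values:

```python
import numpy as np

def multiclass_hinge(scores, y, p=1):
    """scores: (N, K) class scores; y: (N,) true class indices.
    delta is +1 at the true class and -1 elsewhere."""
    N, K = scores.shape
    delta = -np.ones((N, K))
    delta[np.arange(N), y] = 1.0
    per_class = np.maximum(0.0, 1.0 - delta * scores) ** p
    return per_class.sum(axis=1).mean()   # (1/N) * sum_i sum_j [...]^p

scores = np.array([[2.0, -1.0, -0.5],
                   [0.2,  0.1, -1.0]])
y = np.array([0, 1])
print(multiclass_hinge(scores, y, p=1))   # 1.3  (hinge, L1-loss)
print(multiclass_hinge(scores, y, p=2))   # 1.25 (squared hinge, L2-loss)
```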
Ramp loss for twin multi-class support vector classification
Published in International Journal of Systems Science, 2020
Huiru Wang, Sijie Lu, Zhijian Zhou
The standard SVM above applies the hinge loss function $H_s(u) = \max(0,\, s - u)$, where $s$ represents the position of the hinge point, to penalise samples classified with an insufficient margin. The objective function of the SVM can also be expressed as follows:

$\min_{\mathbf{w}, b}\ \frac{1}{2} \|\mathbf{w}\|^{2} + C \sum_{i=1}^{n} H_s\!\left( y_i (\mathbf{w}^{T} \mathbf{x}_i + b) \right)$

With the application of the hinge loss function, the loss incurred by an outlier becomes very large when the outlier lies far from the hyperplane. Eventually, the outliers become support vectors (SVs) and shift the hyperplane toward themselves in an inappropriate manner. Therefore, the SVM is very sensitive to the existence of outliers.
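To see why this motivates the ramp loss of the title, the NumPy sketch below compares the hinge loss with a ramp loss. The ramp form $R_s(u) = H_1(u) - H_s(u)$ and the value $s = -1$ are assumptions drawn from the standard ramp-loss literature, not necessarily the authors' exact definition:

```python
import numpy as np

def hinge(u, s=1.0):
    # H_s(u) = max(0, s - u); s is the hinge point position
    return np.maximum(0.0, s - u)

def ramp(u, s=-1.0):
    # R_s(u) = H_1(u) - H_s(u): identical to the hinge loss near the
    # margin, but capped at 1 - s, so a distant outlier adds no
    # further penalty and cannot drag the hyperplane toward itself.
    return hinge(u, 1.0) - hinge(u, s)

u = np.array([2.0, 0.5, -0.5, -5.0])   # margins y_i * f(x_i)
print(hinge(u))   # [0.  0.5 1.5 6. ]  -> the outlier dominates
print(ramp(u))    # [0.  0.5 1.5 2. ]  -> the loss is capped at 2
```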