A Review of Automatic Cardiac Segmentation using Deep Learning and Deformable Models
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Behnam Rahmatikaregar, Shahram Shirani, Zahra Keshavarz-Motamed
A loss function, loss(y(n), ŷ(n)), is a criterion that computes the difference between the network output y and the label ŷ for each input sample. Summing the loss over all training samples yields the cost function. Cross-entropy is the most common loss function for image classification and segmentation tasks.
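As a minimal illustration (not from the chapter), a per-sample cross-entropy loss and its summation into a cost function might look like the following PyTorch sketch; the tensor shapes and class count are assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical example: 4 samples, 3 classes (shapes are illustrative assumptions).
logits = torch.randn(4, 3)            # network outputs y(n) before softmax
labels = torch.tensor([0, 2, 1, 2])   # ground-truth labels y_hat(n)

# Per-sample loss: cross-entropy between the predicted distribution and the label.
per_sample_loss = F.cross_entropy(logits, labels, reduction="none")

# Cost function: summation of the loss over the training samples.
cost = per_sample_loss.sum()
print(per_sample_loss, cost)
```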
The Future of Directional Airborne Network (DAN): Toward an Intelligent, Resilient Cross-Layer Protocol Design with Low-Probability Detection (LPD) Considerations
Published in Fei Hu, Xin-Lin Huang, DongXiu Ou, UAV Swarm Networks, 2020
Fei Hu, Yu Gan, Niloofar Toorchi, Iftikhar Rasheed, Sunil Kumar, Xin-Lin Huang
The following two issues need to be solved (a code sketch of the injection step follows below):

(1) How to create a trapdoor: We need to expand the original DAN network training data space by injecting new samples with trapdoor perturbations. Then the injection function I(·) driven by the trapdoor (M, b, r) should be defined. Here M is the trapdoor mask, which specifies how much perturbation is added; b is the baseline random perturbation (which could be simple random noise); and r is the mask ratio, a small number (≪ 1) that specifies what fraction of samples may be perturbed.

(2) How to train the trapdoor DNN: The new model should achieve high classification accuracy on normal inputs while easily classifying a trapdoor-embedded sample into a special label pattern that serves as the "trapdoor signature". The optimization objectives that reflect the addition of backdoors need to be defined; a cross-entropy-based loss function may be used to measure the classification errors. After the trapdoor-added DNN is trained, the "neuron signatures" are stored in a database for future trapdoor matching.
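A minimal sketch of the injection step described above, assuming image samples as NumPy arrays; the blending form I(x) = (1 − M)·x + M·b, the function names, and the shapes are illustrative assumptions, not the chapter's implementation.

```python
import numpy as np

def inject_trapdoor(x, mask, baseline):
    """Apply the trapdoor perturbation I(x) to one input sample.

    mask (M) controls where/how much perturbation is added;
    baseline (b) is the random baseline perturbation pattern.
    """
    return (1.0 - mask) * x + mask * baseline

def expand_training_set(samples, mask, baseline, mask_ratio, rng=None):
    """Perturb roughly a fraction `mask_ratio` (r << 1) of the samples."""
    rng = rng or np.random.default_rng(0)
    out = []
    for x in samples:
        if rng.random() < mask_ratio:
            out.append(inject_trapdoor(x, mask, baseline))
        else:
            out.append(x)
    return out

# Illustrative usage with random data (all values are assumptions).
samples = [np.random.rand(32, 32) for _ in range(100)]
mask = np.zeros((32, 32)); mask[:4, :4] = 0.1   # small corner trapdoor mask M
baseline = np.random.rand(32, 32)               # random baseline perturbation b
augmented = expand_training_set(samples, mask, baseline, mask_ratio=0.05)
```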
Deep Learning for Drawing Insights from Patient Data for Diagnosis and Treatment
Published in Sandeep Reddy, Artificial Intelligence, 2020
Dinesh Kumar, Dharmendra Sharma
We perform end-to-end training of the VGG16 network on the HAM10000 dataset. As described in Section 6.3.2.1, the weights of the feature-extraction part of VGG16 are frozen (excluded from back-propagation updates), while the classifier layer weights remain updatable. The last layer of the classifier is replaced with a new layer of seven neurons to match the number of classes in the HAM10000 dataset. The model is trained for 20 epochs with a fixed learning rate of 10−4. Stochastic gradient descent and cross-entropy are used as the optimizer and loss function, respectively, with a weight decay of 10−4 and a momentum of 0.9. We use a batch size of four for training and one for testing. We implement our models in PyTorch 1.2.0 on a Dell OptiPlex i5 machine with 48 GB RAM and CUDA support, using an NVIDIA GeForce GTX 1050 Ti 4 GB graphics card.
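A rough PyTorch sketch of the transfer-learning setup described above (frozen VGG16 features, a new seven-class head, SGD with the stated hyperparameters, cross-entropy loss); the data loading is omitted and the dummy batch is an assumption for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG16 and freeze the convolutional feature extractor.
model = models.vgg16(pretrained=True)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer with a 7-class head (HAM10000 has 7 classes).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 7)

# SGD with the stated hyperparameters; cross-entropy as the loss function.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=1e-4, momentum=0.9, weight_decay=1e-4,
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of size 4 (224x224 RGB images).
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 7, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```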
A study on relationship between prediction uncertainty and robustness to noisy data
Published in International Journal of Systems Science, 2023
Hufsa Khan, Xizhao Wang, Han Liu
We can see in Equation (7) that the logarithm function (of any base) is strictly monotonically increasing, so with more events the entropy increases; in other words, uncertainty increases as entropy increases. On the other hand, cross entropy describes how close the predicted probability distribution is to the true distribution. In the classification task, to best classify the data we want to maximise the probability of the ground-truth class and thus minimise the loss. Here, it is worth noticing that cross entropy takes into account only the probability of the ground-truth class, while COT considers only the probability values of the non-ground-truth classes. In this study, in addition to cross entropy, COT is added as a regularisation term in the modified loss function. The objective of COT is to maximise the uncertainty of distinguishing those non-ground-truth classes, i.e. to maximise the entropy calculated on the basis of the normalised probability values of those non-ground-truth classes.
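A sketch of how a cross-entropy loss can be combined with a COT-style regulariser that maximises the entropy of the normalised non-ground-truth probabilities; the weighting factor, function name, and exact formulation are assumptions, not the paper's modified loss.

```python
import torch
import torch.nn.functional as F

def cot_regularised_loss(logits, labels, lam=1.0, eps=1e-12):
    """Cross-entropy plus a COT-style term that maximises the entropy
    of the normalised probabilities of the non-ground-truth classes."""
    ce = F.cross_entropy(logits, labels)

    probs = F.softmax(logits, dim=1)
    mask = F.one_hot(labels, probs.size(1)).bool()
    # Zero out the ground-truth class, then renormalise the remaining classes.
    comp = probs.masked_fill(mask, 0.0)
    comp = comp / (comp.sum(dim=1, keepdim=True) + eps)
    comp_entropy = -(comp * (comp + eps).log()).sum(dim=1).mean()

    # Maximising the complement entropy == subtracting it from the loss.
    return ce - lam * comp_entropy

# Illustrative usage with random logits for 4 samples and 5 classes.
logits = torch.randn(4, 5, requires_grad=True)
labels = torch.tensor([0, 3, 1, 4])
loss = cot_regularised_loss(logits, labels)
loss.backward()
```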
Leak detection in water distribution network using machine learning techniques
Published in ISH Journal of Hydraulic Engineering, 2023
Nishant Sourabh, P.V. Timbadiya, P. L. Patel
Boyce et al. (2002) state that the confusion matrix, commonly used in classification problems, compares the predictions of the model with the corresponding actual (observed) values. The comparison of the predicted and actual leaking pipes in the network is shown as a confusion matrix in Figure 7, which indicates that the accuracy of the ANN model is 91.2%, i.e. the leak is correctly predicted in 31 out of 34 pipes. Cross-entropy can be used as a loss function when optimizing classification models such as logistic regression, ANN and SVM; here, cross-entropy has been used as the objective function in testing and in determining the accuracy of the ANN and SVM classification models. Cross-entropy is commonly used in machine learning as a loss function and measures the difference between two probability distributions, which is the principle on which these classification models work, whereas the sum of squared errors is used for regression models.
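As a small illustration of the two objective functions mentioned above (cross-entropy for classification, sum of squared errors for regression) and of accuracy computed from hard predictions, the following sketch uses made-up values; it is not the paper's ANN/SVM pipeline.

```python
import numpy as np

# Hypothetical binary leak labels (1 = leaking pipe) and predicted probabilities.
y_true = np.array([1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1])

# Cross-entropy: classification objective comparing two probability distributions.
eps = 1e-12
cross_entropy = -np.mean(
    y_true * np.log(y_prob + eps) + (1 - y_true) * np.log(1 - y_prob + eps)
)

# Sum of squared errors: the analogous objective for regression models.
y_reg_true = np.array([3.1, 2.4, 5.0])
y_reg_pred = np.array([2.9, 2.8, 4.6])
sse = np.sum((y_reg_true - y_reg_pred) ** 2)

# Accuracy from hard predictions, as reported via the confusion matrix.
y_pred = (y_prob >= 0.5).astype(int)
accuracy = np.mean(y_pred == y_true)
print(cross_entropy, sse, accuracy)
```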
Cascaded Model (Conventional + Deep Learning) for Weakly Supervised Segmentation of Left Ventricle in Cardiac Magnetic Resonance Images
Published in IETE Technical Review, 2023
Taresh Sarvesh Sharan, Romel Bhattacharjee, Alok Tiwari, Shiru Sharma, Neeraj Sharma
For a set of events, cross-entropy is a measure of the difference between two probability distributions. Using cross-entropy as a loss function, instead of alternatives such as the sum of squares, leads to faster training and better generalization of the model. The loss computed by cross-entropy is the average per-pixel loss, and each per-pixel loss is calculated discretely; as a result, cross-entropy considers the loss in the micro sense. Cross-entropy is defined as [35]:

CE = -\frac{1}{m}\sum_{i=1}^{m}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right]

where m is the number of samples, y_i is the label of sample i, and p_i is the predicted probability value, p_i \in (0, 1). If there is a class imbalance in the training set, training becomes difficult. To overcome this, the loss function is combined with the Dice loss, a statistic developed in the 1940s to gauge the similarity between two samples; it was brought to the computer vision community by Milletari et al. [36] in 2016 for 3D medical image segmentation. Dice loss is defined as:

DL = 1 - \frac{2\sum_{i=1}^{m} y_i p_i}{\sum_{i=1}^{m} y_i + \sum_{i=1}^{m} p_i}

The segmented results were evaluated using well-known metrics, namely the Jaccard Index, Dice score, and directed Hausdorff distance.
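A compact sketch of a combined cross-entropy + Dice loss for binary segmentation, following the definitions above; the weighting of the two terms, the smoothing constant, and the tensor shapes are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def combined_ce_dice_loss(logits, targets, dice_weight=1.0, smooth=1e-6):
    """Binary segmentation loss: average per-pixel cross-entropy plus Dice loss.

    logits:  raw network outputs, shape (N, 1, H, W)
    targets: ground-truth masks in {0, 1}, same shape
    """
    probs = torch.sigmoid(logits)

    # Average per-pixel binary cross-entropy (the "micro" per-pixel loss above).
    ce = F.binary_cross_entropy_with_logits(logits, targets)

    # Dice loss per image: 1 - 2*sum(y*p) / (sum(y) + sum(p)).
    dims = (1, 2, 3)
    intersection = (probs * targets).sum(dims)
    dice = 1.0 - (2.0 * intersection + smooth) / (
        probs.sum(dims) + targets.sum(dims) + smooth
    )

    return ce + dice_weight * dice.mean()

# Illustrative usage with a random 2-image batch of 64x64 masks.
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
targets = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = combined_ce_dice_loss(logits, targets)
loss.backward()
```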