Artificial Intelligence and Networking
Published in Vikas Kumar Jha, Bishwajeet Pandey, Ciro Rodriguez Rodriguez, Network Evolution and Applications, 2023
Vikas Kumar Jha, Bishwajeet Pandey, Ciro Rodriguez Rodriguez
The technological concepts of AI, ML, ANN, and deep learning are closely interrelated, with no clear-cut demarcations between them. Deep learning can be thought of as one popular technique within the broader field of machine learning. It is the machine learning technique that takes inspiration from the functioning of the human brain, with learning models built on top of ANNs. The term “deep” in deep learning refers to the multiple layers in the neural network. Deep learning uses large neural networks with many layers of processing units, relying on advanced computing power and improved training techniques to learn complex patterns in large amounts of data. It is used for solving complex problems where the data are huge, diverse, and less structured. Common applications of deep learning include image recognition, speech recognition, automated driving, industrial automation, medical research, aerospace, and defence.
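The idea that “deep” simply means stacking several layers of processing units can be illustrated with a minimal sketch. The layer sizes, weights, and tanh activation below are illustrative assumptions, not taken from the chapter:

```python
import math
import random

random.seed(0)

def dense(inputs, weights, biases):
    """One fully connected layer followed by a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def init_layer(n_in, n_out):
    """Small random weights for a layer of n_out units."""
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

# A "deep" network is just several such layers stacked: here two hidden
# layers of 4 units each, followed by a single output unit.
layers = [init_layer(3, 4), init_layer(4, 4), init_layer(4, 1)]

def forward(x):
    for w, b in layers:
        x = dense(x, w, b)
    return x

print(forward([0.1, -0.2, 0.3]))  # one scalar in (-1, 1), since tanh bounds it
```

Each extra `init_layer` entry adds depth; training would adjust the weights, which this forward-pass sketch omits.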
Use of Machine Learning in Healthcare
Published in Punit Gupta, Dinesh Kumar Saini, Rohit Verma, Healthcare Solutions Using Machine Learning and Informatics, 2023
Machine learning algorithms are broadly split into supervised and unsupervised learning. Both categories comprise a variety of algorithms that are used to implement mathematical models. Supervised learning uses labelled training data and mainly focuses on classification and regression problems. Some examples of supervised learning algorithms are random forest, decision trees, naïve Bayes models, and support vector machines (SVM). Unsupervised learning uses unlabelled data for model training. The most common algorithms for unsupervised learning are K-means clustering and deep learning.
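The unsupervised case can be made concrete with a toy K-means run: the algorithm receives only unlabelled numbers and alternates between assigning points to the nearest centroid and recomputing each centroid as its cluster mean. This one-dimensional sketch is an illustration, not from the chapter:

```python
def kmeans_1d(points, k=2, iters=10):
    """Plain k-means on scalar data: alternate assignment and mean update."""
    centroids = points[:k]                         # crude initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # New centroid = mean of its cluster (keep old one if cluster emptied).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]   # two obvious groups, no labels given
print(kmeans_1d(data))                     # one centre per group
```

No labels are ever supplied; the grouping emerges from the data alone, which is exactly what distinguishes this from the supervised algorithms listed above.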
High-Performance Computing and Its Requirements in Deep Learning
Published in Sanjay Saxena, Sudip Paul, High-Performance Medical Image Processing, 2022
Biswajit Jena, Gopal Krishna Nayak, Sanjay Saxena
Reinforcement learning [37] is the paradigm with which robots are programmed, and capturing the entire subject in a small space is difficult; hence, a basic outline of it is important to understand why it can be regarded as semi-supervised. Unlike the approaches discussed in the preceding subsections, there are no categories to classify and nothing to predict; instead, we give the robot a “reward” to increase or decrease the probability of a certain action it takes. The robot may or may not visit the current state again, but we must specify the rewards for each action that can be taken in a particular state. Because we never say exactly which action to take in a given state or which path to follow, and provide only rewards on which the robot must act, the setting becomes semi-supervised. Consider a dog: when you throw a ball and it comes back with a stick, you give no reward, but when it picks up the ball and comes back, you give it a reward, and it will understand that whenever the ball is thrown, it should fetch the ball. This simple idea is a very basic example of how reinforcement learning in robots works, and there are many more algorithms in reinforcement learning, including deep learning algorithms such as Deep Q-Learning. Interdisciplinary research is ongoing between reinforcement learning and other fields of machine learning.
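The reward-only supervision described above can be sketched with tabular Q-learning (the table-based ancestor of Deep Q-Learning) on a tiny one-dimensional world. The environment, reward of 1 at the goal, and hyper-parameter values are all illustrative assumptions:

```python
import random

random.seed(1)

# A tiny 1-D world: states 0..4, start at state 0, reward only at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # move left / move right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly take the best-known action, sometimes explore.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: nudge Q towards reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy action per non-goal state after training (1 = "move right").
print([max(range(2), key=lambda i: Q[s][i]) for s in range(GOAL)])
```

Note that the robot is never told “move right”; it is only rewarded on arrival, and the preference for moving right emerges from the updates, mirroring the dog-and-ball example.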
A self-validation Noise2Noise training framework for image denoising
Published in The Imaging Science Journal, 2023
Asavaron Limsuebchuea, Rakkrit Duangsoithong, Jermphiphut Jaruenpunyasak
Unsupervised learning is a technique that allows deep learning networks to be trained with little paired or even unpaired data. This approach has the potential to be used for single-image and blind denoising problems. The various unsupervised techniques for single-image and blind denoising can be classified according to which part of the learning process is manipulated. Examples include Noise2Self (N2S) [31], Noise2Void (N2V) [30], Self2Self (S2S) [32], and the complex-valued deep CNN [34]. These are self-learning methods that manipulate convolution kernels in the network during training, for example through dropout techniques, blind-spot kernels, or gradient-like features, and use the same noisy image as both input and validation target; they constitute the network-manipulation type of self-supervised framework. Another type is data-domain manipulation, in which the existing noisy image is altered to create new training pairs from itself. Recorrupted-to-Recorrupted (R2R) [33] is of this type: it generates a validation target by manipulating the noisy input so that the network learns to transform the input into another domain. However, using a single network for denoising may not always produce the best results because of variations in the input noise environment. Sequential ensemble learning [58] is therefore designed to denoise images using multiple denoising networks in order to achieve the best possible results. This can involve using multiple unsupervised methods or a combination of denoising algorithms, but it can also be time-consuming owing to the use of multiple networks.
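The data-domain manipulation idea behind R2R can be sketched for the Gaussian-noise case: from a single noisy observation, fresh noise is added to form a new input and subtracted to form its target, so a denoiser can be trained with no clean image at all. The signal values and noise level below are toy assumptions, and this is a simplified illustration of the recorruption step, not the paper's full method:

```python
import random

random.seed(42)

def recorrupt(noisy, sigma=0.1):
    """R2R-style pair generation (Gaussian case, simplified): from one noisy
    observation y, build a new input y + z and target y - z with fresh noise z,
    giving a training pair that needs no clean reference image."""
    z = [random.gauss(0.0, sigma) for _ in noisy]
    new_input = [y + e for y, e in zip(noisy, z)]
    new_target = [y - e for y, e in zip(noisy, z)]
    return new_input, new_target

y = [0.5, 0.52, 0.48, 0.51]          # a toy noisy "image", flattened to 1-D
x_in, x_tgt = recorrupt(y)
# Averaging the recorrupted pair recovers the original noisy observation,
# since the fresh noise z cancels: ((y + z) + (y - z)) / 2 = y.
print([round((a + b) / 2, 6) for a, b in zip(x_in, x_tgt)])
```

Training a network to map `x_in` to `x_tgt` over many such pairs is what lets the input itself supervise the denoiser.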
Retinal blood vessels segmentation using Wald PDF and MSMO operator
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Sushil Kumar Saroj, Rakesh Kumar, Nagendra Pratap Singh
Figure 16 shows the segmented images of the 1-test.tif image produced by the different methods. Comparing the quality of the segmented images in Figure 16, we find that the segmented image of the proposed method has better quality and is very similar to the ground-truth image. Considering the overall performance (precision, specificity, accuracy, MCC, MSE, and MAD), the proposed method is found to be better than several methods in this field. Moreover, supervised methods require prior knowledge in the form of labelling, unsupervised methods require more time to segment vessels, and deep learning methods require a very large amount of data to outperform other techniques and are extremely expensive to train owing to their complex data models. In this context, the performance of the proposed method becomes even more significant. The improved performance of the proposed method is due to the close match between the Wald matched filter kernel and the vessel intensity profile (as Figures 7 and 8 show, the two curves are very similar) and to the efficient pre-processing method used.
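Why a kernel that matches the intensity profile helps can be seen with a generic one-dimensional matched filter: cross-correlating a signal with a kernel shaped like the sought feature produces its strongest response where that feature occurs. The kernel and signal below are toy stand-ins, not the paper's Wald kernel or a real vessel profile:

```python
def matched_filter(signal, kernel):
    """Cross-correlate a 1-D signal with a kernel (zero-padded at the edges)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

kernel = [-1.0, 2.0, -1.0]                 # crude line-like profile
signal = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]    # a single vessel-like bump at index 2
resp = matched_filter(signal, kernel)
print(resp.index(max(resp)))               # strongest response at the bump
```

The closer the kernel's shape is to the true profile, the sharper this peak becomes, which is the intuition behind matching the filter kernel to the vessel intensity profile.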
Classification of brain tumours from MR images with an enhanced deep learning approach using densely connected convolutional network
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
R. Meena Prakash, R. Shantha Selva Kumari, K. Valarmathi, K. Ramalakshmi
Transfer learning is a popular approach in deep learning in which pre-trained models are retrained on new data so that knowledge transfer occurs. This reduces training time and increases the performance of the CNNs. Much research is being carried out in the domain of transfer learning, especially for medical image classification, where only limited data sets are available. In the proposed method, transfer learning with DenseNet is implemented for brain tumour classification. To improve the classification accuracy, optimisation of the CNN hyper-parameters with random search is introduced. The contributions of the article include (i) implementing brain tumour classification using transfer learning with DenseNet, (ii) optimising the CNN model by tuning the hyper-parameters to improve the network performance, (iii) comparing the performance of the DenseNet and VGG16 architectures with and without tuning of hyper-parameters, and (iv) comparing the performance of DenseNet with SVM, AlexNet, and state-of-the-art methods.
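The random-search step can be sketched as below: sample hyper-parameter settings at random, score each, and keep the best. The search space and the `evaluate` surrogate are invented placeholders; in the article's setting, `evaluate` would train the DenseNet model and return its validation accuracy:

```python
import random

random.seed(7)

# Hypothetical search space; the article's actual hyper-parameters may differ.
space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size":    [16, 32, 64],
    "dropout":       [0.2, 0.3, 0.5],
}

def evaluate(cfg):
    """Toy surrogate score; a real run would train the CNN and validate it."""
    return -abs(cfg["learning_rate"] - 1e-3) - abs(cfg["dropout"] - 0.3)

best_cfg, best_score = None, float("-inf")
for trial in range(20):
    # Sample one value per hyper-parameter, independently and uniformly.
    cfg = {name: random.choice(values) for name, values in space.items()}
    score = evaluate(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_cfg)
```

Unlike grid search, random search covers a space of many hyper-parameters with a fixed trial budget, which is why it is a common choice when each evaluation means a full CNN training run.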