Neural Network Survival Analysis
Published in Prabhanjan Narayanachar Tattar, H. J. Vaman, Survival Analysis, 2022
Prabhanjan Narayanachar Tattar, H. J. Vaman
The first version of the neural network model is the simple perceptron. In this network, the input layer consists of neurons representing the covariates, arc weights connect the input layer to the output layer, and the output layer consists of a single neuron representing the output variable. An activation function is applied to the accumulated weighted inputs. The perceptron can learn linear patterns but cannot handle nonlinear ones; we will see later in the chapter that multiple layers are what allow a neural network to handle nonlinear data. The construction of a neural network relies heavily on the notion of activation functions, and the sigmoid activation function in particular has led to considerable research on improving the application of neural networks to classification problems. Technically, it is possible to have multiple outputs in a neural network with the same architecture. In survival analysis, this flexibility translates into one output for the time to event and a second output for the event indicator; however, most software implementations do not support such a task.
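The perceptron's linear limitation can be made concrete with a short sketch (illustrative code, not from the chapter; the learning rate, epoch count, and toy data are assumptions). A single neuron with a step activation learns the linearly separable AND pattern, whereas no choice of weights would let the same model solve a nonlinear pattern such as XOR:

```python
# Minimal perceptron sketch: a single neuron with a step activation,
# trained with the classic perceptron learning rule. It converges on the
# linearly separable AND pattern; on XOR it never would.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Perceptron learning rule on inputs X and 0/1 labels y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # step activation
            w += lr * (yi - pred) * xi          # update only on mistakes
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0, 0, 0, 1])                  # linearly separable target
w, b = train_perceptron(X, y_and)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```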
Machine Learning for Disease Classification: A Perspective
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Jonathan Parkinson, Jonalyn H. DeCastro, Brett Goldsmith, Kiana Aran
Recent years have witnessed dramatic growth in the diversity of neural network model structures described in the literature. The simplest structure is the multi-layer perceptron (MLP), sometimes also called a fully connected neural network. It consists of a sequential arrangement of layers; the first layer processes the input, while each subsequent layer processes the output of the previous layer. Each layer follows the general form y = G(Wx + b), where G is a nonlinear function selected from a variety of "activation functions", W is a weight matrix, x is an input column vector, and b is a vector of bias terms (analogous to a y-intercept). Common activation functions in modern neural networks include the sigmoid function, the rectified linear unit (ReLU), and the softmax function (Goodfellow et al., 2016). The term deep learning usually refers to neural networks with multiple layers. In many neural network architectures, the output of a layer is normalized before being passed to the next layer, which has been shown to speed up convergence during training and can reduce the risk of overfitting (Ioffe & Szegedy, 2015).
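The layer equation y = G(Wx + b) can be sketched directly in NumPy (a minimal illustration; the layer sizes and random weights are assumptions, not taken from the chapter):

```python
# Two MLP layers applied in sequence, each of the form y = G(Wx + b),
# with G = ReLU for the hidden layer and G = softmax for the output.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=4)                           # input column vector (4 features)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: weights W, biases b
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)    # output layer (3 classes)

h = relu(W1 @ x + b1)              # hidden layer processes the input
y = softmax(W2 @ h + b2)           # next layer processes the previous output
print(y.sum())                     # softmax outputs sum to 1
```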
Neural Networks for Medical Image Computing
Published in K. Gayathri Devi, Kishore Balasubramanian, Le Anh Ngoc, Machine Learning and Deep Learning Techniques for Medical Science, 2022
V.A. Pravina, P.K. Poonguzhali, A Kishore Kumar
Classifying medical images is one of the most important problems in medical diagnosis, and it requires a good classification algorithm to help detect diseases. A coding network with a multilayer perceptron has been used to combine high-level features with traditional features [10]. The high-level features are obtained using a deep convolutional neural network, and a multilayer perceptron is trained to map the features into the feature space. ReLU is used as the activation function, and the multilayer perceptron comprises a fully connected layer and a softmax layer. The model was applied to a skin lesion dataset to classify skin tumors into melanoma and nevus; it acquired good discriminative features for medical images and achieved 90.1% accuracy.
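The fusion idea in [10] can be sketched as follows (a hedged illustration, not the authors' implementation: feature dimensions, weight scales, and names are made up). Deep CNN features and traditional hand-crafted features are concatenated, then passed through a fully connected layer with ReLU and a softmax layer:

```python
# Sketch: concatenate deep and traditional features, classify with a
# small MLP head (fully connected layer + ReLU, then softmax).
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
deep_feats = rng.normal(size=128)   # e.g. from a pretrained CNN (assumed size)
trad_feats = rng.normal(size=16)    # e.g. color/texture descriptors (assumed size)
x = np.concatenate([deep_feats, trad_feats])              # feature fusion

W1, b1 = rng.normal(size=(32, 144)) * 0.1, np.zeros(32)   # fully connected layer
W2, b2 = rng.normal(size=(2, 32)) * 0.1, np.zeros(2)      # 2 classes

probs = softmax(W2 @ relu(W1 @ x + b1) + b2)              # class probabilities
label = ["melanoma", "nevus"][int(probs.argmax())]
```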
Advancing cervical cancer diagnosis and screening with spectroscopy and machine learning
Published in Expert Review of Molecular Diagnostics, 2023
Carlos A. Meza Ramirez, Michael Greenop, Yasser A. Almoshawah, Pierre L. Martin Hirsch, Ihtesham U. Rehman
Neural networks are computational models which can classify or organize datasets. Schematically, a neural network consists of different processing units which are connected and run in parallel. These processing units aim to mimic a neuron in a biological brain, where a series of decisions occurs. A neural network may consist of any number of layers, each with any number of processing units (Figure 2) [24,49,50]. The number of layers depends on the type of neural network, shallow or deep, whereas the number of processing units depends on the model input defined by the user. A shallow neural network may have between 5 and 10, but this number may differ [24,49,50]. The product of the mathematical process that occurs in each processing unit is classified by the activation function; in other words, the activation function is responsible for characterizing the data into classes, or groups [24,49,50]. Based on the authors' experience, in spectroscopy it is recommended to use the ReLU or sigmoid activation functions for binary or multiclass predictions; however, this is subject to the user's criteria.
A step towards the application of an artificial intelligence model in the prediction of intradialytic complications
Published in Alexandria Journal of Medicine, 2022
Ahmed Mustafa Elbasha, Yasmine Salah Naga, Mai Othman, Nancy Diaa Moussa, Hala Sadik Elwakil
Hojun L et al. [36] created a model capable of predicting intradialytic hypotension. They applied a deep learning model using data from 261,647 hemodialysis sessions, divided into training (70%), validation (5%), calibration (5%), and testing (20%) sets. Their artificial neural network model achieved an AUC of 0.94 (95% confidence interval, 0.94 to 0.94). They used three definitions of IDH. IDH-1 was defined as an intradialytic nadir systolic BP <90 mm Hg within 1 hour [2]. When IDH was defined as a decrease in systolic BP of ≥20 mm Hg and/or a decrease in mean arterial pressure of ≥10 mm Hg within 1 hour, the reference BPs were determined at the initial (IDH-2) or prediction (IDH-3) time point. Variables recorded included age, sex, vital signs, comorbidities, medications, and laboratory findings. Our prediction model, however, was not limited to intradialytic hypotension but examined intradialytic complications overall.
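The 70/5/5/20 split described above can be sketched as a simple random partition (illustrative only; the cited study's actual splitting procedure and random seed are not reported here):

```python
# Randomly partition session indices into training (70%), validation (5%),
# calibration (5%), and testing (~20%) sets.
import numpy as np

n_sessions = 261_647
rng = np.random.default_rng(42)            # assumed seed, for reproducibility
idx = rng.permutation(n_sessions)          # shuffle session indices

n_train = int(0.70 * n_sessions)
n_val = int(0.05 * n_sessions)
n_cal = int(0.05 * n_sessions)

train = idx[:n_train]
val = idx[n_train:n_train + n_val]
cal = idx[n_train + n_val:n_train + n_val + n_cal]
test = idx[n_train + n_val + n_cal:]       # remaining ~20%

print(len(train), len(val), len(cal), len(test))
```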
Practical foundations of machine learning for addiction research. Part I. Methods and techniques
Published in The American Journal of Drug and Alcohol Abuse, 2022
Pablo Cresta Morgado, Martín Carusso, Laura Alonso Alemany, Laura Acion
Among the most expressive models, artificial neural networks process the information from predictor variables with a combination of simpler, connected classifiers called artificial neurons. These neurons may be connected through successive layers stacked on top of each other, thus forming deep artificial networks (21). Each layer transforms the data, and the last layer produces the prediction. During training, the model's predicted values are compared with the actual observations, yielding a measurement of how near each prediction is to the observation. An optimization algorithm then carries out the learning process, adjusting how the data are transformed within each layer to reduce the error between prediction and observation, usually by slightly modifying the connections between artificial neurons (21). This whole process runs iteratively until the best performance is obtained. Neural networks can capture complex dependencies among variables. As a drawback, they are very prone to overfitting, even though many methods exist to mitigate it. Moreover, it is usually hard to understand each predictor's contribution to the outcome.
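The iterative predict-compare-adjust loop described above can be sketched for the simplest case, a single sigmoid neuron trained by gradient descent (a minimal illustration with synthetic data; deep networks repeat the same idea across layers via backpropagation):

```python
# Training loop sketch: predict, measure the error against observations,
# then adjust the connection weights to reduce that error, iteratively.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # 200 samples, 3 predictors
true_w = np.array([1.5, -2.0, 0.5])              # assumed "true" relationship
y = (X @ true_w > 0).astype(float)               # synthetic 0/1 observations

w = np.zeros(3)                                  # initial connections
lr = 0.5
for _ in range(500):                             # iterative learning process
    p = sigmoid(X @ w)                           # model predictions
    grad = X.T @ (p - y) / len(y)                # gradient of the log-loss
    w -= lr * grad                               # slightly adjust connections

acc = ((sigmoid(X @ w) > 0.5) == y).mean()       # training accuracy
```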