Reliable Biomedical Applications Using AI Models
Published in Punit Gupta, Dinesh Kumar Saini, Rohit Verma, Healthcare Solutions Using Machine Learning and Informatics, 2023
Shambhavi Mishra, Tanveer Ahmed, Vipul Mishra
Feed-forward neural networks are also known as fully connected networks. Early architectures were based on feed-forward models that generally consisted of an input layer, an output layer, and a few hidden layers. Nowadays, the number of layers in deep learning architectures has grown considerably; networks with many hidden layers are referred to as deep neural networks (DNNs). Further, there are various variants of the DNN model, such as convolutional neural networks (CNNs), designed primarily for image and video data. In recent years, CNNs have gained popularity due to their parameter sharing and their ability to learn complex patterns from data; CNNs with more than 150 layers have been trained successfully and have achieved superior performance on benchmark datasets [30]. Recurrent neural networks (RNNs), designed to handle time-varying sequence data, and generative models, which synthesize new data, are other variants of deep learning models.
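As a minimal illustration of the fully connected architecture described above (a sketch only, not any specific model from the literature), a forward pass through a stack of dense layers can be written in plain Python; the layer sizes and random weights here are arbitrary placeholders:

```python
import random

def relu(x):
    # Element-wise rectified linear activation used between hidden layers
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # One fully connected layer: y = Wx + b
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # Apply each (weights, biases) layer; ReLU on all but the final layer
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

def init(n_out, n_in):
    # Random weight matrix and zero biases for a layer of n_out units
    return ([[random.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
# A tiny 3-input -> 4-hidden -> 2-output network
net = [init(4, 3), init(2, 4)]
print(forward([0.5, -0.2, 0.1], net))
```

Adding more `init(...)` entries to `net` deepens the network in exactly the sense the passage describes: a DNN is this same structure with many more hidden layers.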
Artificial Intelligence is Revolutionizing Cancer Research
Published in K. Gayathri Devi, Kishore Balasubramanian, Le Anh Ngoc, Machine Learning and Deep Learning Techniques for Medical Science, 2022
B. Sudha, K. Suganya, K. Swathi, S. Sumathi
Many published studies have shown that deep learning-based models can diagnose cancer earlier and identify cancer subtypes directly from histopathologic and other medical images. Given enough computing power, deep neural networks (DNNs) are potent algorithms that can be applied to large images from biopsies or surgical resections. These architectures have demonstrated their ability to classify images, such as determining whether or not a digitized stained slide contains cancer cells (Bejnordi et al. 2017; Khosravi et al. 2018; Liu et al. 2019; Al-Haija and Adebanjo 2020; Li et al. 2017; Korbar et al. 2017; Coudray et al. 2018; Iizuka et al. 2020; Campanella et al. 2019). The success of DNNs is not limited to histopathological images; it extends to other medical images acquired by non-invasive procedures, such as CT scans, MRIs, and mammograms, and even to photographs of suspected lesions (Esteva et al. 2017).
Artificial Intelligence Based COVID-19 Detection using Medical Imaging Methods: A Review
Published in S. Prabha, P. Karthikeyan, K. Kamalanand, N. Selvaganesan, Computational Modelling and Imaging for SARS-CoV-2 and COVID-19, 2021
M Murugappan, Ali K Bourisly, Palani Thanaraj Krishnan, Vasanthan Maruthapillai, Hariharan Muthusamy
Wang et al. have developed a fully functional deep-learning model for COVID-19 detection using a large number of chest CT scan images collected from six cities in China (Wang et al. 2020). Chest CT scans from a total of 5,372 subjects were used (COVID-19: 1,266 subjects; CT-EGFR (epidermal growth factor receptor): 4,106 subjects). Two deep-learning networks, DenseNet-121 and COVID-19Net, are used for extracting the lung area from the CT scans and for COVID-19 diagnosis, respectively. Transfer learning is used to extract 64-dimensional deep-learning features from DenseNet-121, which are combined with clinical features (sex, age, and comorbidity) to develop a multivariate Cox proportional hazards (CPH) model predicting the chance that a patient will need a long hospital stay to recover. The performance of the deep neural networks is assessed through the area under the curve (AUC); the maximum AUC achieved is 0.90 for training and 0.86 for testing. In addition, the researchers used deep-learning visualization algorithms to identify the lung regions most commonly affected in COVID-19 patients.
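The AUC metric used to assess these networks has a simple probabilistic reading: it is the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A small self-contained sketch (with made-up labels and scores, not data from the study) computes it directly from that definition:

```python
def auc(labels, scores):
    # Area under the ROC curve via the Mann-Whitney U statistic:
    # fraction of (positive, negative) pairs where the positive is
    # scored higher; ties count half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = COVID-19 positive, 0 = negative; scores are
# hypothetical model outputs.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(auc(labels, scores))  # 8 of 9 pairs ranked correctly
```

A perfect classifier scores every positive above every negative (AUC = 1.0), while random scoring gives AUC ≈ 0.5, which puts the reported 0.90/0.86 values in context.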
Body activity grading strategy for cervical rehabilitation training
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2023
Grading of body activities is frequently seen in sports such as gymnastics, diving, figure skating, and synchronized swimming, where an athlete's performance is evaluated based on the accuracy of body postures and movements, difficulty level, synchronization, interactions with apparatus, and overall gracefulness. To realize machine-based evaluation, the strategy has to be adapted to deal with complicated classification and reasoning. In Zhu et al. (2019), Liao et al. (2020), and Zhang et al. (2020), deep neural networks and convolutional neural networks are used to discover representative features from a sequence of performance measurements; a scoring function is then applied to convert those features into quality scores. These algorithms provide generalized frameworks for processing different rehabilitation exercises and achieve good classification and evaluation results. However, they differ from human graders in that the grading rule is hidden inside the framework and wrong motion patterns are not presented to the user. Moreover, deep learning is essentially a computation-intensive method that requires a large amount of training data and ground-truth evaluation results, making it unsuitable for mobile apps that require fast development iterations and frequent content changes.
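For contrast with the learned scoring functions cited above, a transparent rule-based alternative (a hypothetical sketch, not the method of any cited work) can map joint-angle deviations from a reference posture directly to a 0-100 quality score, so the grading rule stays visible to the user:

```python
def quality_score(measured, template, tolerance=15.0):
    # Penalize each joint angle's deviation from the reference posture
    # relative to an assumed tolerance (degrees), then average the
    # per-joint scores and scale to 0-100.
    per_joint = []
    for m, t in zip(measured, template):
        error = abs(m - t)
        per_joint.append(max(0.0, 1.0 - error / tolerance))
    return 100.0 * sum(per_joint) / len(per_joint)

# Hypothetical reference posture vs. a slightly-off attempt (degrees)
template = [45.0, 10.0, 0.0]
attempt = [40.0, 12.0, 3.0]
print(round(quality_score(attempt, template), 1))
```

Unlike the deep-learning frameworks, this kind of explicit rule can report which joint caused the lost points, at the cost of hand-tuning the template and tolerance for every exercise.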
Fused CNN-LSTM deep learning emotion recognition model using electroencephalography signals
Published in International Journal of Neuroscience, 2023
To learn different levels of data abstraction, deep neural networks decompose processing into multiple layers. A hybrid architecture combining a convolutional neural network (CNN) and long short-term memory (LSTM) has been implemented for the classification of emotions (valence and arousal). The hybrid model is composed of alternating CNN layers and several recurrent LSTM layers; the LSTM-RNN has been adopted due to the highly non-linear and temporal nature of EEG signals. To make EEG data suitable for CNN analysis, the data are formulated as 2D frames (time × channel) associated with a specific trial. The spatial and temporal features of the input EEG signal are extracted by the CNN and the LSTM, respectively. The fully connected layer converts the output of the previous layers into the number of emotion classes, as shown in Figure 3. The softmax layer computes the probability of each emotion over all target classes, and finally the output layer computes the cost function. The softmax function is defined as softmax(z_i) = exp(z_i) / Σ_j exp(z_j), where z_i is the logit for class i.
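The softmax and cost computation performed by the final layers can be sketched in a few lines of plain Python (the four logits below are arbitrary placeholders, one per hypothetical emotion class):

```python
import math

def softmax(z):
    # Numerically stable softmax: subtract the max logit before
    # exponentiating so exp() cannot overflow.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target_index):
    # Cost computed by the output layer: negative log-probability
    # assigned to the true emotion class.
    return -math.log(probs[target_index])

logits = [2.0, 1.0, 0.1, -0.5]   # one logit per emotion class
probs = softmax(logits)
print([round(p, 3) for p in probs], round(cross_entropy(probs, 0), 3))
```

Because the exponentials are normalized by their sum, the outputs are guaranteed to be positive and sum to 1, which is what lets the network read them as class probabilities.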
ECG classification system based on multi-domain features approach coupled with least square support vector machine (LS-SVM)
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2023
Russel R. Majeed, Sarmad K. D. Alkhafaji
More recently, Al Alkeem et al. (2021) combined ECG, fingerprint, and facial images to identify individuals; ResNet50 and other deep learning models were employed to extract features from the three modalities. Hamza and Ayed (2020) identified individuals using ECG signals: three features, namely entropy, cepstral coefficients, and the zero-crossing rate (ZCR), were extracted, and an SVM was adopted to classify them; the PhysioNet database was used in the evaluation phase. Rabinezhadsadatmahaleh and Khatibi (2020) analyzed heartbeats from ECG to design an authentication model, developing an ensemble classification model that integrates a deep learning technique with an SVM and demonstrating the advantages of combining conventional machine learning models with deep neural networks. An average accuracy of 99.02, a FAR of 0.95, and a FRR of 1.02 were obtained.
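Of the hand-crafted features mentioned above, the ZCR is the simplest to make concrete: it is the fraction of consecutive sample pairs whose signs differ. A short sketch (the toy trace below is illustrative, not real ECG data):

```python
def zero_crossing_rate(signal):
    # Fraction of adjacent-sample pairs that straddle the zero
    # baseline -- one of the ECG features used for identification.
    crossings = sum(1 for a, b in zip(signal, signal[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)

# Toy baseline-corrected trace oscillating around zero
trace = [0.2, 0.5, -0.1, -0.4, 0.3, 0.6, -0.2]
print(zero_crossing_rate(trace))  # 3 crossings over 6 pairs -> 0.5
```

In practice the ECG would first be baseline-corrected so that zero corresponds to the isoelectric line; scalar features like this are then concatenated with the entropy and cepstral coefficients before being fed to the SVM.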