Deep Learning Networks for Automated Scoring Applications
Published in Duanli Yan, André A. Rupp, Peter W. Foltz, Handbook of Automated Scoring, 2020
Historically, AlexNet (Krizhevsky et al., 2012) was one of the first – and most influential – deep CNNs and won first place in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012. Two years later, the winner of the ILSVRC 2014 competition was GoogLeNet (Szegedy et al., 2014), a CNN that was 22 layers deep yet reduced the number of parameters from AlexNet's 60 million to four million. The most recent major improvement of CNNs is a novel architecture called the residual neural network (ResNet) (He et al., 2016a, b), which introduced techniques such as skip connections and heavy use of batch normalization and won first place in ILSVRC 2015; this framework contains 152 layers and surpassed human-level performance on the ImageNet dataset.
Deep Learning in Brain Segmentation
Published in Saravanan Krishnan, Ramesh Kesavan, B. Surendiran, G. S. Mahalakshmi, Handbook of Artificial Intelligence in Biomedical Engineering, 2021
Residual neural networks (ResNets) have been proposed as a solution to the vanishing gradient problem. ResNets utilize skip connections between neighboring layers. A skip connection takes the input of a block and adds it directly to the block's output, bypassing the block's computation along that path. The skip connection mitigates the vanishing gradient problem by letting gradients propagate through the identity function. Skip connections can be incorporated into almost any CNN structure and have proven effective in improving CNN training.
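The addition described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the chapter: `dense` stands in for an arbitrary layer F, and `residual_block` computes F(x) + x so that, even when the layer itself contributes nothing, the identity path still carries the input (and, during backpropagation, the gradient) through unchanged.

```python
import numpy as np

def dense(x, W, b):
    """A plain fully connected layer with ReLU activation (stand-in for F)."""
    return np.maximum(0.0, W @ x + b)

def residual_block(x, W, b):
    """Residual block: the block's input is added to its output,
    so the network learns F(x) + x rather than F(x) alone."""
    return dense(x, W, b) + x  # skip connection (identity shortcut)

x = np.array([1.0, -2.0, 3.0, 0.5])
W = np.zeros((4, 4))  # a degenerate layer whose output is all zeros...
b = np.zeros(4)

y = residual_block(x, W, b)
# ...yet the identity path still passes the input through unchanged
assert np.allclose(y, x)
```

Because the derivative of the identity path is exactly 1, the gradient reaching the block's input is the layer's own gradient plus 1, which is what keeps it from shrinking toward zero as depth grows.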
AI-Based Classification and Detection of COVID-19 on Medical Images Using Deep Learning
Published in Amit Kumar Tyagi, Ajith Abraham, Recurrent Neural Networks, 2023
The deep residual neural network ResNet-50 is used for image recognition and classification. ResNet-50 is a variant of the original ResNet architecture. It consists of 48 convolutional layers together with one max-pooling layer and one average-pooling layer. This deep learning model has proven to be highly effective for analysing visual images.
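The 48-convolution figure can be reconstructed from the commonly published bottleneck layout of ResNet-50; the stage sizes below (3, 4, 6, 3 blocks of three convolutions each) are the standard ones and are an assumption not stated in this excerpt.

```python
# Bottleneck stages of the standard ResNet-50: number of residual
# blocks per stage, each block containing three convolutional layers.
blocks_per_stage = [3, 4, 6, 3]
convs_per_block = 3

bottleneck_convs = sum(blocks_per_stage) * convs_per_block
print(bottleneck_convs)  # 48 convolutional layers in the residual stages

# Adding the initial 7x7 convolution and the final fully connected
# layer gives the 50 weighted layers the name refers to.
total_weighted_layers = bottleneck_convs + 1 + 1
print(total_weighted_layers)  # 50
```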
Ensemble Architecture for Prediction of Grading of Diabetic Retinopathy
Published in Cybernetics and Systems, 2022
Shruti Jain, Sanket Saxena, Shivam Sinha
A breakthrough CNN called AlexNet for ImageNet classification was proposed by Krizhevsky, Sutskever, and Hinton (2012). AlexNet is composed of five convolutional layers and three fully connected layers and was among the first to employ ReLUs, dropout, and training on multiple GPUs. Inception V1 (GoogLeNet) was introduced in 2014 and achieved a lower error rate than other available CNNs through its "network-in-network" concept. The Residual Neural Network (ResNet) architecture, which emerged in 2015, uses skip connections while preserving the depth of the model; it employs batch normalization and gated skip connections to support greater network depth. The Visual Geometry Group network (VGGNet) is a multilayered architecture introduced in 2015, valued for its increased network depth and simple, uniform design. Its limitations are addressed by the various derivatives of the Inception networks, such as the Inception-V3 model.
A real-time intelligent classification model using machine learning for tunnel surrounding rock and its application
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2023
Junjie Ma, Tianbin Li, Gang Yang, Kunkun Dai, Chunchi Ma, Hao Tang, Gangwei Wang, Jianfeng Wang, Bo Xiao, Lubo Meng
He et al. (2016) proposed the residual neural network (ResNet). ResNet introduces residual learning, which effectively alleviates the vanishing-gradient and network-degradation problems caused by increasing network depth; this speeds up the training of neural networks and significantly improves the generalisation ability and robustness of deep networks. In this work, 18-layer and 34-layer ResNets are employed to develop the surrounding rock classification model, given the number of available samples and the required model training speed. Figure 4 depicts the network architectures of ResNet18 and ResNet34.
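The two depths mentioned above follow directly from the per-stage residual-block counts of the standard basic-block ResNets; the configurations below are the commonly published ones, included here as an assumption rather than taken from the paper excerpt.

```python
# Per-stage residual-block counts of the standard basic-block ResNets
# (each basic block holds two 3x3 convolutional layers).
configs = {"ResNet18": [2, 2, 2, 2], "ResNet34": [3, 4, 6, 3]}

for name, blocks in configs.items():
    convs = sum(blocks) * 2   # two convolutions per basic block
    weighted = convs + 1 + 1  # + initial 7x7 conv + final FC layer
    print(name, weighted)     # ResNet18 -> 18, ResNet34 -> 34
```

The same counting scheme extends to the deeper bottleneck variants (ResNet-50/101/152), which swap the two-convolution basic block for a three-convolution bottleneck block.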
A modified version of GoogLeNet for melanoma diagnosis
Published in Journal of Information and Telecommunication, 2021
ResNet (He et al., 2016), the winner of the ImageNet 2015 competition, is a deeper variant of CNN models. To overcome the vanishing-gradient problem while going deeper with convolutions, it introduces shortcut connections between convolution blocks. This technique is what gives the architecture its name, Residual Neural Network. ResNet won the ImageNet 2015 competition with a top-5 error rate of 3.57%.