Lung Tumor Segmentation Using a 3D Densely Connected Convolutional Neural Network
Published in Mohan Lal Kolhe, Kailash J. Karande, Sampat G. Deshmukh, Artificial Intelligence, Internet of Things (IoT) and Smart Materials for Energy Applications, 2023
Shweta Tyagi, Sanjay N. Talbar
3D convolutional layers with kernel size 3 × 3 × 3 are used in both the down-sampling and up-sampling parts. The ReLU activation function is used in every convolutional layer except the last. ReLU is a rectified linear activation function that passes positive inputs through unchanged and outputs zero otherwise, which helps mitigate the vanishing gradient problem. Batch normalization is applied after each convolutional layer to counter internal covariate shift, which further helps reduce overfitting of the network. A dropout rate of 0.1 is used in the encoder, which also helps reduce overfitting. 3D max-pooling layers with pool size 2 × 2 × 2 down-sample the feature maps in the encoder part, and 3D up-sampling layers with the same pool size up-sample the features in the decoder part. The architecture of the proposed network is shown in Figure 3.4. An input image of size 256 × 256 × 16 is given to the network. The encoder block generates high-level features, which the decoder block up-samples to generate the final tumor masks. Skip connections between the encoder and decoder parts preserve high-level information about the tumor. Dense connections are provided between the convolutional layers, meaning that the feature maps from all prior layers are passed as input to each layer, and each layer's output feature maps are passed as input to all subsequent layers. This helps the network learn the feature maps more accurately and improves its performance.
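A minimal Keras sketch of one densely connected 3D encoder/decoder stage in the style described above. The filter counts, the number of layers per dense block, and the exact dropout placement are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a densely connected 3D encoder/decoder pair (Keras).
# Growth rate, block depth, and channel counts are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def dense_block_3d(x, num_layers=3, growth=16):
    """Each 3x3x3 conv receives the concatenation of all previous feature maps."""
    features = [x]
    for _ in range(num_layers):
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.Conv3D(growth, kernel_size=3, padding="same")(h)
        h = layers.BatchNormalization()(h)
        h = layers.Activation("relu")(h)
        features.append(h)
    return layers.Concatenate()(features)

inputs = layers.Input(shape=(256, 256, 16, 1))        # 256 x 256 x 16 input volume

# Encoder: dense block -> dropout (0.1) -> 2x2x2 max pooling
e1 = dense_block_3d(inputs)
e1 = layers.Dropout(0.1)(e1)
p1 = layers.MaxPooling3D(pool_size=2)(e1)

b = dense_block_3d(p1)                                 # bottleneck features

# Decoder: 2x2x2 up-sampling -> skip connection from encoder -> dense block
u1 = layers.UpSampling3D(size=2)(b)
u1 = layers.Concatenate()([u1, e1])                    # skip connection
d1 = dense_block_3d(u1)

# Last layer without ReLU: a sigmoid produces the voxel-wise tumour mask
outputs = layers.Conv3D(1, kernel_size=1, activation="sigmoid")(d1)
model = Model(inputs, outputs)
```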
Deep learning based indirect monitoring to identify bridge natural frequencies using sensors on a passing train
Published in Joan-Ramon Casas, Dan M. Frangopol, Jose Turmo, Bridge Safety, Maintenance, Management, Life-Cycle, Resilience and Sustainability, 2022
S.R. Lorenzen, H. Berthold, M. Rupp, L. Schmeiser, E. Apostolidi, J. Schneider, J. Brötzmann, C.-D. Thiele, U. Rüppel
In recent years, the field of machine learning, and deep learning in particular, has grown greatly in importance. In this context, neural networks have become a core element of modern machine learning. There are many different approaches for designing a neural network. In this paper, a fully connected neural network is used, shown in Figure 9. The model consists of six fully connected layers with descending numbers of neurons and an input size of 1,000 neurons. The Rectified Linear Unit (ReLU) is used as the non-linear activation function. In the last layer, linear regression is applied to predict the natural frequency. To achieve better generalisation, dropout layers are added before each fully connected layer; these deactivate a certain percentage of the neurons per training data vector by setting their weights to zero, which prevents overfitting to the training data set. The percentage of deactivated neurons is 20 per cent in each dropout layer. In addition, batch normalization layers are used to normalize each batch of the training data set, which improves the training result. The dropout and batch normalization layers are only active while training the model; they are deactivated for validation and testing.
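A sketch of such a fully connected regression network in Keras, following the description above. The exact neuron counts per hidden layer are not given in the excerpt and are assumed here; the input size of 1,000, the 20% dropout before each dense layer, the batch normalisation, and the linear output follow the text.

```python
# Fully connected regression network with dropout and batch normalisation (Keras sketch).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(1000,))])
for units in [512, 256, 128, 64, 32]:       # descending hidden sizes (assumed values)
    model.add(layers.Dropout(0.2))          # deactivates 20% of neurons, training only
    model.add(layers.Dense(units, activation="relu"))
    model.add(layers.BatchNormalization())  # normalises activations over each training batch
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1, activation="linear"))  # sixth dense layer: linear regression of the natural frequency

# Dropout and batch normalisation behave differently at inference:
# calling model(x, training=False) disables dropout and uses running statistics.
```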
Automated Epilepsy Seizure Detection from EEG Signals Using Deep CNN Model
Published in Rohit Raja, Sandeep Kumar, Shilpa Rani, K. Ramya Laxmi, Artificial Intelligence and Machine Learning in 2D/3D Medical Image Processing, 2020
Saroj Kumar Pandey, Rekh Ram Janghel, Archana Verma, Kshitiz Varma, Pankaj Kumar Mishra
ReLU Function: In the non-linear operation, ReLU is the abbreviation for Rectified Linear Unit. The function of ReLU is to map every negative value to zero while keeping positive values unchanged, so that training can be faster and more effective [33]. It applies an element-wise activation function [34, 35, 36], whose output is f(y) = 0 for y < 0, and f(y) = y for y ≥ 0.
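A small NumPy illustration of this element-wise definition, included here only as a sketch of the formula above.

```python
# Element-wise ReLU: f(y) = 0 for y < 0, f(y) = y for y >= 0.
import numpy as np

def relu(y):
    return np.maximum(0, y)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # negatives map to 0, positives pass through
```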
Evaluation of deep learning models for classification of asphalt pavement distresses
Published in International Journal of Pavement Engineering, 2023
Alex Apeagyei, Toyosi Elijah Ademolake, Mark Adom-Asamoah
A DCNN can have tens or hundreds of layers, with each layer learning to detect different features. The output of each convolved image is used as the input to the next layer. The filters or kernels initially detect very simple features, such as brightness or edges; deeper layers detect the more complex features that uniquely define the object. Both the ReLU and the pooling layers improve computational efficiency. By preserving positive values while mapping negative values to zero, ReLU permits quicker and more effective training. A pooling layer performs non-linear down-sampling, reducing the number of parameters that the network needs to learn. Flattening layers convert the network's 2-dimensional spatial features into a 1D vector of image-level features for image classification purposes. SoftMax provides probabilities for each category in the dataset (Bengio et al. 2013, 2015, Schmidhuber 2015).
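A minimal Keras sketch of the layer pipeline described above: stacked convolutions with ReLU, pooling for down-sampling, a flattening layer, and a softmax classifier. The filter counts, input size, and number of classes are illustrative assumptions, not the configuration evaluated in the paper.

```python
# Conv -> ReLU -> pool -> flatten -> softmax pipeline (Keras sketch).
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 5                                               # assumed number of distress categories
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),  # early layers: simple features (edges, brightness)
    layers.MaxPooling2D(2),                                   # non-linear down-sampling
    layers.Conv2D(64, 3, padding="same", activation="relu"),  # deeper layers: more complex, object-specific features
    layers.MaxPooling2D(2),
    layers.Flatten(),                                         # 2D spatial features -> 1D feature vector
    layers.Dense(num_classes, activation="softmax"),          # probabilities for each category
])
```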
Investigation and prediction of hybrid composite leaf spring using deep neural network based rat swarm optimization
Published in Mechanics Based Design of Structures and Machines, 2023
Rohit Raghunath Ghadge, S. Prakash
In this work, prediction is performed with both a DNN and an ANN using RSO-optimized weight values. The DNN-based prediction is carried out with different activation functions, namely tanh, sigmoid and ReLU, while the radial basis function (RBF) is used as the activation function for the ANN. The training accuracies obtained with the different activation functions are compared in Figure 13(a,b). The RBF function is used only in a neural network with a single hidden layer and yields a linear output. Although its training speed is similar to that of the other activation functions, its prediction accuracy is lower on complex datasets (Stoffel et al. 2020). The graphs show that ReLU achieves better accuracy with lower training loss than the other activation functions, namely tanh, sigmoid and RBF. This is because not all neurons are activated at the same time when ReLU is used, whereas with the hyperbolic tangent and sigmoid activation functions all neurons are activated simultaneously, making the activation computationally intensive. ReLU is also more computationally efficient because of its simple operation, max(0, x), whose derivative is trivial to compute. Unlike tanh and sigmoid, it does not perform exponential operations, which are computationally intensive and slow convergence.
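A short NumPy illustration of this comparison: ReLU is a simple threshold that leaves negative inputs exactly at zero (inactive neurons), while sigmoid and tanh require exponentials and keep essentially every neuron active. This is only a sketch of the general point, not the authors' experiment.

```python
import numpy as np

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])

relu = np.maximum(0, x)              # many exact zeros -> sparse activation, cheap to compute
sigmoid = 1.0 / (1.0 + np.exp(-x))   # exponential; all outputs strictly between 0 and 1
tanh = np.tanh(x)                    # exponential-based; non-zero output for any non-zero input

print(relu, sigmoid, tanh)
```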
An automated liver tumour segmentation and classification model by deep learning based approaches
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Sayan Saha Roy, Shraban Roy, Prithwijit Mukherjee, Anisha Halder Roy
The VGG16-based model’s output layer consists of 4096 CT image features. These 4096 features were used as inputs to the ANN’s input layer. In an ANN, the activation function of a node determines that node’s output based on its inputs (https://machinelearningmastery.com/choose-an-activation-function-for-deep-learning/). The Rectified Linear Unit (ReLU) is one of the most frequently used non-linear activation functions in neural network models. For any negative input it returns 0, and for any positive input it returns the same positive value; it is defined as f(x) = max(0, x). The sigmoid, also known as the logistic function, is frequently used as an activation function in ANN models. It is defined as f(x) = 1 / (1 + e^(−x)).
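A minimal NumPy sketch of the two activation functions defined above, applied in a toy ANN head that takes a 4096-dimensional feature vector as input. The hidden layer size, the single sigmoid output unit, and the random weights are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)            # f(x) = max(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # f(x) = 1 / (1 + e^(-x))

features = np.random.rand(1, 4096)                      # stand-in for one VGG16 feature vector
hidden = relu(features @ np.random.randn(4096, 128))    # assumed hidden layer of 128 ReLU units
output = sigmoid(hidden @ np.random.randn(128, 1))      # assumed single sigmoid output unit
```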