Integrated Photonics for Artificial Intelligence Applications
Published in Sandeep Saini, Kusum Lata, G.R. Sinha, VLSI and Hardware Implementations Using Modern Machine Learning Methods, 2021
Ankur Saharia, Kamal Kishor Choure, Nitesh Mudgal, Rahul Pandey, Dinesh Bhatia, Manish Tiwari, Ghanshyam Singh
The multilayer perceptron consists of a series of interconnected layers of neurons, as shown in Figure 15.2. The input is provided to an input layer, followed by one or more hidden layers, and the output is taken from the output layer. Neurons are connected by weighted links: each neuron computes the weighted sum of its inputs and passes it through an activation function to produce its output. Each neuron of a hidden layer is connected to every neuron of the previous and successive layers, as shown. Because information flows from one layer to the next successive layer, it is also called a feedforward neural network. The main purpose of the input layer is to pass the input to the other connected layers rather than to perform computation. Obtaining the correct output for a given input requires proper selection of the weights and the transfer function. Training data is required for the supervised learning of a multilayer perceptron [19].
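To make the forward computation concrete, here is a minimal sketch of an MLP forward pass in NumPy; the layer sizes, sigmoid activation, and random weights are illustrative assumptions, not taken from the chapter.

```python
# Minimal MLP forward pass: 3 inputs -> 4 hidden neurons -> 2 outputs.
# Sizes, activation, and weights are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4))   # hidden -> output weights
b2 = np.zeros(2)

def forward(x):
    # Each neuron outputs an activation of the weighted sum of its inputs;
    # the input layer only passes values through, performing no computation.
    h = sigmoid(W1 @ x + b1)      # hidden layer
    return sigmoid(W2 @ h + b2)   # output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```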
Neural Network Architectures
Published in Bogdan M. Wilamowski, J. David Irwin, Intelligent Systems, 2018
Different neural network architectures are widely described in the literature [W89,Z95,W96,WJK99,H99,WB01,W07]. Feedforward neural networks allow only one-directional signal flow, and most of them are organized in layers. An example of a three-layer feedforward neural network is shown in Figure 6.1. This network consists of three input nodes, two hidden layers, and an output layer. Typical activation functions are shown in Figure 6.2. These continuous activation functions allow for gradient-based training of multilayer networks. It is usually difficult to predict the required size of a neural network; often this is done by trial and error. Another approach is to start with a much larger network than required and to reduce its size by applying one of the pruning algorithms [FF02,FFN01,FFJC09].
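As an illustration of such continuous activation functions, the sketch below implements a unipolar sigmoid and a bipolar tanh together with their derivatives; the function names and the gain parameter k are assumptions for illustration, not the definitions in the chapter's Figure 6.2.

```python
# Continuous activation functions of the kind typically used for
# gradient-based training; names and gain k are illustrative.
import numpy as np

def unipolar_sigmoid(net, k=1.0):
    return 1.0 / (1.0 + np.exp(-k * net))

def bipolar_tanh(net, k=1.0):
    return np.tanh(k * net)

# Their derivatives are smooth, which is what makes gradient-based
# training of multilayer networks possible.
def unipolar_sigmoid_deriv(net, k=1.0):
    o = unipolar_sigmoid(net, k)
    return k * o * (1.0 - o)

def bipolar_tanh_deriv(net, k=1.0):
    return k * (1.0 - np.tanh(k * net) ** 2)
```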
Computer Vision Methodologies for Automated Processing of Camera Trap Data
Published in Yuhong He, Qihao Weng, High Spatial Resolution Remote Sensing, 2018
Joshua Seltzer, Michael Guerzhoy, Monika Havelka
A feedforward neural network is a set of interconnected nodes (inspired by neural connections in the brain) with directed, weighted connections. It can be used for probabilistic categorization of inputs, and its design has sometimes been inspired by cognitive science. Computational methods, notably stochastic gradient descent, exist to find weights such that the network is able to classify the instances in the training set; this process is referred to as training the neural network. A training set contains inputs (often images) with matching labels, or categories. During training, the network's connections are modified, based on its success rate in guessing the correct label, to improve its accuracy. In other words, neural networks learn by modifying the connections between the nodes in the network.
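The following is a minimal sketch of that training loop, assuming a single sigmoid unit, synthetic labelled data, and a fixed learning rate; a full feedforward network stacks many such units, but the per-example stochastic gradient descent update is the same idea.

```python
# Stochastic gradient descent on a single sigmoid unit with synthetic
# labelled data; all sizes and the learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                # inputs (often images in practice)
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # matching labels

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-(w @ xi + b)))  # predicted probability
        g = p - yi          # gradient of cross-entropy loss for this example
        # Modify the connections to improve accuracy on this example.
        w -= lr * g * xi
        b -= lr * g

acc = np.mean(((X @ w + b) > 0) == (y > 0.5))
print(f"training accuracy: {acc:.2f}")
```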
Resilient modulus descriptive analysis and estimation for fine-grained soils using multivariate and machine learning methods
Published in International Journal of Pavement Engineering, 2022
Chijioke Christopher Ikeagwuani, Donald Chimobi Nwonu, Chukwuebuka C. Nweke
An artificial neural network (ANN) is a biologically inspired computational network. It was proposed by McCulloch and Pitts (McCulloch and Pitts 1943) in 1943. An ANN mimics the roles of the brain and the nervous system using a node-neuron structure connected by synapses, which is used to create non-parametric models (Ranasinghe et al. 2017). In contrast to most conventional statistical models, the coefficients or parameters of these non-parametric models are not explicit; they are embedded implicitly in the models. Interestingly, there are a number of ANN types (Hanandeh et al. 2020, Rashid 2016), including the recurrent neural network, feedforward neural network, recursive neural network, and convolutional neural network. The feedforward neural network is currently the most widely used type. It can have either multiple layers or a single layer: networks with multiple layers are described simply as multi-layer perceptrons (MLP), while those with a single layer are called single-layer perceptrons (SLP) (Gunaydin 2009, Kuo et al. 2009). The MLP is used in this study.
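As a brief, hypothetical sketch of the MLP/SLP distinction (using scikit-learn and synthetic placeholder data, not the soil data from the study):

```python
# MLP vs. SLP: the MLP has hidden layers; the SLP maps inputs directly to
# the output, which with an identity activation reduces to a linear model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))      # placeholder predictors, not the soil data
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Multi-layer perceptron: one hidden layer of 10 neurons (illustrative).
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X, y)
print("MLP R^2:", mlp.score(X, y))

# Single-layer-perceptron analogue: no hidden layer, a direct linear map.
slp_like = LinearRegression().fit(X, y)
print("SLP-like R^2:", slp_like.score(X, y))
```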
An Artificial Neural Network for Predicting the Near-fault Directivity-pulse Period
Published in Journal of Earthquake Engineering, 2022
Nasrollah Eftekhari, Milad Kowsari, Hadi Sayyadpour
The MLP is capable of approximating any continuous function to an arbitrary degree of accuracy (Hornik, Stinchcombe, and White 1989). This feedforward neural network consists of multiple layers of neurons (i.e., an input layer, one or more hidden layers, and an output layer) that interact through weighted connections (McClelland, Rumelhart, and Research Group 1986). The network learns by correcting the weights of the neurons in response to the errors between the output and target values through a back-propagation learning algorithm (Goh 1995; Rumelhart, Hinton, and Williams 1985). In the back-propagation learning algorithm, the error computed at the output layer is propagated backward through the hidden layer(s) to the input layer, and the weights are modified until convergence occurs and the error is minimized (Hecht-Nielsen 1992; Werbos 1990).
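A compact sketch of this back-propagation step is given below, assuming one hidden layer of tanh units, a linear output, squared-error loss, and illustrative sizes; the network used in the paper may differ.

```python
# One back-propagation update: forward pass, output error, error propagated
# backward through the hidden layer, then weight corrections.
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(5, 2)) * 0.5, np.zeros(5)   # input -> hidden
W2, b2 = rng.normal(size=(1, 5)) * 0.5, np.zeros(1)   # hidden -> output
lr = 0.05

def step(x, t):
    global W1, b1, W2, b2
    h = np.tanh(W1 @ x + b1)             # forward pass: hidden layer
    yhat = W2 @ h + b2                   # forward pass: output layer
    delta_out = yhat - t                 # error computed at the output layer
    delta_hid = (W2.T @ delta_out) * (1.0 - h ** 2)  # propagated backward
    W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
    return 0.5 * float(delta_out @ delta_out)

# Repeat until the error converges (here: a fixed number of sweeps).
X = rng.normal(size=(50, 2))
T = np.sin(X[:, :1])                     # toy target values
for epoch in range(200):
    loss = sum(step(x, t) for x, t in zip(X, T))
print("final loss:", loss)
```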
On the Performance Assessment of ANN and Spotted Hyena Optimized ANN to Predict the Spontaneous Combustion Liability of Coal
Published in Combustion Science and Technology, 2022
Abiodun Ismail Lawal, Moshood Onifade, Jibril Abdulsalam, Adeyemi Emman Aladejare, Abisola Risiwat Gbadamosi, Khadija Omar Said
Three spontaneous combustion liability indices, namely XPT, Wits-Ehac, and FCC, are predicted using ANN and the Spotted Hyena optimized ANN (SHO-ANN). The proposed ANN model was developed using 68 datasets, divided into 70% for training, 15% for testing, and 15% for validation. A feedforward neural network was used with a backpropagation training algorithm. The optimum ANN network for predicting XPT, Wits-Ehac, and FCC was determined by trial and error: both four-layer and three-layer network architectures with different numbers of neurons were tried. A typical example of the strategy used in selecting the optimum ANN architecture is presented in Table 1, and the optimum architectures selected for the XPT and Wits-Ehac indices based on this strategy are presented in Table 2. The hyperbolic tangent transfer function (Eq. (3b)) was used in both hidden layers of the four-layer network, while the purelin (linear) transfer function was used in the output layer. In the three-layer ANN structure, however, the hyperbolic tangent transfer function (Eq. (3b)) was used in both the hidden layer and the output layer. The four-layer ANN architecture, as typically shown in Figure 4, gives the best predictions in all three cases.
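To make the described architecture concrete, here is a hypothetical sketch of a four-layer network with the hyperbolic tangent transfer function in both hidden layers and a linear (purelin) output; the neuron counts are placeholders, since the actual counts come from the paper's trial-and-error search in Table 1.

```python
# Four-layer feedforward pass: input, two tanh hidden layers, linear output.
# Layer sizes and weights are placeholders, not the paper's optimum values.
import numpy as np

rng = np.random.default_rng(4)
sizes = [6, 8, 5, 1]   # illustrative: inputs -> hidden1 -> hidden2 -> output
Ws = [rng.normal(size=(m, n)) * 0.3 for n, m in zip(sizes, sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def predict(x):
    h = x
    # Hyperbolic tangent transfer function in both hidden layers.
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.tanh(W @ h + b)
    # Linear (purelin) transfer function at the output layer.
    return Ws[-1] @ h + bs[-1]

print(predict(rng.normal(size=6)))
```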