A neural network framework for unsaturated soils
Published in H. Rahardjo, D.G. Toll, E.C. Leong, Unsaturated Soils for Asia, 2020
G. Habibagahi, S. Katebi, A. Johari
In a multi-layer perceptron network, artificial neurons are arranged in layers, with each neuron fully connected to all neurons of the next layer. This type of network consists of an input layer, one or more hidden layers, and an output layer. The neurons in the input layer represent the input variables, while the output neurons represent the desired outputs. Each neuron in the network has an activation function, usually a sigmoid function. Weights are assigned to all connections in the network, and the goal is to find the optimum values of these weights such that the error measure of the network is minimized. The error measure is the mean sum squared error (MSSE), computed over $N$ training patterns and $K$ output neurons as
$$\mathrm{MSSE} = \frac{1}{NK}\sum_{p=1}^{N}\sum_{i=1}^{K}\left(t_{ip} - o_{ip}\right)^{2}$$
where $t_{ip}$ is the target value and $o_{ip}$ the network output for output neuron $i$ and pattern $p$.
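As a minimal sketch (ours, not from the paper), the MSSE above can be computed directly with NumPy; here `targets` and `outputs` are hypothetical arrays of shape (N, K) holding the values $t_{ip}$ and $o_{ip}$:

```python
import numpy as np

def msse(targets, outputs):
    """Mean sum squared error over N patterns and K output neurons."""
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    n_patterns, n_outputs = targets.shape
    # Sum of squared errors, divided by N*K as in the MSSE definition
    return np.sum((targets - outputs) ** 2) / (n_patterns * n_outputs)

# Example: 3 patterns, 2 output neurons
t = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
o = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.9]]
print(msse(t, o))  # 0.0333...
```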
Deep learning background
Published in Rémi Cresson, Deep Learning for Remote Sensing Images with Open Source Software, 2020
Deep learning refers to artificial neural networks with deep stacks of neuronal layers (i.e. many layers). Artificial neurons and edges have parameters that are adjusted as learning proceeds. Inspired by biology, an artificial neuron is a mathematical function modeling a biological neuron. Neuron parameters (sometimes called weights) are values that are optimized during the training step. Equation 1.1 provides an example of a basic artificial neuron model, where $X$ is the input, $y$ the output, $A$ the vector of gains, $b$ the offset value, and $f$ the activation function. In this minimal example, the parameters of the artificial neuron are the gains and one offset value. The gains form a scalar product with the input of the neuron, the offset is added to this scalar product, and the resulting value is passed through a non-linear function, frequently called the activation function:
$$y = f(AX + b) = f\!\left[\sum_{i}(a_i \times x_i) + b\right] \qquad (1.1)$$
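The following is a hedged illustration of Equation 1.1 (the names are ours, not from the book): a single artificial neuron with gains $A$, offset $b$, and a sigmoid activation.

```python
import numpy as np

def sigmoid(z):
    """One common choice for the activation function f."""
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(x, a, b, f=sigmoid):
    """y = f(A.X + b): scalar product of the gains with the input, plus the offset."""
    return f(np.dot(a, x) + b)

# Example with three inputs
x = np.array([0.5, -1.0, 2.0])   # input X
a = np.array([0.8, 0.1, -0.3])   # gains A (the neuron's parameters)
b = 0.05                          # offset value
print(artificial_neuron(x, a, b))
```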
A machine learning approach to assess vessel performance based on operational profile
Published in Petar Georgiev, C. Guedes Soares, Sustainable Development and Innovations in Marine Technologies, 2019
A. Senteris, A. Kanellopoulou, G.N. Zaraphonitis
Artificial Neural Networks (ANN) are computing systems arranged in layers that communicate with each other through neural synapses, trained to perform tasks by considering observed examples and input parameters whose impact on the final output is unknown. Training is an iterative process carried out by training algorithms, whose function is to modify the synaptic weights of the network's components in order to attain the desired design objective. To achieve improved performance and accuracy, neural networks employ massive interconnections of simple computing cells referred to as "neurons" or processing units (Haykin, 1999). Each neuron connection has a synaptic weight attributed to it, which scales the signal transmitted from the output of one artificial neuron to the input of the next. An artificial neuron receives a signal, processes it, and transmits a new output signal to the artificial neurons connected to it, while the weight adjusts the neuron's impact as the learning process proceeds. Artificial neural networks can be categorized according to their structure and properties. In this study, feed-forward multi-layer perceptrons are used. Feed-forward describes the flow of data: in feed-forward MLPs, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes. There are no cycles or loops in the network. A schematic representation of the described neural network is shown in Figure 2, and a small code sketch of the forward flow follows below.
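As a small illustrative sketch (not the authors' implementation; the layer sizes are arbitrary), a feed-forward pass simply propagates the input through successive weight matrices and activations, with no cycles or loops:

```python
import numpy as np

def forward(x, weights, biases, activation=np.tanh):
    """Feed-forward pass: data flows from the input nodes, through the
    hidden nodes, to the output nodes, in one direction only."""
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        # Each synaptic weight scales the signal sent to the next layer
        h = activation(W @ h + b)
    return h

# Hypothetical 3-4-1 network: 3 inputs, one hidden layer of 4 neurons, 1 output
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
print(forward([0.2, -0.7, 1.5], weights, biases))
```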
Predicting quality parameters of denim fabrics using developed ANN based Artificial Bee Colony algorithm
Published in The Journal of The Textile Institute, 2023
Gözde Katırcıoğlu, Emel Kızılkaya Aydoğan, Yılmaz Delice, Esra Akgül
The activation function, also known as the transfer function, determines the output of the artificial neuron by processing the input received from the summing function. Different functions can be used for this calculation, such as the sigmoid function, step function, log-sigmoid transfer function, and hyperbolic tangent sigmoid transfer function. The output layer holds the result of the neural network; in our artificial neural network it contains 1 neuron. The artificial neural network was run separately for each of the 14 quality parameters considered (fabric weight, fabric weight after washing, fabric width, fabric width after washing, tear strength in warp, tear strength in weft, tensile strength in warp, tensile strength in weft, shrinkage in warp, shrinkage in weft, dry rubbing fastness, wet rubbing fastness, elasticity, and growth), and an output was obtained for each.
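A short sketch (ours, not from the paper) of the activation functions named above; `logsig` and `tansig` follow the common MATLAB-style definitions of the log-sigmoid and hyperbolic tangent sigmoid transfer functions:

```python
import numpy as np

def step(z):
    """Step (threshold) activation: 1 if the summed input is non-negative, else 0."""
    return np.where(z >= 0.0, 1.0, 0.0)

def logsig(z):
    """Log-sigmoid transfer function: 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + np.exp(-z))

def tansig(z):
    """Hyperbolic tangent sigmoid transfer function: 2 / (1 + e^(-2z)) - 1."""
    return 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0

z = np.linspace(-3.0, 3.0, 7)
print(step(z), logsig(z), tansig(z), sep="\n")
```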
Speeding up Composite Differential Evolution for structural optimization using neural networks
Published in Journal of Information and Telecommunication, 2022
The artificial neural network (ANN) is known as one of the most powerful computational paradigms. The model was first designed in 1958 based on the understanding of the human brain's structure (Rosenblatt, 1958). In biological brains, there are billions of neurons connected to each other through synapses, whose role is to transmit electrical signals to other neurons. Similarly, ANNs are composed of nodes that simulate neurons and connections that imitate the synapses of the brain. An artificial neuron receives inputs from previous neurons, transforms them by the activation function, and sends the result to the following neurons via connections. Each connection is assigned a weight to represent the magnitude of the signal. The activation function attached to artificial neurons is usually nonlinear, allowing ANNs to capture complex data. Many ANN architectures have been introduced, for example, feed-forward neural networks (FFNN), convolutional neural networks (CNN), recurrent neural networks (RNN), etc. Each architecture is designed for a specific task: CNNs are primarily used for image processing, and RNNs are suitable for time series data. In the field of structural engineering, FFNNs are commonly used.
Predicting the dynamic modulus of asphalt mixture using hybridized artificial neural network and grey wolf optimizer
Published in International Journal of Pavement Engineering, 2021
Emadaldin Mohammadi Golafshani, Ali Behnood, Mohammad M. Karimi
The neural network is the primary mechanism by which the brain performs conscious and unconscious activities. Inspired by the biological neural network, the ANN has been developed to solve supervised and unsupervised problems. A regression problem, in which the system output is one or more continuous variables, is an example of a supervised problem. The basic element of an ANN is the artificial neuron, in which all ANN computations, both linear and non-linear, are carried out. As illustrated in Figure 2, an artificial neuron receives the weighted inputs from its preceding neurons or nodes, sums them, computes the activated value using an activation function, and transfers the output to the next neurons (Gandomi and Roke 2015). Various types of activation functions are used in regression problems, including linear, log-sigmoid, tangent sigmoid, and hyperbolic tangent (Golafshani et al. 2012, Dantas et al. 2013, Behnood and Golafshani 2018).
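For illustration only (not the authors' code; the weights and layer layout are hypothetical), the per-neuron step described above, summing weighted inputs and applying an activation, can be sketched with a tangent-sigmoid hidden neuron feeding a linear output neuron, a typical pairing in regression:

```python
import numpy as np

def neuron(inputs, weights, bias, activation):
    """Sum the weighted inputs, then compute the activated value."""
    return activation(np.dot(weights, inputs) + bias)

linear = lambda z: z      # linear activation, common for the output neuron in regression
tansig = np.tanh          # tangent sigmoid activation for the hidden neuron

# One hidden neuron followed by one linear output neuron
x = np.array([0.4, 1.2, -0.5])
hidden = neuron(x, np.array([0.6, -0.2, 0.9]), 0.1, tansig)
output = neuron(np.array([hidden]), np.array([1.5]), -0.3, linear)
print(output)
```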