Multimodal Ambulatory Fall Risk Assessment in the Era of Big Data
Published in Ervin Sejdić, Tiago H. Falk, Signal Processing and Machine Learning for Biomedical Big Data, 2018
The resilient backpropagation (RPROP) training method for the network was chosen due to its speed among the first-order training algorithms. The weight update depends only on the sign of the gradient,

$$\Delta w_i(t) = -\operatorname{sign}\!\left(\frac{\partial E}{\partial w_i}(t)\right)\Delta_i(t),$$

where $\Delta_i(t)$ is an adaptive step specific to weight $w_i$, defined as below ($\eta_{\mathrm{inc}} > 1$ and $\eta_{\mathrm{dec}} < 1$ are scalars):

$$\Delta_i(t) = \begin{cases} \eta_{\mathrm{inc}}\,\Delta_i(t-1), & \text{if } \dfrac{\partial E}{\partial w_i}(t)\times\dfrac{\partial E}{\partial w_i}(t-1) > 0,\\[4pt] \eta_{\mathrm{dec}}\,\Delta_i(t-1), & \text{if } \dfrac{\partial E}{\partial w_i}(t)\times\dfrac{\partial E}{\partial w_i}(t-1) < 0,\\[4pt] \Delta_i(t-1), & \text{otherwise.} \end{cases}$$
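A minimal NumPy sketch of this per-weight update (a sketch only: the factors η_inc = 1.2 and η_dec = 0.5 and the step bounds are illustrative choices, not values given above, and the gradient is assumed to come from an ordinary backpropagation pass):

```python
import numpy as np

def rprop_step(w, grad, grad_prev, step, eta_inc=1.2, eta_dec=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP iteration: adapt each weight's step from gradient signs only."""
    same_sign = grad * grad_prev                      # >0: same sign, <0: sign flipped
    step = np.where(same_sign > 0, step * eta_inc,
           np.where(same_sign < 0, step * eta_dec, step))
    step = np.clip(step, step_min, step_max)          # optional bounds on the step size
    w = w - np.sign(grad) * step                      # only the sign of the gradient is used
    return w, step

# Typical use inside a training loop: keep `grad` so it can serve as `grad_prev`
# on the next iteration.
```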
On the Use of Gradient-Based Solver and Deep Learning Approach in Hierarchical Control: Application to Grand Refrigerators
Published in Cybernetics and Systems, 2023
Xuan-Huy Pham, François Bonne, Mazen Alamir
After the data are gathered, pre-processing steps are applied: data balancing, data normalization, data shuffling, and data splitting. Once the data are ready, three feed-forward neural networks are trained. The configurations differ in the number of hidden layers, ranging from 1 to 3, with each layer containing the same number of nodes (25); they are denoted NN-1-25, NN-2-25, and NN-3-25, respectively. The activation function at each node is the sigmoid function (other activation functions were tried but gave no better performance). Each structure is trained for 10,000 epochs on the prepared data set and validated on the validation data set, using the resilient back-propagation (RPROP) algorithm. Table 2 presents the learning performance of the three DNN structures; NN-2-25, which has the lowest mean squared error (MSE), is chosen for the subsequent simulation.
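A minimal sketch of this setup, assuming PyTorch and its built-in Rprop optimizer (the input/output dimensions, the data tensors, and the helper names are placeholders not given in the excerpt):

```python
import torch
import torch.nn as nn

def make_mlp(n_inputs: int, n_outputs: int, n_hidden: int, width: int = 25) -> nn.Sequential:
    """Feed-forward net with `n_hidden` sigmoid hidden layers of `width` nodes each."""
    layers, in_dim = [], n_inputs
    for _ in range(n_hidden):
        layers += [nn.Linear(in_dim, width), nn.Sigmoid()]
        in_dim = width
    layers.append(nn.Linear(in_dim, n_outputs))
    return nn.Sequential(*layers)

def train_rprop(model, x_train, y_train, x_val, y_val, epochs=10_000) -> float:
    """Full-batch training with resilient backpropagation; returns validation MSE."""
    opt, loss_fn = torch.optim.Rprop(model.parameters()), nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(x_val), y_val).item()

# The three candidate structures NN-1-25, NN-2-25, NN-3-25 (dimensions are placeholders):
# models = {f"NN-{k}-25": make_mlp(n_inputs, n_outputs, k) for k in (1, 2, 3)}
# val_mse = {name: train_rprop(m, x_train, y_train, x_val, y_val) for name, m in models.items()}
```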
Properties prediction of environmentally friendly ultra-high-performance concrete using artificial neural networks
Published in European Journal of Environmental and Civil Engineering, 2022
Joaquín Abellán García, Jaime Fernández Gómez, Nancy Torres Castellanos
Back propagation and its variations are widely used for training artificial neural networks. One such variation, resilient back propagation (Rprop), has proven to be one of the best in terms of speed of convergence (Naoum & Al-Sultani, 2013). Resilient propagation and back propagation are very similar except for the weight-update routine (Prasad et al., 2013): resilient propagation does not take into account the value of the partial derivative (error gradient), but considers only its sign to indicate the direction of the weight update. Back propagation converges slowly when gradients have very small magnitudes, because small gradients produce correspondingly small weight changes; the purpose of the Rprop training algorithm is to eliminate this harmful effect of the gradient magnitudes. Only the sign of the derivative determines the direction of the weight update; the magnitude of the derivative has no effect on it. Another difficult aspect of back-propagation learning is choosing suitable training parameters. Resilient propagation does have training parameters, but it is extremely rare that they need to be changed from their default values, which makes it a very easy training algorithm to use. It also has the nice property of requiring only a modest increase in memory. As a consequence, Rprop is one of the fastest training algorithms available (Mushgil et al., 2015).
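As an illustration of how little tuning this usually involves, the sketch below uses PyTorch's torch.optim.Rprop (an assumption about toolkit, not one used by the cited works); its constructor exposes only the increase/decrease factors and the step-size bounds, and the defaults can normally be left as they are:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model for illustration

# Rprop with its default settings; in practice these rarely need changing.
opt = torch.optim.Rprop(model.parameters())

# The few available knobs: increase/decrease factors and min/max step sizes
# (the values shown are illustrative, matching the commonly used defaults).
opt = torch.optim.Rprop(model.parameters(), etas=(0.5, 1.2), step_sizes=(1e-6, 50.0))
```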
Variable Selection for Artificial Neural Networks with Applications for Stock Price Prediction
Published in Applied Artificial Intelligence, 2019
To improve computational efficiency, a variation of backpropagation called the resilient backpropagation algorithm (RPROP) (Riedmiller and Braun 1992) is applied. It takes into account only the sign of the partial derivatives of the total cost function. At each iteration of the gradient descent, if the sign of a partial derivative changes relative to the previous step, the corresponding step size is decreased (multiplied by a factor smaller than one); if the sign does not change, the step size is increased (multiplied by a factor greater than one). The algorithm converges faster than the traditional backpropagation algorithm.
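As a worked illustration of this adaptation (using the commonly cited factors of 1.2 for increase and 0.5 for decrease, chosen here purely for the arithmetic): if a weight's step size starts at 0.10 and the gradient sign over three successive iterations is +, +, −, the step grows to 0.10 × 1.2 = 0.12 after the second iteration (same sign as before) and shrinks to 0.12 × 0.5 = 0.06 after the third (sign flip), while at every iteration the weight moves opposite to the current gradient sign by the current step size.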