Self-Tuning Regulators
Published in V. V. Chalam, Adaptive Control Systems, 2017
Other recursive parameter estimation schemes can be represented by equations such as (5.22). However, it has been found that the RLS, ELS, and RML methods are best suited for adaptive control [95]. The recursive instrumental variable (RIV) method yields biased parameter estimates in closed loop, and the stochastic approximation (STA) method converges slowly and unreliably [94]. If the updating procedure for P(t) becomes numerically ill-conditioned, the square-root algorithm developed by Peterka [96] may be used; the UD algorithm due to Bierman [83] overcomes this difficulty in a computationally efficient way. Unton [97] has discussed a recursive estimator for the general case. Unbehauen [97a] has surveyed different system identification methods based on parameter estimation, considering theoretical and practical aspects, modern trends, and still-unsolved problems. The parameter estimation algorithm is the principal computational burden in self-tuning, so the simplest and most robust algorithm should be chosen.
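As a concrete illustration of the kind of recursive estimator discussed above, the following is a minimal sketch of a standard recursive least-squares (RLS) update with a forgetting factor; the simulated first-order model and all variable names are illustrative choices, not taken from the chapter.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step.

    theta : current parameter estimate (n,)
    P     : current covariance-like matrix (n, n)
    phi   : regressor vector (n,)
    y     : new scalar measurement
    lam   : forgetting factor, 1.0 = no forgetting
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    eps = y - phi @ theta                  # prediction error
    theta = theta + k * eps                # parameter update
    P = (P - np.outer(k, Pphi)) / lam      # covariance update
    return theta, P

# Example: identify y(t) = a*y(t-1) + b*u(t-1) + noise, true (a, b) = (0.8, 0.5)
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 1000.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for t in range(500):
    u = rng.standard_normal()
    y = 0.8 * y_prev + 0.5 * u_prev + 0.05 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print(theta)  # approaches [0.8, 0.5]
```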
General introduction
Published in Adedeji B. Badiru, Handbook of Industrial and Systems Engineering, 2013
Stochastic approximation is a scheme for successive approximation of a sought quantity when the observation and the system dynamics involve random errors (Albert and Gardner, 1967). It is applicable to the statistical problem of …
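The classical instance of such a scheme is the Robbins–Monro procedure. Below is a minimal sketch, assuming we seek the root of an unknown regression function that can only be observed with additive random error; the hidden function and the gain sequence are illustrative choices.

```python
import numpy as np

# Robbins-Monro: find x* with g(x*) = 0 when only noisy observations
# y_n = g(x_n) + noise are available. Here g(x) = x - 2 (root x* = 2)
# is hidden inside the noisy oracle; the gains a_n = 1/n satisfy the
# classical conditions sum a_n = inf, sum a_n^2 < inf.
rng = np.random.default_rng(1)

def noisy_g(x):
    return (x - 2.0) + 0.5 * rng.standard_normal()

x = 0.0
for n in range(1, 10001):
    x = x - (1.0 / n) * noisy_g(x)   # x_{n+1} = x_n - a_n * y_n
print(x)  # converges to the root x* = 2
```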
Adaptive Learning Algorithm Convergence
Published in Richard M. Golden, Statistical Machine Learning, 2020
Stochastic approximation algorithms provide a methodology for the analysis and design of adaptive learning machines. Assume the statistical environment for an adaptive learning machine generates a sequence of training stimuli $x(1), x(2), \ldots$ which are presumed to be a realization of a sequence of independent and identically distributed random vectors $\tilde{x}(1), \tilde{x}(2), \ldots$ with common density $p_e : \mathbb{R}^d \to [0, \infty)$. It is assumed that the goal of the adaptive learning machine is to minimize the risk function $\ell : \mathbb{R}^q \to \mathbb{R}$ defined such that
$$\ell(\theta) = \int c(x, \theta)\, p_e(x)\, d\nu(x),$$
where $c(x, \theta)$ is the penalty incurred by the learning machine for having the parameter vector $\theta$ when the environmental event $x$ is presented to the learning machine.
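A minimal sketch in the spirit of this setup follows: a stochastic-approximation (stochastic gradient descent) iteration that decreases the risk $\ell(\theta)$ using one environmental event $\tilde{x}(t)$ per step. The quadratic penalty $c$ and the Gaussian environment are illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(2)

# Penalty c(x, theta) = 0.5 * ||theta - x||^2, so the risk
# l(theta) = E[c(x~, theta)] is minimized at theta* = E[x~].
def grad_c(x, theta):
    return theta - x

theta = np.zeros(3)
mu = np.array([1.0, -2.0, 0.5])            # mean of the environment p_e
for t in range(1, 20001):
    x = mu + rng.standard_normal(3)        # draw x~(t) i.i.d. from p_e
    theta -= (1.0 / t) * grad_c(x, theta)  # stochastic-approximation step
print(theta)  # approaches E[x~] = [1.0, -2.0, 0.5]
```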
A new information-weighted recursive algorithm for time-varying systems: application to UAV system identification
Published in International Journal of Systems Science, 2018
Zun Liu, Honghai Ji, Hailong Pei, Frank L. Lewis
System identification techniques for time-varying systems can be divided into four classes. The first is stochastic approximation (Albert & Gardner, 1967), including the Kiefer–Wolfowitz algorithm, the Robbins–Monro algorithm, the least mean-squares (LMS) algorithm, etc. Although such recursive algorithms require little computational effort, rigorous convergence analyses for them are difficult to provide. Within the stochastic framework there are further methods based on Bayesian inference (Golabi, Meskin, Tóth, & Mohammadpour, 2017; Hsiao, 2008), Markov processes (Pillonetto, 2008), etc. The second class is parameter estimation (PE) in observer form or state-space representation, the best known of which is the Kalman filter (Crassidis & Junkins, 2011; Lewis et al., 2010; Niedźwiecki, 2008). The third class is based on basis function representations (Li, Wei, & Billings, 2011; Tóth, Heuberger, & Hof, 2009). The last class consists of parameter estimators in least-squares form (Goodwin & Sin, 1984), including recursive least squares (RLS) and the instrumental variable (IV) method.
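Of the first class, the least mean-squares algorithm is perhaps the simplest to state. The following is a minimal sketch, assuming a linear regression model with slowly drifting parameters; the step size and drift model are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# LMS: theta_{t+1} = theta_t + mu * phi_t * (y_t - phi_t^T theta_t).
# A fixed step size mu lets the estimate track slow parameter drift,
# at the price of a small steady-state error.
n, mu = 2, 0.05
theta_hat = np.zeros(n)
theta_true = np.array([1.0, -0.5])
for t in range(2000):
    theta_true += 0.001 * rng.standard_normal(n)   # slow random drift
    phi = rng.standard_normal(n)                   # regressor
    y = phi @ theta_true + 0.1 * rng.standard_normal()
    theta_hat += mu * phi * (y - phi @ theta_hat)  # LMS update
print(theta_true, theta_hat)  # estimate follows the drifting parameter
```

The fixed step size is what enables tracking of a time-varying system: unlike a decreasing-gain scheme, the update never stops adapting, so slow parameter drift is followed at the cost of a small steady-state estimation error.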
The application of machine learning and deep learning in sport: predicting NBA players’ performance and popularity
Published in Journal of Information and Telecommunication, 2022
Nguyen Hoang Nguyen, Duy Thien An Nguyen, Bingkun Ma, Jiang Hu
Stochastic gradient descent is an iterative procedure that optimizes the objective function by exploiting its smoothness properties. The algorithm replaces the gradient computed over the entire dataset (the actual gradient) with one computed from a randomly selected subset of the data (an estimate), so it is considered a stochastic approximation of gradient descent optimization. For high-dimensional optimization problems, this method therefore allows faster iterations and reduces the computational burden, at the cost of a lower convergence rate (Bottou & Bousquet, 2012).
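A minimal sketch of this trade-off on a least-squares problem follows; the dataset size, batch size, and learning rate are illustrative. Each iteration touches only `batch` rows rather than all `N`, which is the source of the computational savings described above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minibatch SGD on least squares: the gradient is estimated from a
# random subset of the data at each step.
N, d, batch = 10000, 5, 32
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

w = np.zeros(d)
lr = 0.05
for step in range(2000):
    idx = rng.integers(0, N, size=batch)   # random subset of the data
    Xb, yb = X[idx], y[idx]
    grad = Xb.T @ (Xb @ w - yb) / batch    # gradient estimate, O(batch*d)
    w -= lr * grad                         # full gradient would cost O(N*d)
print(np.linalg.norm(w - w_true))  # small residual error
```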
Distributed Intelligent Model for Privacy and Secrecy in Preschool Education
Published in Applied Artificial Intelligence, 2023
The specific method used to estimate the gradient depends on the algorithm and the problem at hand. Common techniques include stochastic approximation methods, such as stochastic gradient descent, where the gradient is estimated from a randomly selected subset of the available data.
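The following minimal sketch, assuming a least-squares loss, illustrates why a randomly selected subset yields a usable gradient estimate: the subset gradient is noisy but agrees with the full-data gradient on average.

```python
import numpy as np

rng = np.random.default_rng(5)

# Compare the full-data gradient with the average of many gradients
# estimated from random subsets: the subset estimate is unbiased.
N, d, batch = 5000, 4, 50
X = rng.standard_normal((N, d))
y = X @ np.ones(d) + 0.1 * rng.standard_normal(N)
w = np.zeros(d)

full_grad = X.T @ (X @ w - y) / N
subset_grads = []
for _ in range(2000):
    idx = rng.integers(0, N, size=batch)
    subset_grads.append(X[idx].T @ (X[idx] @ w - y[idx]) / batch)
print(full_grad)
print(np.mean(subset_grads, axis=0))  # close to the full-data gradient
```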