Optimal image registration via efficient local stochastic search
Published in João Manuel, R. S. Tavares, R. M. Natal Jorge, Computational Modelling of Objects Represented in Images, 2018
The stochastic search implemented in our study is the simultaneous perturbation stochastic approximation (SPSA) algorithm, first proposed by J.C. Spall. It has become widely used for solving challenging optimization problems. The prominent merit of this algorithm is that it requires neither explicit knowledge of the gradient of the objective function nor direct measurements of that gradient. At each iteration, it needs only an approximation to the gradient obtained via simultaneous perturbations. This gradient approximation is based on only two function measurements, regardless of the dimension of the search parameter space. The SPSA algorithm is a very strong optimizer: owing to the stochastic nature of the gradient approximation, it can escape local maxima of the objective function and successfully find the global maximum (Spall 2003).
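The two-measurement gradient approximation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's implementation; the function names and the gain constants (`a`, `c`, `alpha`, `gamma`, `A`) are illustrative choices following common SPSA practice:

```python
import numpy as np

def spsa_gradient(f, theta, c_k, rng):
    """Approximate the gradient of f at theta from two evaluations.

    All coordinates are perturbed simultaneously by a random Bernoulli
    +/-1 vector delta, so the cost is two calls to f regardless of the
    dimension of theta.
    """
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # simultaneous perturbation
    f_plus = f(theta + c_k * delta)
    f_minus = f(theta - c_k * delta)
    return (f_plus - f_minus) / (2.0 * c_k * delta)

def spsa_minimize(f, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
                  A=10.0, iters=500, seed=0):
    """Basic SPSA recursion with decaying gain sequences
    a_k = a / (k + 1 + A)^alpha and c_k = c / (k + 1)^gamma."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        a_k = a / (k + 1 + A) ** alpha
        c_k = c / (k + 1) ** gamma
        g_hat = spsa_gradient(f, theta, c_k, rng)
        theta = theta - a_k * g_hat  # stochastic gradient step
    return theta
```

For example, `spsa_minimize(lambda x: float(np.sum(x ** 2)), np.ones(5))` drives the iterate close to the minimizer at the origin without ever evaluating a true gradient.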
Performance Optimization of Human Control Strategy
Published in Yangsheng Xu, Ka Keung C. Lee, Human Behavior Learning and Transfer, 2005
Stochastic approximation (SA) is a well-known iterative algorithm for finding roots of equations in the presence of noisy measurements through stochastic gradient approximation. SA has potential applications in a number of areas relevant to statistical modeling and control, such as parameter estimation, adaptive control, stochastic optimization, and neural network weight estimation. The SA algorithm can solve problems where the relationship between the objective function and the solution is very complex. Because we have little a priori knowledge about the structure of human control strategy, no a priori mathematical model is available for HCS models; methods such as steepest descent and Newton-Raphson are therefore not suitable for optimizing them. Simultaneous perturbation stochastic approximation (SPSA) is a particular multivariate SA technique that requires as few as two measurements per iteration and shows fast convergence in practice.
Published in Ce Zhu, Yuenan Li, Advanced Video Communications over Wireless Networks, 2017
Jane Wei Huang, Hassan Mansour, Vikram Krishnamurthy
The simultaneous perturbation stochastic approximation (SPSA) method [20] is adopted to estimate the parameters. This method is especially efficient in high-dimensional problems, providing a good solution with a relatively small number of objective function measurements. The essential feature of SPSA is the underlying gradient approximation, which requires only two objective function measurements per iteration regardless of the dimension of the optimization problem. These two measurements are made by simultaneously varying, in a properly random fashion, all the variables in the problem. The detailed algorithm is described in Algorithm 5.2.
Explainable and secure artificial intelligence: taxonomy, cases of study, learned lessons, challenges and future directions
Published in Enterprise Information Systems, 2023
Khalid A. Eldrandaly, Mohamed Abdel-Basset, Mahmoud Ibrahim, Nabil M. Abdel-Aziz
Simultaneous Perturbation Stochastic Approximation (SPSA) (Spall 1992) was proposed as a gradient-free optimisation technique that is valuable when the model is non-differentiable or, more commonly, when the gradients do not point in suitable directions. Gradients are approximated using finite-difference estimates along a random direction. Each SPSA iteration requires only two queries to the target model, but a large number of iterations is nevertheless needed to generate adversarial examples; a single SPSA step does not consistently produce them. Two chief shortcomings are worth mentioning. First, the convergence of SPSA is sensitive in practice to the preset step sizes of both the loss optimisation and the gradient estimation. Second, for an equal number of queries, the success rate of SPSA is lower than that of Gradient Estimation attacks (Cheng et al. 2019), even when the perturbation is more complex.
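The black-box attack loop described above can be sketched as follows. This is a toy illustration only: `loss`, the step sizes, and the `eps` budget are hypothetical assumptions, not the configuration of any cited attack, and a real attack would query the target classifier's loss instead:

```python
import numpy as np

def spsa_attack(loss, x0, eps=0.1, lr=0.01, c=0.01, iters=200, seed=0):
    """Black-box adversarial sketch: ascend the model's loss using SPSA
    gradient estimates (two loss queries per iteration) and project the
    input back into the eps-ball around x0 after every step."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(iters):
        # Simultaneous Bernoulli +/-1 perturbation of every coordinate.
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        g_hat = (loss(x + c * delta) - loss(x - c * delta)) / (2 * c * delta)
        x = np.clip(x + lr * g_hat, x0 - eps, x0 + eps)  # perturbation budget
    return x
```

Because each step uses only two loss queries, the total query cost is `2 * iters`, which reflects both properties the excerpt notes: cheap individual iterations, but many iterations needed overall.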
Adaptive Safe Experimentation Dynamics for Data-Driven Neuroendocrine-PID Control of MIMO Systems
Published in IETE Journal of Research, 2022
Mohd Riduwan bin Ghazali, Mohd Ashraf bin Ahmad, Raja Mohd Taufika bin Raja Ismail
Now, the SPSA algorithm in Section 3.1 is applied to solve the following numerical problem, which has a global minimum at  for design-parameter dimension n = 10. The SPSA gain sequences are set to , and  is generated from the Bernoulli vector. The initial design parameter is set as . Figure 2 illustrates the objective-function convergence curve of the SPSA algorithm after 30 iterations; it shows that the algorithm is unable to converge to the minimum value. This simulation result demonstrates that SPSA does not always produce stable convergence in the tuning process.
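The sensitivity of SPSA to its gain sequences, which this experiment highlights, is easy to reproduce on a simple n = 10 quadratic. The test function and gain values below are hypothetical stand-ins, since the paper's exact problem and settings are not reproduced here:

```python
import numpy as np

def run_spsa(a, alpha, iters, n=10, seed=0):
    """Minimise f(x) = sum(x_i^2) with SPSA and return the final objective.

    The step-size sequence is a_k = a / (k + 1)^alpha; the perturbation
    size c is held at a small constant for simplicity.
    """
    rng = np.random.default_rng(seed)
    f = lambda x: float(np.sum(x ** 2))
    theta = np.ones(n)
    c = 0.01
    for k in range(iters):
        delta = rng.choice([-1.0, 1.0], size=n)  # Bernoulli vector
        g_hat = (f(theta + c * delta) - f(theta - c * delta)) / (2 * c * delta)
        theta = theta - (a / (k + 1) ** alpha) * g_hat
    return f(theta)

# Well-chosen decaying gains converge; an overly large constant gain diverges.
good = run_spsa(a=0.08, alpha=0.602, iters=500)
bad = run_spsa(a=1.5, alpha=0.0, iters=10)
```

With the decaying gains the objective drops well below its starting value of 10, while the large constant gain makes every step increase the objective on this problem, consistent with the observation that SPSA does not always converge stably.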
GPU parameter tuning for tall and skinny dense linear least squares problems
Published in Optimization Methods and Software, 2020
Benjamin Sauk, Nikolaos Ploskas, Nikolaos Sahinidis
Finite differences are commonly used to estimate gradient information. However, in simulation optimization, where simulations are noisy and can be costly to evaluate, finite differences cannot be used to identify accurate gradient information. Instead, gradient-based methods use stochastic approximations to generate an estimated gradient with which to move towards an optimal solution. Simultaneous perturbation stochastic approximation (SPSA) is an algorithm that can estimate gradient information with only two function evaluations [63].
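The two-evaluation property can be checked directly by counting function calls against coordinate-wise central differences. The counter wrapper and quadratic test function below are our own illustrative choices:

```python
import numpy as np

def make_counted(f):
    """Wrap f so we can count how many times it is evaluated."""
    counter = {"calls": 0}
    def wrapped(x):
        counter["calls"] += 1
        return f(x)
    return wrapped, counter

n = 50
theta = np.ones(n)
c = 1e-2
quadratic = lambda x: float(np.sum(x ** 2))

# Coordinate-wise central differences: two evaluations per coordinate -> 2n calls.
fd_f, fd_count = make_counted(quadratic)
fd_grad = np.empty(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = c
    fd_grad[i] = (fd_f(theta + e) - fd_f(theta - e)) / (2 * c)

# SPSA: one simultaneous random perturbation -> two evaluations for any n.
spsa_f, spsa_count = make_counted(quadratic)
delta = np.random.default_rng(1).choice([-1.0, 1.0], size=n)
spsa_grad = (spsa_f(theta + c * delta) - spsa_f(theta - c * delta)) / (2 * c * delta)
```

Here the finite-difference estimate costs 100 evaluations at n = 50 and scales linearly with dimension, whereas the SPSA estimate costs two evaluations no matter how large n becomes, which is what makes it attractive when each simulation is expensive.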