Introduction
Published in Guoliang Wei, Zidong Wang, Wei Qian, Nonlinear Stochastic Control and Filtering with Engineering-Oriented Complexities, 2016
Guoliang Wei, Zidong Wang, Wei Qian
Owing to the pervasive presence of stochastic perturbations in reality, stochastic models have been successfully used to describe many practical systems, such as mechanical, economic, and biological systems. Notably, stochasticity is one of the main sources of performance degradation, and it also poses significant challenges for the analysis and synthesis of practical systems. It is therefore not surprising that, over the past few decades, the stabilization, control, and filtering problems for stochastic systems have received much attention from researchers, and a large number of results have been reported in the literature; see, e.g., [7, 14, 41, 73, 74, 88, 147, 184, 188, 220]. Among them, two achievements developed specifically for stochastic systems stand out. One is linear quadratic Gaussian control [7], and the other is the famed Kalman filter, developed by Kalman in the 1960s [73]. Both have found numerous applications in a variety of areas.
Hamiltonian Method for Steady State Optimal Control and Filtering
Published in Zoran Gajić, Myo-Taeg Lim, Dobrila Škatarić, Wu-Chung Su, Vojislav Kecman, Optimal Control, 2018
Zoran Gajić, Myo-Taeg Lim, Dobrila Škatarić, Wu-Chung Su, Vojislav Kecman
In the filtering problem, in addition to using the duality between the filter and the regulator to solve the discrete-time filter algebraic Riccati equation in terms of reduced-order continuous-time algebraic Riccati equations, we have obtained completely independent reduced-order Kalman filters, both driven by the system measurements and the optimal control inputs. In the last part of this section, we use the separation principle to solve the linear quadratic Gaussian control problem for weakly coupled discrete stochastic systems. Two real control system examples are solved to demonstrate the proposed methods.
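The filter–regulator duality invoked above can be checked numerically: the filter algebraic Riccati equation for a pair (A, C) is exactly the regulator Riccati equation for the dual pair (Aᵀ, Cᵀ). A sketch with fixed-point Riccati iteration on an illustrative system (the matrices below are assumptions for demonstration, not the weakly coupled systems of the chapter):

```python
import numpy as np

def dare_regulator(A, B, Q, R, iters=500):
    """Fixed-point iteration for the discrete-time regulator Riccati equation:
    S = A' S A - A' S B (B' S B + R)^{-1} B' S A + Q."""
    S = Q.copy()
    for _ in range(iters):
        G = B.T @ S @ B + R
        S = A.T @ S @ A - A.T @ S @ B @ np.linalg.solve(G, B.T @ S @ A) + Q
    return S

def dare_filter(A, C, Q, R, iters=500):
    """Discrete-time filter Riccati equation:
    P = A P A' - A P C' (C P C' + R)^{-1} C P A' + Q."""
    P = Q.copy()
    for _ in range(iters):
        G = C @ P @ C.T + R
        P = A @ P @ A.T - A @ P @ C.T @ np.linalg.solve(G, C @ P @ A.T) + Q
    return P

# Duality: the filter DARE for (A, C) equals the regulator DARE for (A', C').
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.5]])

P_filter = dare_filter(A, C, Q, R)
P_dual = dare_regulator(A.T, C.T, Q, R)
```

Term by term, the two iterations coincide under the substitution A → Aᵀ, B → Cᵀ, so `P_filter` and `P_dual` agree to machine precision; this is the mechanism that lets a filter Riccati equation be solved with regulator machinery.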
Control of quasi non-integrable Hamiltonian systems for targeting a specified stationary probability density
Published in International Journal of Control, 2019
Stochastic control has been studied for decades across a variety of application areas, including supply-chain optimisation, advertising, finance, dynamic resource allocation, caching, manufacturing, and traditional automatic control. An extremely well-studied control strategy is linear quadratic Gaussian control. For a linear system with external Gaussian white noise, the output is usually Gaussian, and the design targets are usually the mean and the variance or covariance (Åström & Wittenmark, 1980; Lu & Skelton, 1998; Skelton, Iwasaki, & Grigoriadis, 1998; Wojtkiewicz & Bergman, 2001). However, since many systems are nonlinear or not linearisable, and the output of a nonlinear stochastic system is usually non-Gaussian, the mean and variance or covariance are not enough to characterise the output process.
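The Gaussian-output property of linear systems referred to above has a concrete computational face: for a stable linear system driven by Gaussian white noise, the state remains Gaussian and its covariance obeys the discrete Lyapunov recursion P ← A P Aᵀ + Q, so the mean and covariance fully characterise the stationary output. A minimal sketch with an illustrative (assumed) system:

```python
import numpy as np

def stationary_covariance(A, Q, iters=1000):
    """Iterate the discrete Lyapunov recursion P <- A P A' + Q.
    For a stable A (spectral radius < 1), the zero-mean Gaussian state
    distribution converges to N(0, P_inf), where P_inf is the fixed point."""
    P = np.zeros_like(Q)
    for _ in range(iters):
        P = A @ P @ A.T + Q
    return P

A = np.array([[0.5, 0.2], [0.0, 0.7]])  # stable: eigenvalues 0.5 and 0.7
Q = 0.1 * np.eye(2)                     # white-noise covariance
P_inf = stationary_covariance(A, Q)
# P_inf satisfies the discrete Lyapunov equation A P_inf A' + Q = P_inf
```

For a nonlinear system no such finite-dimensional covariance recursion characterises the output, which is the gap the targeted-density control of this article addresses.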
Robust control of the circular restricted three-body problem with drag
Published in International Journal of Control, 2022
David J. N. Limebeer, Deon Sabatta
The control of halo orbits has its genesis in the 1970s, with mission objectives focussed on sustaining periodic orbits around one of the Earth–Moon collinear Lagrange points. In the early literature, impulsive and state feedback control are suggested as ways of stabilising and maintaining periodic orbits (Farquhar, 1971). Developing the feedback control theme, a linear quadratic Gaussian control scheme is proposed in Breakwell et al. (1974). In this work, the control law is found by solving the Hamilton–Jacobi–Bellman equation; in our opinion, the numerics of this approach are questionable. Halo mission correction using optimal control is described in Serbana et al. (2002). A more recent approach, philosophically similar to that of Breakwell et al. (1974), is given in Kulkarni et al. (2006), where the control law is derived from linear matrix inequalities (LMIs). These authors exploit the fact that the problem is periodic in order to obtain finite-dimensional LMIs. A comprehensive survey of station-keeping methodologies for halo orbits can be found in Shirobokov et al. (2017). This review covers the period from 1970 to 2017 and addresses the problem of station keeping for orbits near the L1 and L2 Lagrange points. The control techniques surveyed include open-loop optimal control, feedback control, sliding mode control, and several others.
Pseudo-spectral optimal control of stochastic processes using Fokker Planck equation
Published in Cogent Engineering, 2019
Ali Namadchian, Mehdi Ramezani
Stochastic optimal control is one of the main subfields of control theory. It arises from considering more realistic models, since many systems in different branches of science are subject to randomness. It is the subject of study in traffic control (Zhong, Sumalee, Pan, & Lam, 2014), the aerospace industry (Okamoto & Tsuchiya, 2015), cancer chemotherapy (Coldman & Murray, 2000), cyber security in computer science (Shang, 2013, 2012), etc. When the system randomness is bounded and the bounds are known, the problem of finding a suitable control action can be handled with robust control approaches (Doyle, Francis, & Tannenbaum, 2013; Wu & Lee, 2003; Zhou & Doyle, 1998). When the bounds of the uncertainty are unknown but the probability distribution of the randomness is available, a stochastic framework should be adopted for optimal control (Åström, 2012; Herzallah, 2018; Sell, Weinberger, & Fleming, 1988; Touzi, 2012). There is a vast literature on stochastic optimal control of linear systems with additive noise, where the certainty equivalence property leads to the separation principle in stochastic control (Bar-Shalom & Tse, 1974; Mohammadkhani, Bayat, & Jalali, 2017). Under certain conditions, the separation principle asserts that finding a control action for a stochastic system can be restated as designing a stable observer and a stable controller. In other words, a stochastic problem can be recast as two deterministic problems. Linear quadratic Gaussian (LQG) control is one of the most prominent stochastic control methods: it is simply the combination of a Kalman filter as the observer and a linear quadratic controller.
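The separation-principle recipe described above — design the LQ regulator gain and the Kalman filter gain independently, then feed the state estimate into the control law — can be sketched as follows. The plant matrices and cost weights are illustrative assumptions, and the Riccati equations are solved by plain fixed-point iteration rather than a library solver:

```python
import numpy as np

def dare(A, B, Q, R, iters=500):
    """Fixed-point iteration for the discrete-time algebraic Riccati equation."""
    S = Q.copy()
    for _ in range(iters):
        G = B.T @ S @ B + R
        S = A.T @ S @ A - A.T @ S @ B @ np.linalg.solve(G, B.T @ S @ A) + Q
    return S

# Plant: x_{k+1} = A x_k + B u_k + w_k,   y_k = C x_k + v_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretised double integrator
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
W = 0.01 * np.eye(2)                     # process noise covariance
V = np.array([[0.1]])                    # measurement noise covariance
Qc = np.eye(2)                           # state cost weight
Rc = np.array([[1.0]])                   # control cost weight

# Separation principle: the two designs are completely independent.
S = dare(A, B, Qc, Rc)                                   # regulator Riccati
K = np.linalg.solve(B.T @ S @ B + Rc, B.T @ S @ A)       # LQR gain, u = -K x_hat
P = dare(A.T, C.T, W, V)                                 # filter Riccati (duality)
L = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + V)         # predictor-form Kalman gain

# Closed-loop LQG simulation: controller acts on the estimate, not the state.
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
x_hat = np.zeros(2)
for _ in range(300):
    y = C @ x + rng.multivariate_normal(np.zeros(1), V)      # noisy measurement
    u = -K @ x_hat                                           # certainty-equivalent control
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)          # Kalman predictor update
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
```

Separation shows up in the eigenvalues: the closed loop is stable because A − BK (controller) and A − LC (observer) are each stable on their own, mirroring the "two deterministic problems" decomposition in the text.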