Matlab in systems and controls
Published in Paul W. Ross, The Handbook of Software for Engineers and Scientists, 2018
Matlab provides a complete suite of optimal control techniques, many of which reside in the advanced toolboxes, such as the Robust Control Toolbox. The most basic form of optimal control is the LQR, or Linear Quadratic Regulator. The LQR controller gives a state-feedback matrix K that results in the system minimizing the infinite-time cost function $J = \int_0^{\infty} \left( x^T Q x + u^T R u \right) dt$, where Q and R are designer-chosen weighting matrices. (Matlab can automatically handle “cross-terms,” too.)
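For concreteness, the gain that MATLAB's lqr(A, B, Q, R) returns can be reproduced by solving the algebraic Riccati equation directly; the sketch below uses SciPy and a hypothetical double-integrator plant with illustrative Q and R choices, not an example from the handbook.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: x' = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # designer-chosen state weighting (illustrative)
R = np.array([[1.0]])      # designer-chosen control weighting (illustrative)

# Solve the continuous-time algebraic Riccati equation and form K = R^{-1} B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("state-feedback gain K =", K)
```

The closed-loop control u = -Kx then minimises the quadratic cost above for this plant.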
STRUCTURAL RELIABILITY ANALYSIS OF ACTIVE CONTROL BASED ON GROUND VELOCITY
Published in W. Q. Zhu, G.Q. Cai, R.C. Zhang, Advances in Stochastic Structural Dynamics, 2003
Research efforts have been focused on a variety of control algorithms based on several control design criteria. These algorithms include optimal control [Soong 1990], adaptive control [Chalam 1987], stochastic control [Astrom 1970], predictive control [Rodellar et al. 1987], sliding mode control [Yang 1994], intelligent control [Chong et al. 1990], etc. Among these, the optimal control algorithm appears to be more efficient since it derives from the minimization of cost functions. However, in practical structural control, the optimal control may still be inappropriate since the optimal control forces may be delayed due to practical problems such as time delay. Therefore, it is desirable if the control force requirement can be sent out by
Control Methods
Published in Michael Muhlmeyer, Shaurya Agarwal, Information Spread in a Social Media Age, 2021
Michael Muhlmeyer, Shaurya Agarwal
The idea behind optimal control is to find a system's ideal performance by defining the system criteria and optimizing performance measures. A control problem consists of a cost function or objective function, system dynamics, and constraints on state and control variables. The objective is to either minimize or maximize the cost function (made up of state and control variables) subject to certain constraints. Optimal control can be used in different disciplines, including biomedical devices, communication networks, and economic systems. Optimization problems can be tackled with geometric or numerical approaches, and the variables involved can be real numbers, integers, or combinations of both.
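For reference, such a problem can be stated generically in Bolza form; the symbols below ($\varphi$, $L$, $f$, $g$) are generic placeholders rather than notation from the book:

$$
\min_{u(\cdot)} \; J = \varphi\big(x(T)\big) + \int_{0}^{T} L\big(x(t), u(t)\big)\, dt
\quad \text{subject to} \quad \dot{x}(t) = f\big(x(t), u(t)\big),\;\; x(0) = x_0,\;\; g\big(x(t), u(t)\big) \le 0,
$$

where the dynamics constraint couples the state trajectory to the chosen control and $g$ collects the constraints on state and control variables.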
Simplified optimised prescribed performance control for high-order multiagent systems with privacy preservation
Published in International Journal of Systems Science, 2023
Min Wang, Hongjing Liang, Liang Cao, Hongru Ren
The optimal control objective is to achieve the control task while minimising the cost function. The optimal solution of the optimised controller was obtained by solving the Hamilton–Jacobi–Bellman (HJB) equation in Lewis et al. (2012). However, it is difficult to solve the HJB equation due to its inherent nonlinearity and intractability. To circumvent this problem, function approximation strategies based on reinforcement learning (RL) have been successfully used in adaptive optimisation control. Bai et al. (2020) proposed a multi-gradient recursive RL scheme to eliminate the partial optimal problem. In Wen et al. (2021), a simplified optimal backstepping (OB) technique was proposed. Wen et al. (2019) proposed a tracking control method for a surface warship, in which the overall optimised control was achieved by using the OB technique to design the virtual and actual controllers of heterogeneous subsystems. Therefore, it is meaningful to investigate the optimisation of the backstepping technique for MASs with power exponential functions.
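For the standard infinite-horizon continuous-time setting with dynamics $\dot{x} = f(x, u)$ and running cost $L(x, u)$, the HJB equation referred to above takes the familiar form (generic notation, not the paper's):

$$
0 = \min_{u} \Big[ L(x, u) + \nabla V(x)^{\mathsf{T}} f(x, u) \Big],
\qquad
u^{*}(x) = \arg\min_{u} \Big[ L(x, u) + \nabla V(x)^{\mathsf{T}} f(x, u) \Big],
$$

where $V$ is the optimal value function. The nonlinearity in $u$ and the dependence on the unknown $V$ are what make the equation hard to solve exactly, which motivates the RL-based approximations cited above.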
Analytical sensitivity computation using collocation method with non-uniform mesh discretisation for numerical solutions of optimal control problems
Published in International Journal of Control, 2021
Xinggao Liu, Long Xiao, Lu Lv, Ping Liu, Guoqiang Xu, Zongzhun Zheng, Sen Wang
Optimal control has received considerable scientific and industrial attention due to its frequent application in various fields such as medicine, chemical processes and aerospace engineering (Betts, Campbell, & Thompson, 2016; Cuthrell & Biegler, 1986; Jie, Huang, & Lou, 2010; Liu, Hu, Feng, & Liu, 2014; Liu, Li, et al., 2018; Marti, 2012). Reliable and efficient optimisation techniques are therefore critical for optimal control problems (OCPs). There are several approaches to compute the solutions of OCPs. The so-called indirect methods are based on deriving the first-order optimality conditions from the calculus of variations, which becomes cumbersome for complicated practical problems with constraints (Xiong & Zhang, 2005). Direct methods apply numerical techniques of nonlinear programming (NLP) to optimise the performance index directly, so the first-order optimality conditions are unnecessary (Betts, 2012; Ren, Xu, Lin, Loxton, & Teo, 2016; Sabeh, Shamsi, & Dehghan, 2016). In the last two decades, direct methods have become popular in several areas due to their high accuracy in approximating non-analytic solutions with relatively few discretisation parameters (Biegler, 2007; Xiao, Liu, Ma, & Zhang, 2018; Yu, Cheng, Jiang, Zhang, & Xu, 2014).
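As a toy illustration of the direct approach (single shooting rather than the collocation scheme studied in the article), the control trajectory can be discretised and the resulting finite-dimensional problem handed to an NLP solver; the plant, cost, horizon and solver choice below are all hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Toy OCP: minimise J = integral of (x^2 + u^2) dt  subject to  x' = -x + u,  x(0) = 1
N, dt, x0 = 50, 0.05, 1.0

def cost(u):
    """Discretised performance index for a piecewise-constant control u."""
    x, J = x0, 0.0
    for uk in u:
        J += (x**2 + uk**2) * dt   # rectangle rule for the running cost
        x += (-x + uk) * dt        # forward-Euler integration of the dynamics
    return J

# Solve the resulting nonlinear programme directly; no optimality conditions are derived
res = minimize(cost, np.zeros(N), method="SLSQP")
print("approximate optimal cost:", res.fun)
```

A collocation-based transcription would instead keep the state values at the mesh points as additional decision variables and enforce the dynamics as equality constraints, which is where the non-uniform mesh discretisation of the article comes in.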
Robust optimal control for a class of nonlinear systems with uncertainties and external disturbances based on SDRE
Published in Cogent Engineering, 2018
Mohammad Khoshhal Rudposhti, Mohammad Ali Nekoui, Mohammad Teshnehlab
Examining nonlinear systems shows that there is no single optimal control law for controlling such systems; in practice, researchers usually look for a near-optimal solution that is acceptable. Thus, according to the conditions of the problem, a solution should be selected whose response is closest to the optimal value. The purpose of optimal control is to determine control laws that satisfy the physical constraints while minimizing or maximizing a certain performance index. The performance index can be defined based on the requirements imposed by the relevant physical factors and the expectations placed on the system.
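As a rough sketch of the SDRE idea named in the title, the nonlinear dynamics can be written in state-dependent coefficient form $\dot{x} = A(x)x + B(x)u$ and an LQR-style Riccati equation solved pointwise along the trajectory; the toy plant, weights and factorisation below are illustrative assumptions, not the authors' system:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative nonlinear plant: x1' = x2,  x2' = -x1 - x1^3 + u,
# written in state-dependent coefficient form  x' = A(x) x + B u
B = np.array([[0.0], [1.0]])
Q = np.eye(2)              # illustrative state weighting
R = np.array([[1.0]])      # illustrative control weighting

def A_of_x(x):
    # factorisation: -x1 - x1^3 = (-1 - x1^2) * x1
    return np.array([[0.0, 1.0],
                     [-1.0 - x[0]**2, 0.0]])

def sdre_control(x):
    P = solve_continuous_are(A_of_x(x), B, Q, R)  # pointwise Riccati solve
    K = np.linalg.solve(R, B.T @ P)               # K(x) = R^{-1} B^T P(x)
    return -K @ x

# Simple closed-loop simulation with forward Euler
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(1000):
    u = sdre_control(x)
    x = x + dt * (A_of_x(x) @ x + (B @ u).ravel())
print("final state:", x)
```

The resulting feedback u = -K(x)x reduces to the standard LQR gain whenever A(x) is constant, which is the sense in which SDRE yields a near-optimal, rather than exactly optimal, nonlinear control law.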