System-Level Power Management
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC System Design, Verification, and Testing, 2018
Naehyuck Chang, Enrico Macii, Massimo Poncino, Vivek Tiwari
Stochastic control techniques formulate policy optimization as an optimization problem under the uncertainty of the system as well as of the workload. They assume that both the system and the workload can be modeled as Markov chains, and offer significant improvement over previous power management techniques in terms of theoretical foundations and of robustness of the system model. Using stochastic techniques allows one to (1) model the uncertainty in the system power consumption and the state-transition time; (2) model complex systems with multiple power states; and (3) compute globally optimum power management policies, which minimize the energy consumption under performance constraints or maximize the performance under power constraints.
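As a minimal illustration of this approach, the sketch below casts power management as a small Markov decision process and computes an optimal policy by value iteration. The two device states, the transition probabilities, and the power and latency figures are all hypothetical, and the performance constraint is folded into a weighted cost for simplicity; real formulations typically handle the constraint explicitly (e.g., via a constrained linear program).

```python
import numpy as np

# Minimal sketch: power management as a Markov decision process.
# States and numbers are hypothetical, chosen only to illustrate the idea.
# States: 0 = active, 1 = sleep. Actions: 0 = stay, 1 = switch state.
power = np.array([[1.0, 1.2],    # W consumed in 'active' for stay/switch
                  [0.1, 0.4]])   # W consumed in 'sleep' for stay/switch
latency = np.array([[0.0, 0.0],  # performance penalty (delay) per action
                    [0.5, 0.3]])
lam = 2.0                        # weight trading performance for power
cost = power + lam * latency     # combined per-step cost

# P[s, a, s']: workload/state transition probabilities (hypothetical).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.8, 0.2]]])

gamma, V = 0.95, np.zeros(2)
for _ in range(500):             # value iteration to a fixed point
    Q = cost + gamma * P @ V     # Q[s, a] = c(s, a) + gamma * sum_s' P * V(s')
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = Q.argmin(axis=1)        # 0 = stay, 1 = switch, per state
print("optimal policy:", policy, "value:", V)
```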
Published in Derek A. Linkens, CAD for Control Systems, 2020
James H. Taylor, Magnus Rimvall, Hunt A. Sutherland
This list is not all-inclusive, from either a control-theoretic point of view or the standpoint of practical functionality. For example, obvious areas that can be extended are stochastic control (e.g., a Monte Carlo analysis capability, sketched below); nonlinear systems analysis and design (e.g., describing function methods, bifurcation analysis); optimization; discrete-event systems modeling, analysis, and design; system model identification; and automatic controller code generation. Furthermore, new approaches and theories are being developed on a continuing basis, so no such list can ever be closed.
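A Monte Carlo analysis capability of the kind mentioned above might look like the following Python sketch, which propagates parameter uncertainty through a simple closed loop. The first-order plant, the feedback law, and the uncertainty model are hypothetical illustrations, not features of any particular CACSD package.

```python
import numpy as np

# Monte Carlo analysis sketch: propagate parameter uncertainty through a
# simple closed loop. The plant and uncertainty model are hypothetical.
rng = np.random.default_rng(0)
n_runs, dt, T = 1000, 0.01, 5.0
t = np.arange(0.0, T, dt)

# First-order plant dx/dt = -a*x + u under proportional feedback u = k*(r - x),
# with the pole 'a' uncertain (normally distributed about its nominal value).
k, r = 2.0, 1.0
a_samples = rng.normal(loc=1.0, scale=0.2, size=n_runs)

finals = np.empty(n_runs)
for i, a in enumerate(a_samples):
    x = 0.0
    for _ in t:                      # forward-Euler closed-loop simulation
        u = k * (r - x)
        x += dt * (-a * x + u)
    finals[i] = x

# Statistics of the steady-state tracking error across the ensemble.
err = r - finals
print(f"mean error {err.mean():.4f}, std {err.std():.4f}")
```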
Vibration Control
Published in Haym Benaroya, Mark Nagurka, Seon Han, Mechanical Vibration, 2017
Haym Benaroya, Mark Nagurka, Seon Han
Stochastic control is a specialty area of controls engineering that addresses the analysis and design of control systems in which the input, state, and/or output variables change randomly. A stochastic control problem must account for probabilistic uncertainty in the control system variables. It may also include random noise and disturbances in the closed-loop system to determine their effects on the model and the controller [47, 48].
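A minimal sketch of such a problem, assuming a mass-spring-damper under PD feedback with a white-noise disturbance force (all parameters hypothetical), is:

```python
import numpy as np

# Sketch of the kind of analysis described above: a mass-spring-damper under
# PD feedback, excited by a random disturbance force. All parameters are
# hypothetical; the point is only to show noise entering a closed loop.
rng = np.random.default_rng(1)
m, c, k = 1.0, 0.5, 20.0          # mass, damping, stiffness
kp, kd = 50.0, 5.0                # PD controller gains
dt, n = 1e-3, 20_000              # time step and number of steps

x, v = 0.0, 0.0                   # displacement and velocity
xs = np.empty(n)
sigma = 2.0                       # disturbance intensity
for i in range(n):
    w = sigma * rng.standard_normal() / np.sqrt(dt)  # white-noise force
    u = -kp * x - kd * v          # control force regulating to zero
    a = (u + w - c * v - k * x) / m
    x += dt * v                   # explicit Euler integration
    v += dt * a
    xs[i] = x

# RMS displacement quantifies the disturbance's effect on the closed loop.
print(f"RMS displacement: {np.sqrt(np.mean(xs[n // 2:] ** 2)):.4e}")
```

Varying sigma, kp, and kd in such a simulation shows how disturbance intensity and controller gains shape the closed-loop response statistics.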
Biological population management based on a Hamilton–Jacobi–Bellman equation with boundary blow up
Published in International Journal of Control, 2022
Hidekazu Yoshioka, Yuta Yaegashi, Yumi Yoshioka, Masayuki Fujihara
Optimal control is a powerful mathematical discipline combining system dynamics and optimisation, which can naturally describe the management of biological populations (Lenhart & Workman, 2007). Stochastic control (Øksendal & Sulem, 2005) is the branch of optimal control that handles noise-driven system dynamics, covering problems in many research areas: economics (Pun, 2018), finance (Cartea et al., 2018), manufacturing (Ouaret et al., 2018), energy management (Lin & Bitar, 2018), resource management (Insley, 2018), and biological population management (Brites & Braumann, 2017; Zhang et al., 2018). The dynamic programming principle is central to stochastic control: it reduces solving an optimal control problem to finding a solution of a Hamilton–Jacobi–Bellman (HJB) equation whose form depends on both the system dynamics to be controlled and the performance index to be optimised (Fleming & Soner, 2006; Øksendal & Sulem, 2005; Pham, 2009). In the context of optimal control of stochastic differential equations (SDEs) (Øksendal & Sulem, 2005), the HJB equation becomes a degenerate elliptic (integro-)differential equation. In most cases, HJB equations do not have solutions defined in the classical sense, but only admit solutions of weaker regularity called viscosity solutions (Crandall et al., 1992). The analysis of a stochastic control problem can therefore be replaced by the analysis of viscosity solutions to an HJB equation. Both mathematical (Belak et al., 2015; Chaudhari et al., 2018; Yuan et al., 2018) and numerical (Neilan et al., 2017) approaches have been employed to analyse these viscosity solutions.
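As a concrete (if generic) instance of the reduction just described, consider a one-dimensional controlled SDE with a discounted performance index; the notation below is assumed for illustration and is not taken from the paper under discussion.

```latex
% For the controlled SDE  dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t
% and discount rate \delta > 0, define the value function
%   V(x) = \sup_{u} \mathbb{E}\!\left[\int_0^\infty e^{-\delta t}
%          f(X_t, u_t)\,dt \,\middle|\, X_0 = x\right].
% The dynamic programming principle formally yields the stationary HJB equation
\[
  \delta V(x)
  = \sup_{u \in U}\Big\{ b(x,u)\,V'(x)
    + \tfrac{1}{2}\,\sigma^{2}(x,u)\,V''(x) + f(x,u) \Big\},
\]
% which is degenerate elliptic wherever \sigma(x,u) may vanish and, in
% general, admits only viscosity solutions.
```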
Controllability of neutral stochastic evolution equations driven by fBm with Hurst parameter less than 1/2
Published in International Journal of Systems Science, 2019
Zhi Li, Yuan Yuan Jing, Liping Xu
The theory of controllability originates from the seminal work of Kalman (1960). Many results on various types of deterministic differential equations have been obtained; for more details on this topic, one can refer to Bashirov and Mahmudov (1999), Chang (2007), Debbouche and Torres (2013), and Li, Wang, and Zhang (2006). Stochastic control theory is a stochastic generalisation of classical control theory. In the past decade, the controllability of different kinds of stochastic systems driven by Brownian motion has been well investigated; we refer to Mahmudov (2001, 2002), Subalakshmi and Balachandran (2009), Sakthivel, Mahmudov, and Lee (2009), and the references therein.
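For reference, the classical notion from Kalman (1960) that these stochastic results generalise can be checked numerically as a rank condition; the system matrices below are hypothetical examples.

```python
import numpy as np

# Kalman rank test for the classical (deterministic) notion of
# controllability that the stochastic theory generalises.
def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
rank = np.linalg.matrix_rank(C)
print("controllable" if rank == A.shape[0] else "uncontrollable",
      f"(rank {rank} of {A.shape[0]})")
```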