Introduction
Published in Thomas C. Weinacht, Brett J. Pearson, Time-Resolved Spectroscopy, 2018
As the number of degrees of freedom in the system increases, the level of precision with which it is possible to determine the eigenenergies and calculate the dynamics decreases. Eventually, it becomes impossible to rely on a combination of time-independent measurements and calculation to infer the time evolution of the system. For small molecules (2–3 atoms) in the gas phase, it is reasonable to calculate the dynamics in great detail, checking the results by comparing calculated eigenenergies to measured spectra. However, when one moves to molecules with ~10 atoms or more, there are many dynamics that cannot be reliably inferred from the combination of time-independent spectroscopy and calculation. Furthermore, when the system of interest is not isolated from its environment (e.g., the condensed phase), time-resolved spectroscopy provides dynamical information that is not available in any other way [3].
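The connection between eigenenergies and dynamics invoked here can be made concrete with a minimal sketch: for a small system, a superposition of two stationary states evolves with a beat at the difference frequency, so measured eigenenergies fix the time evolution. All numerical values below (the two eigenenergies and equal amplitudes) are assumed for illustration and use units where ℏ = 1.

```python
import numpy as np

# Illustrative two-level superposition; E1, E2 are assumed eigenenergies.
E = np.array([1.0, 1.5])               # eigenenergies (hbar = 1)
c = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition amplitudes

def coherence(t):
    """Overlap |<psi(0)|psi(t)>|^2, which beats at frequency E2 - E1."""
    amps = c * np.exp(-1j * E * t)
    return abs(np.vdot(c, amps)) ** 2

# The beat period follows directly from the eigenenergy spacing, so for
# this small system the dynamics are fully inferred from the spectrum.
period = 2 * np.pi / (E[1] - E[0])
print(coherence(0.0))          # full overlap at t = 0
print(coherence(period / 2))   # vanishing overlap half a beat later
print(coherence(period))       # recurrence after one beat period
```

For many coupled degrees of freedom no such closed-form inference is available, which is the regime where time-resolved measurement becomes essential.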
The Step Response: First-Order Lag
Published in Richard J. Jagacinski, John M. Flach, Control Theory for Humans, 2018
The term dynamic refers to phenomena that produce time-changing patterns, the characteristic pattern at one time being interrelated with those at other times. The term is nearly synonymous with time-evolution or pattern of change. It refers to the unfolding of events in a continuing evolutionary process.
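The first-order lag named in this section title is the simplest such time-changing pattern: the output approaches a step input exponentially, with the present value determined by the past. A minimal sketch, with an assumed illustrative time constant:

```python
import math

# Minimal sketch of a first-order lag's unit-step response,
# y' = (u - y)/tau with u = 1 and y(0) = 0.  tau is an assumed value.
def first_order_lag_step(tau, t):
    """Analytic step response 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

tau = 2.0
# After one time constant the output has covered ~63% of the distance
# to its final value; after five time constants it is essentially there.
print(first_order_lag_step(tau, tau))      # ~0.632
print(first_order_lag_step(tau, 5 * tau))  # ~0.993
```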
Optimal Control
Published in Jitendra R. Raol, Ramakalyan Ayyagari, Control Systems, 2020
Optimal control refers to the class of problems in which one needs to minimize (or sometimes maximize) a function of the state and input variables; this function is often called a functional and is, in most cases, nonlinear. The variables involved are the state (or phase) variables and the control (input) variables. In general, the time-evolution (or -propagation) of the states is governed by the control input and the dynamics of the underlying system; the governing mechanism is a set of differential or difference equations. Interestingly, owing to natural physical limits in the dynamics of the real systems that these equations model, the control as well as the state variables are subject to constraints, for example, a limit on their maximum amplitude (as we have seen in Chapter 5). One important constraint is that, while one should and can find an optimal control u(t) for a given system, the original dynamic system's differential/difference equation must still be satisfied with this optimal signal u(t) as the control input. Another important requirement is that the optimal control input, when applied to a dynamic system, especially one already operating in a closed loop, should not render the overall closed-loop system unstable. There is also typically a limit on the amplitude of the optimal control input, since large inputs can damage the physical system. Many problems in optimal control are non-classical because of these constraints on the state and control variables. Thus, the problem of optimal control is to determine the control signals that make a dynamic system satisfy its physical constraints while minimizing (or maximizing) an appropriately chosen performance criterion based on the functional mentioned above [3].
Often, the resulting set of equations is nonlinear and may require linearization or numerical methods to obtain the optimal control and state trajectories. The theory of optimal control has evolved as an extension of the theory and methods of the calculus of variations [3]. The development of this chapter is based on [3,4]. In most cases, we consider real vector spaces; their dimensions and meanings will be clear from the underlying states, outputs, and control variables.
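The ingredients listed above (a quadratic cost functional, a difference-equation constraint, and an amplitude limit on the input) can be sketched for the simplest case, a scalar linear-quadratic regulator. All system and cost parameters below are assumed illustrative values, and the saturation clip stands in for the amplitude constraint (strictly speaking, the clipped input is no longer the unconstrained optimum).

```python
import numpy as np

# Sketch of finite-horizon optimal control for a scalar linear system
# x[k+1] = a*x[k] + b*u[k], minimizing sum(q*x^2 + r*u^2).  a, b, q, r,
# the horizon N, and the limit u_max are assumed illustrative values.
a, b = 1.1, 0.5          # unstable open-loop dynamics (|a| > 1)
q, r = 1.0, 0.1          # state and control weights in the functional
N = 50                   # horizon length
u_max = 2.0              # amplitude constraint on the control input

# Backward Riccati recursion gives the time-varying feedback gains K[k].
P = q
K = np.zeros(N)
for k in reversed(range(N)):
    K[k] = a * b * P / (r + b * b * P)
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

# Forward simulation: by construction the feedback law satisfies the
# system's difference equation; clipping enforces the amplitude limit.
x = 3.0
for k in range(N):
    u = np.clip(-K[k] * x, -u_max, u_max)   # constrained control input
    x = a * x + b * u

print(abs(x))  # the regulated state is driven toward zero
```

The same backward-recursion structure carries over to vector states, where P becomes a matrix satisfying the discrete Riccati equation.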
Multiple Harmonic Source Location Using the Least Median of Squares Method with the Presence of Outliers in High Voltage Electric Power Systems
Published in Electric Power Components and Systems, 2019
Luis Alberto Hernández Armenta, David Romero Romero
The multiple-point methods can be categorized as dynamic [11–20] and static [21–35] methods. Dynamic methods provide utilities with information on the time evolution of the system and are able to track the harmonic content over time. The Kalman filter [11–14], independent component analysis [15,16], impedance network [17], and neural network [19,20] approaches are dynamic methods. Knowledge of the system parameters at each harmonic order is required to achieve accurate harmonic load location [2].
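The kind of harmonic tracking that makes the Kalman filter a "dynamic" method can be sketched minimally: estimate the in-phase/quadrature amplitudes of one harmonic of known frequency from noisy samples. The frequency, noise levels, and true amplitudes below are assumed values, not taken from the cited works.

```python
import numpy as np

# Minimal Kalman-filter harmonic tracker: estimate amplitudes (A, B) in
# z[k] = A*cos(w*t) + B*sin(w*t) + noise.  All parameters are assumed.
rng = np.random.default_rng(0)
w = 2 * np.pi * 5.0                 # 5th harmonic of a 1 Hz fundamental
dt = 1e-3
a_true, b_true = 2.0, -1.0          # true in-phase/quadrature amplitudes

x = np.zeros(2)                     # state estimate [A, B]
P = np.eye(2) * 10.0                # estimate covariance
Q = np.eye(2) * 1e-8                # process noise: amplitudes drift slowly
R = 0.05                            # measurement noise variance

for k in range(2000):
    t = k * dt
    z = (a_true * np.cos(w * t) + b_true * np.sin(w * t)
         + rng.normal(0.0, R ** 0.5))
    P = P + Q                       # predict (amplitudes ~ constant)
    H = np.array([np.cos(w * t), np.sin(w * t)])  # measurement row
    S = H @ P @ H + R               # innovation variance (scalar)
    Kg = P @ H / S                  # Kalman gain
    x = x + Kg * (z - H @ x)        # update estimate
    P = P - np.outer(Kg, H) @ P     # update covariance

print(x)  # estimate approaches the true amplitudes
```

Because the update runs sample by sample, the filter follows the harmonic content as it changes in time, which is the defining feature of the dynamic methods above.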
Modelling and stability analysis of a longitudinal wheel dynamics control loop with feedback delay
Published in Vehicle System Dynamics, 2022
Adam Horvath, Peter Beda, Denes Takacs
Numerical simulation of the full system can be used to validate its qualitative behaviour. Since stability is the focus of this study, the stability of the present system can easily be determined by time-domain signal analysis: if the time evolution of the state variables shows a growing tendency, with or without oscillations in the signals, the system is considered unstable; if the perturbation decays and a steady-state value is reached, the system can be considered stable.
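The decay-versus-growth check described above can be sketched for a generic discrete-time linear system; the matrices below are assumed illustrative examples (a contracting and an expanding rotation), not the wheel-dynamics model of the paper.

```python
import numpy as np

# Sketch of the time-domain stability check: simulate the perturbed
# state and compare the late-time amplitude with the initial one.
def is_stable_by_simulation(A, x0, steps=500):
    """Iterate x[k+1] = A @ x[k]; stable if the perturbation decayed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = A @ x
    return np.linalg.norm(x) < np.linalg.norm(x0)

rot = np.array([[np.cos(0.3), -np.sin(0.3)],
                [np.sin(0.3),  np.cos(0.3)]])
damped = 0.95 * rot    # decaying oscillation (stable)
growing = 1.05 * rot   # growing oscillation (unstable)

print(is_stable_by_simulation(damped, [1.0, 0.0]))   # True
print(is_stable_by_simulation(growing, [1.0, 0.0]))  # False
```

In practice the same comparison is applied to simulated time histories of the physical state variables rather than to an explicit system matrix.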
Transient wave diffraction around cylinders by a novel boundary element method based on Fourier-Laguerre expansions
Published in Ships and Offshore Structures, 2021
Ruipeng Li, Xiaobo Chen, Wenyang Duan
The Fourier-Laguerre expansion method proposed above is applied to study wave diffraction around a cylinder in the time domain. The time evolution due to any kind of disturbance, from an initial state (which may, for example, be stationary) to the steady state, can be simulated and analysed in the time domain.