Laplace Transforms
Published in Steven G. Krantz, Differential Equations, 2020
The concept of a transform is that it turns a given function into another function. We are already acquainted with several transforms:

The derivative $D$ takes a differentiable function $f$ (defined on some interval $(a, b)$) and assigns to it a new function $Df = f'$.

The integral $I$ takes a continuous function $f$ (defined on some interval $[a, b]$) and assigns to it a new function $If(x) = \int_a^x f(t)\,dt$.

The multiplication operator $M_\varphi$, which multiplies any given function $f$ on the interval $[a, b]$ by a fixed function $\varphi$ on $[a, b]$, is a transform: $M_\varphi f(x) = \varphi(x) \cdot f(x)$.
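To make the operator viewpoint concrete, here is a minimal Python sketch (not from Krantz's text; the names derivative, integral, and multiply and the numerical schemes are illustrative assumptions) treating each of the three transforms as a higher-order function that maps a function to a new function:

```python
import numpy as np

def derivative(f, h=1e-6):
    """Transform D: maps f to an approximation of f' (central difference)."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, n=1000):
    """Transform I: maps f to If(x) = integral of f from a to x (trapezoidal rule)."""
    def If(x):
        t = np.linspace(a, x, n)
        y = f(t)
        return float(np.sum((y[:-1] + y[1:]) / 2 * np.diff(t)))
    return If

def multiply(phi):
    """Transform M_phi: maps f to the pointwise product phi * f."""
    return lambda f: (lambda x: phi(x) * f(x))

f = np.sin
print(derivative(f)(1.0))        # ~ cos(1) = 0.5403...
print(integral(f, 0.0)(np.pi))   # ~ 2.0, since the integral of sin over [0, pi] is 2
print(multiply(np.exp)(f)(1.0))  # e * sin(1) = 2.2874...
```

In each case the input is a function and the output is a function, which is exactly what distinguishes a transform from an ordinary pointwise operation.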
Distributed optimal control for continuous-time nonaffine nonlinear interconnected systems
Published in International Journal of Control, 2022
Behzad Farzanegan, Amir Abolfazl Suratgar, Mohammad Bagher Menhaj, Mohsen Zamani
We suppose that the PE condition is satisfied. The approximate values for and can then be obtained from (52). We substitute the values obtained by solving (52) into (46) and (47), and then invoke (34) and (35) to provide an update rule for as , where . Choosing a convex, twice differentiable function for guarantees that a unique continuous function exists such that . It is also worth noticing that , where and is a continuous function. Thus, the problem (54) is solvable. However, it is not computationally efficient to solve this optimisation problem at every iteration, so we appeal to (50) for calculating . The sequences obtained from (50) and (54) converge to each other for a sufficiently large value of .
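The symbols in this excerpt did not survive extraction, but the role played by the PE condition can be illustrated generically: persistence of excitation keeps the regressor Gram matrix invertible, so a least-squares identification step like (52) has a unique solution. The following numpy sketch is illustrative only and does not reproduce equations (46)-(54) of the paper; the regressor phi and the weights are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis functions and "true" weights, standing in for the
# approximator whose weights are identified by a least-squares step.
def phi(x):
    return np.array([x, x**2, np.sin(x)])

w_true = np.array([1.5, -0.7, 2.0])

# Collect samples; a persistently exciting signal makes Phi^T Phi invertible.
xs = rng.uniform(-2, 2, size=200)
Phi = np.stack([phi(x) for x in xs])          # (200, 3) regressor matrix
y = Phi @ w_true + 0.01 * rng.standard_normal(200)

# PE in effect: smallest eigenvalue of the Gram matrix is bounded away from zero.
gram = Phi.T @ Phi
print("min eigenvalue:", np.linalg.eigvalsh(gram).min())

# Unique least-squares weight estimate, solvable because PE holds.
w_hat = np.linalg.solve(gram, Phi.T @ y)
print("w_hat:", w_hat)  # close to w_true
```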
Inexact basic tensor methods for some classes of convex optimization problems
Published in Optimization Methods and Software, 2022
Notation and generalities. In what follows, we denote by $\mathbb{E}$ a finite-dimensional real vector space, and by $\mathbb{E}^*$ its dual space, consisting of all linear functions on $\mathbb{E}$. The value of the linear function $s \in \mathbb{E}^*$ at point $x \in \mathbb{E}$ is denoted by $\langle s, x \rangle$. The most important example of a linear function is the gradient $\nabla f(x)$ of the differentiable function $f$ at point $x$. The Hessian $\nabla^2 f(x)$ can be seen as a self-adjoint linear operator from $\mathbb{E}$ to $\mathbb{E}^*$.
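A hedged numerical sketch of these objects (the test function and the finite-difference derivatives are illustrative, not from the paper): the gradient acts on vectors through the pairing $\langle \cdot, \cdot \rangle$, and the Hessian, as a self-adjoint operator, satisfies $\langle \nabla^2 f(x) u, v \rangle = \langle u, \nabla^2 f(x) v \rangle$ up to discretization error.

```python
import numpy as np

def f(x):
    # A smooth test function on R^3 (illustrative choice).
    return np.sin(x[0]) * x[1] + 0.5 * x[2] ** 2 * x[0]

def grad(f, x, h=1e-5):
    """Gradient of f at x: acts on v through the pairing <grad f(x), v>."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Hessian of f at x, built column by column from gradient differences."""
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros_like(x); e[j] = h
        H[:, j] = (grad(f, x + e) - grad(f, x - e)) / (2 * h)
    return H

x = np.array([0.3, -1.2, 0.7])
H = hessian(f, x)
u, v = np.array([1.0, 0.0, 2.0]), np.array([-1.0, 3.0, 0.5])
# Self-adjointness: <H u, v> and <u, H v> agree up to discretization error.
print(np.dot(H @ u, v), np.dot(u, H @ v))
```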
An inertial method for solving generalized split feasibility problems over the solution set of monotone variational inclusions
Published in Optimization, 2022
C. Izuchukwu, G. N. Ogwo, O. T. Mewomo
Let $F$ be a convex and continuously differentiable function, and $G$ be a convex and lower semi-continuous function. Consider the following class of Split Linear Inverse Problem (SLIP) (see [48] for the case of the linear inverse problem in a finite-dimensional space of real numbers): where is a bounded linear operator and is any nonlinear mapping. Let the solution set of problem (55) be $\Upsilon$. We know that if $F$ is convex and continuously differentiable, then the gradient $\nabla F$ of $F$ is monotone and continuous (see [48]). Also, if $G$ is convex and lower semi-continuous, then the subdifferential $\partial G$ of $G$ is maximal monotone (see [49]). Thus, setting and in Algorithm 3.3, the algorithm reduces to the following method for solving the SLIP (55).
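Algorithm 3.3 itself is not reproduced in this excerpt, so the following is a minimal sketch of the classical forward-backward (proximal-gradient) iteration, which uses exactly the two ingredients noted above, the monotone continuous gradient $\nabla F$ and the maximal monotone subdifferential $\partial G$, instantiated for the hypothetical choices $F(x) = \tfrac{1}{2}\|Ax - b\|^2$ and $G = \lambda \|\cdot\|_1$; it is not the paper's inertial method.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))   # hypothetical bounded linear operator
b = rng.standard_normal(30)
lam = 0.1

# Gradient of F(x) = 0.5*||Ax - b||^2: monotone and Lipschitz continuous.
grad_F = lambda x: A.T @ (A @ x - b)
# Proximal map of G = lam*||.||_1 (soft-thresholding), the resolvent of the
# maximal monotone operator dG.
prox_G = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g * lam, 0.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 1/L, L = ||A||^2
x = np.zeros(10)
for _ in range(500):
    # Forward (gradient) step on F, backward (proximal) step on G.
    x = prox_G(x - gamma * grad_F(x), gamma)

print(x)  # approximate minimizer of F + G
```

The forward step uses only $\nabla F$ and the backward step only the resolvent of $\partial G$, which is why the monotonicity properties quoted above are exactly what the convergence analysis needs.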