Algorithmic Optimization
Published in Abul Hasan Siddiqi, Mohamed Al-Lawati, Messaoud Boulbrachene, Modern Engineering Mathematics, 2017
The preconditioned conjugate gradient method is often used in the solution of linear systems in which the matrix is sparse and positive definite. These systems arise from the discretization of boundary value problems by finite element or finite difference methods. The larger the system, the more impressive the conjugate gradient method becomes, since it significantly reduces the number of iterations required. In these systems, the preconditioning matrix C is approximately equal to L in the Choleski factorization LLᵗ of A. Generally, small entries in A are ignored and Choleski's method is applied to obtain what is called an incomplete LLᵗ factorization of A. Thus (C⁻¹)ᵗC⁻¹ ≈ A⁻¹, and a good approximation is obtained.
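A minimal dense-matrix sketch of this idea, assuming NumPy; the zero-fill incomplete factorization and the solver below are illustrative, not the book's own code:

```python
import numpy as np

def incomplete_cholesky(A):
    """Zero-fill incomplete Choleski: proceed as in the exact
    factorization A = L L^t, but update an entry of L only where A
    itself is nonzero, so all fill-in is dropped."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])            # pivot, assumed positive
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):             # Schur-complement update,
            for i in range(j, n):             # restricted to entries that
                if L[i, j] != 0.0:            # are already nonzero
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def pcg(A, b, L, tol=1e-10, maxiter=1000):
    """Conjugate gradients preconditioned with C = L, i.e. M = L L^t."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = np.linalg.solve(L.T, np.linalg.solve(L, r))   # z = (L L^t)^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = np.linalg.solve(L.T, np.linalg.solve(L, r))
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: SPD tridiagonal system from a 1-D finite difference stencil.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, incomplete_cholesky(A))
print(np.linalg.norm(A @ x - b))              # small residual
```

For a tridiagonal matrix the factorization has no fill-in, so the incomplete factor coincides with the exact one and the iteration converges almost immediately; on matrices with more fill, C only approximates L and the preconditioner trades accuracy for sparsity.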
Review of Basic Laws and Equations
Published in Pradip Majumdar, Computational Methods for Heat and Mass Transfer, 2005
The basic conjugate gradient (CG) method works poorly for an ill-conditioned system. The convergence speed of the conjugate gradient method can be increased by preconditioning. Preconditioning means that we change variables to obtain a new system Ãx̃ = c whose coefficient matrix Ã has a much improved eigenvalue distribution, or condition number cond(Ã), compared with that of the original system Ax = c. The new system is obtained by substituting x = Lᵀx̃ into the original system, giving ALᵀx̃ = c.
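The effect on the condition number can be seen in a small sketch (NumPy assumed). The excerpt's change of variables involves a factor L; the snippet below uses symmetric diagonal (Jacobi) scaling, a particularly simple choice, and the matrix entries are made up for illustration:

```python
import numpy as np

# Ill-conditioned SPD matrix: entries of very different magnitude.
A = np.array([[1.0e4, 1.0],
              [1.0,   1.0e-2]])

# Change of variables with L = diag(A)^{1/2}: the transformed matrix
# L^{-1} A L^{-1} has unit diagonal and a far smaller condition number.
Linv = np.diag(1.0 / np.sqrt(np.diag(A)))
A_tilde = Linv @ A @ Linv

print(np.linalg.cond(A))        # ~1e6
print(np.linalg.cond(A_tilde))  # ~1.2
```

The conjugate gradient iteration applied to the scaled system then converges in far fewer steps than on the original one.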
Linear Systems
Published in A. C. Faul, A Concise Introduction to Numerical Analysis, 2018
Only in exact arithmetic is the number of iterations at most n. The conjugate gradient method is sensitive to even small perturbations. In practice, most directions will not be conjugate and the exact solution is not reached. However, an acceptable approximation is usually reached within a number of iterations that is small compared to the problem size. The speed of convergence is typically linear, but it depends on the condition number of the matrix A: the larger the condition number, the slower the convergence. In the following section we analyze this further and show how to improve the conjugate gradient method.
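A small experiment (NumPy assumed; the plain CG routine and the test matrices are illustrative) makes the dependence on the condition number concrete:

```python
import numpy as np

def cg(A, b, tol=1e-8, maxiter=5000):
    """Plain conjugate gradients; returns the iterate and iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for k in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            return x, k + 1
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxiter

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal basis
b = rng.standard_normal(n)
for kappa in (1.0e1, 1.0e3):
    # SPD matrix with eigenvalues spread over [1, kappa], so cond(A) = kappa.
    A = Q @ np.diag(np.linspace(1.0, kappa, n)) @ Q.T
    _, iters = cg(A, b)
    print(f"cond(A) = {kappa:.0e}: converged in {iters} iterations")
```

With the small condition number the iteration stops long before n = 200 steps; with the large one it needs many more, in line with the linear convergence rate discussed above.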
Low-complexity MU-MIMO-GFDM joint receivers
Published in International Journal of Electronics, 2023
Hsuan-Fu Wang, Fang-Biau Ueng, Yun-Hsuan Sung
The second method uses the conjugate gradient method to decompose . The method multiplies the complex vector by the estimated vector , and is the same complex vector of length MN in . The conjugate gradient method mainly exploits conjugacy to speed up convergence and uses the gradient to provide an orthogonal basis. The first conjugate vector is obtained by a matrix–vector multiplication and an inner product operation, and finally the accurate value is obtained. Because is a positive definite matrix, the conjugate gradient iteration algorithm can be applied to solve equation (37).
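A minimal sketch of the conjugate gradient iteration for a complex Hermitian positive definite system of the kind that arises in equation (37); the routine is illustrative rather than the paper's receiver code, and inner products take the complex conjugate via np.vdot:

```python
import numpy as np

def cg_complex(A, b, tol=1e-10, maxiter=500):
    """CG for a Hermitian positive definite complex system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = np.vdot(r, r).real                    # conjugate inner product
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / np.vdot(p, Ap).real       # real and positive for HPD A
        x += alpha * p
        r -= alpha * Ap
        rr_new = np.vdot(r, r).real
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p              # next A-conjugate direction
        rr = rr_new
    return x
```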
A conjugate directions-type procedure for quadratic multiobjective optimization
Published in Optimization, 2022
Ellen H. Fukuda, L. M. Graña Drummond, Ariane M. Masuda
Recall that for general nonlinear scalar-valued optimization problems the related approach (the conjugate gradient method) does not guarantee convergence in a finite number of iterations. The same holds for its vector optimization versions proposed in [8–10], which deal with the unconstrained minimization problem for nonlinear vector-valued objective functions. In those works, the authors extend conjugate gradient methods, constructing the set of directions iteratively by means of the steepest descent directions proposed in [11]. The steplengths are computed using inexact line search procedures, and these methods converge to critical points. Our paper clarifies what happens when more general directions are considered in multiobjective quadratic problems with exact line searches for computing the steplength. As we have already mentioned, the scheme has finite termination and furnishes weak Pareto (Pareto) optimal solutions.
Space mapping techniques for the optimal inflow control of transmission lines
Published in Optimization Methods and Software, 2018
A generic extension of the classical gradient method is the preconditioned conjugate gradient method [21,25,29]. Instead of applying , we set , where is a preconditioning matrix typically approximating the inverse of the Hessian . This method will be called CG in the following. A survey of different formulas for the computation of can be found in [29]. For implementation purposes, we use a step size satisfying the Wolfe conditions and for both gradient-type methods, GM and CG. The preconditioning matrix is updated by a BFGS update.
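A minimal sketch of this scheme, assuming SciPy's Wolfe line search is acceptable; the routine, its names, and the quadratic test problem are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import line_search

def preconditioned_gradient(f, grad, x0, maxiter=100, tol=1e-8):
    """Gradient method with direction -P g, where the preconditioning
    matrix P approximates the inverse Hessian and is maintained by a
    BFGS update; the step size satisfies the Wolfe conditions."""
    x = x0.astype(float)
    n = len(x)
    P = np.eye(n)                         # initial preconditioning matrix
    g = grad(x)
    for _ in range(maxiter):
        if np.linalg.norm(g) < tol:
            break
        d = -P @ g                        # preconditioned descent direction
        alpha = line_search(f, grad, x, d)[0]   # Wolfe step size
        if alpha is None:                 # fall back if the search fails
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                    # BFGS update of P
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            P = V @ P @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: a simple convex quadratic.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(preconditioned_gradient(f, grad, np.ones(3)))   # ~[0, 0, 0]
```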