Numerical Methods for Eigenvalue Problems
Published in Federico Milano, Ioannis Dassios, Muyang Liu, Georgios Tzounas, Eigenvalue Problems in Power Systems, 2020
Federico Milano, Ioannis Dassios, Muyang Liu, Georgios Tzounas
A coarse taxonomy of algorithms for the solution of non-symmetric eigenvalue problems is as follows:
- Vector iteration methods, which are in turn separated into single and simultaneous vector iteration methods. Single vector iteration methods include the power method and its variants, such as the Rayleigh quotient iteration. Simultaneous vector iteration methods include the subspace iteration method and its variants, such as the inverse subspace method.
- Schur decomposition methods, which mainly include the QR algorithm, the QZ algorithm, and their variants, such as the QR algorithm with shifts.
- Krylov subspace methods, which basically include the Arnoldi iteration and its variants, such as the implicitly restarted Arnoldi iteration and the Krylov-Schur method. Preconditioned extensions of the Lanczos algorithm, such as the non-symmetric versions of the Generalized Davidson and Jacobi-Davidson methods, also belong in this category.
- Contour integration methods, which basically include a moment-based Hankel method and a Rayleigh-Ritz-based projection method proposed by Sakurai and Sugiura, as well as the FEAST algorithm.
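The simplest member of the first category, the power method, can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the cited chapter; the test matrix, tolerance, and iteration cap are arbitrary choices.

```python
import numpy as np

def power_method(A, num_iters=500, tol=1e-10, seed=0):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new  # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            lam, v = lam_new, v_new
            break
        lam, v = lam_new, v_new
    return lam, v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # eigenvalues 5 and 2
lam, v = power_method(A)
print(round(lam, 6))  # → 5.0
```

The Rayleigh quotient iteration mentioned in the excerpt replaces the fixed matrix-vector product with a shifted inverse solve, trading a linear solve per step for much faster convergence.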
Simulation of Analog and RF Circuits and Systems
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC Implementation, Circuit Design, and Process Technology, 2018
Jaijeet Roychowdhury, Alan Mantooth
An important issue with preconditioned iterative linear solvers, especially those based on Krylov subspace methods, is that they require a good preconditioner to converge reliably in a few iterations. The convergence of the iterative linear method is accelerated by applying a preconditioner J̃, replacing the original system Jc = d with the preconditioned system J̃⁻¹Jc = J̃⁻¹d, which has the same solution. For robust and efficient convergence, the preconditioner matrix J̃ should be in some sense a good approximation of J, and also "easy" to invert, usually with a direct method such as LU factorization. Finding good preconditioners that work well for a wide variety of circuits is a challenging task, especially when the nonlinearities become strong.
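A minimal SciPy sketch of this idea: an incomplete LU factorization of J plays the role of J̃, and applying J̃⁻¹ is wrapped as a linear operator passed to GMRES. The tridiagonal test matrix, drop tolerance, and stopping tolerance are illustrative assumptions, not values from the chapter.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric sparse test system J c = d (stand-in for a circuit Jacobian).
n = 200
J = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
d = np.ones(n)

# Incomplete LU factorization of J serves as the preconditioner J~;
# M applies J~^{-1} to a vector, i.e. an approximate direct solve with J.
ilu = spla.spilu(J, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

c, info = spla.gmres(J, d, M=M, atol=1e-10)
print(info, np.linalg.norm(J @ c - d) < 1e-3)  # → 0 True
```

Because the test matrix is tridiagonal, the incomplete factorization is essentially exact and GMRES converges almost immediately; for real circuit Jacobians the gap between J̃ and J is what governs the iteration count.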
Krylov Subspace Methods for Numerically Solving Partial Differential Equations
Published in Mangey Ram, Recent Advances in Mathematics for Engineering, 2020
Santosh Dubey, Sanjeev K. Singh, Anter El-Azab
The Krylov subspace is a suitable space in which to look for the approximate solution of a system of linear equations, because the solution has a natural representation as a member of a Krylov subspace. In fact, if the dimension of this space is small, the solution can be found in very few iterations.
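The point can be made concrete with a small NumPy example: the Krylov subspace K_m(A, b) = span{b, Ab, ..., A^(m-1)b}, and when A has few distinct eigenvalues the exact solution of Ax = b already lies in a low-dimensional Krylov subspace. The matrix and dimensions below are illustrative assumptions.

```python
import numpy as np

def krylov_basis(A, b, m):
    """Return an orthonormal basis of K_m(A, b) = span{b, Ab, ..., A^(m-1) b}."""
    n = len(b)
    K = np.empty((n, m))
    v = b.copy()
    for j in range(m):
        K[:, j] = v
        v = A @ v
    Q, _ = np.linalg.qr(K)  # orthonormalize the raw Krylov vectors
    return Q

# A has only two distinct eigenvalues, so x = A^{-1} b lies in K_2(A, b).
A = np.diag([2.0, 2.0, 2.0, 5.0])
b = np.array([1.0, 1.0, 1.0, 1.0])
Q = krylov_basis(A, b, 2)
# Minimize ||A Q y - b|| over y, i.e. solve within the 2-dimensional subspace.
y, *_ = np.linalg.lstsq(A @ Q, b, rcond=None)
x = Q @ y
print(np.allclose(A @ x, b))  # → True: exact solution found in 2 dimensions
```

In general the number of iterations a Krylov method needs is tied to how well the solution is captured by K_m for small m, which is why clustered eigenvalues (often arranged by a preconditioner) give fast convergence.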
Development and Optimisation of a DNS Solver Using Open-source Library for High-performance Computing
Published in International Journal of Computational Fluid Dynamics, 2021
Hamid Hassan Khan, Syed Fahad Anwer, Nadeem Hasan, Sanjeev Sanghi
The solution of the pressure Poisson equation is expensive on account of its elliptic nature. Therefore, an efficient iterative solver, the Generalised Minimal Residual (GMRES) method proposed by Saad and Schultz (1986), is employed in the present work. This iterative method is a preconditioned Krylov solver often applied to large sets of linear equations. GMRES is a projection method: it extracts the approximate solution of Ax = b by projecting onto the Krylov subspace K. The vector in the Krylov subspace K approximates the solution of Ax = b with minimal residual. GMRES builds a sequence of orthogonal vectors to obtain the solution of the non-symmetric linear system.
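The projection described above can be sketched as one GMRES cycle: the Arnoldi process builds the orthogonal vectors spanning K, and the minimal-residual solution reduces to a small least-squares problem with the resulting Hessenberg matrix. This is a didactic sketch of the standard algorithm, not the solver used in the paper; the test matrix is an arbitrary nonsymmetric example.

```python
import numpy as np

def gmres_sketch(A, b, m):
    """One GMRES cycle: Arnoldi builds an orthonormal basis V of K_m(A, b),
    then ||b - A x|| is minimized over x in K_m via a small least-squares
    problem with the (m+1) x m Hessenberg matrix H."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:          # happy breakdown: exact solution found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Minimize ||beta * e1 - H y||; the approximate solution is x = V_m y.
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return V[:, :m] @ y

A = np.diag([1.0, 2.0, 3.0, 4.0]) + np.triu(np.ones((4, 4)), 1)  # nonsymmetric
b = np.ones(4)
x = gmres_sketch(A, b, 4)
print(np.allclose(A @ x, b))  # → True: full subspace gives the exact solution
```

Production solvers restart this cycle after m steps and apply a preconditioner, but the projection-plus-least-squares core is exactly what is described in the excerpt.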
Exponential Time Differencing Schemes for Fuel Depletion and Transport in Molten Salt Reactors: Theory and Implementation
Published in Nuclear Science and Engineering, 2022
Zack Taylor, Benjamin S. Collins, G. Ivan Maldonado
Krylov subspace approximations are a class of popular methods used in sparse matrix algorithms. The idea of Krylov subspace methods is to project the sparse matrix into a lower-dimensional subspace. The new lower-dimensional projection is of size m × m, where m ≪ n. Because the projected matrix is of lower dimension, calculating its matrix exponential is much faster. It is important to note that Krylov subspace methods can be used only as an operation on a vector: direct calculation of the full matrix exponential is not possible [33].
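A sketch of the standard Krylov approximation of the exponential acting on a vector: Arnoldi projects A onto an m-dimensional subspace, and exp(A)v ≈ β V_m exp(H_m) e_1, so only the small m × m exponential is computed. This is a generic illustration under assumed matrix and subspace sizes, not the implementation from the paper.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, m):
    """Approximate exp(A) @ v via the m-dimensional Krylov subspace K_m(A, v):
    exp(A) v ~= beta * V_m exp(H_m) e_1, with H_m the small m x m Hessenberg
    projection of A and beta = ||v||."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Exponential of the small projected matrix instead of the full n x n one.
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

n = 300
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)  # sparse-structured test matrix
v = np.ones(n)
approx = arnoldi_expm_action(A, v, 30)
exact = expm(A) @ v
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact) < 1e-8)  # → True
```

Only matrix-vector products with A are needed, which is precisely why the method acts on a vector rather than forming exp(A) itself.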
GMRES based numerical simulation and parallel implementation of multicomponent multiphase flow in porous media
Published in Cogent Engineering, 2020
Saltanbek T. Mukhambetzhanov, Danil V. Lebedev, Nurislam M. Kassymbek, Timur S. Imankulov, Bazargul Matkerim, Darkhan Zh. Akhmed-Zaki
There are several well-known methods for solving systems of algebraic equations of this type. Direct methods, such as Gaussian elimination or LU decomposition, are not suitable for systems with sparse matrices, since there is the possibility of overflow (Higham, 2011). The conjugate gradient method (CG) is designed for systems with symmetric matrices, and the biconjugate gradient method (BiCG) has slow convergence (Van der Vorst, 2003). In this work, system (8) was solved by the iterative GMRES method, which is a widely used Krylov subspace method (Saad, 2003).