Numerical Methods and Computational Tools
Published in Raj P. Chhabra, CRC Handbook of Thermal Engineering Second Edition, 2017
Atul Sharma, Salil S. Kulkarni, K. Hrisheekesh, Amit Agrawal, Shyamprasad Karagadde, Amitabh Bhattacharya, Rajneesh Bhardwaj
As mentioned in Section 5.4.6.1, one can use either a direct solver or an iterative solver to solve the set of linear equations arising in the FEM. The choice of solver is problem dependent. Iterative solvers require less memory than direct solvers because there is no fill-in, and they can be considerably faster for large problems. On the other hand, direct solvers are more robust and are suitable for “difficult” problems. As mentioned before, the performance of an iterative solver depends heavily on the choice of preconditioner. For some problems, finding a preconditioner may be computationally more expensive than using a direct solver. In addition, direct methods are more effective when a set of equations is to be solved for multiple right-hand-side vectors: in such a case, the matrix factorization needs to be computed only once.
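The point about multiple right-hand sides can be illustrated with a short sketch (not from the handbook; it assumes SciPy's sparse solvers and a hypothetical 1D Poisson model matrix): the sparse LU factorization is computed once and reused for every right-hand side, whereas the Krylov iteration must be rerun for each one.

```python
# Sketch: direct factorization reused for many right-hand sides vs. repeated iterative solves.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
# Hypothetical model problem: tridiagonal 1D Poisson matrix (SPD).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
rhs_list = [np.random.rand(n) for _ in range(5)]

# Direct approach: factorize once (fill-in is stored), then solve cheaply per right-hand side.
lu = spla.splu(A)
x_direct = [lu.solve(b) for b in rhs_list]

# Iterative approach: each right-hand side requires a fresh conjugate gradient run.
x_iter = [spla.cg(A, b)[0] for b in rhs_list]

print(float(np.linalg.norm(A @ x_direct[0] - rhs_list[0])))  # residual of the direct solve
```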
Numerical Solution of Systems of Nonlinear Equations
Published in Azmy S. Ackleh, Edward James Allen, Ralph Baker Kearfott, Padmanabhan Seshaiyer, Classical and Modern Numerical Analysis, 2009
Azmy S. Ackleh, Edward James Allen, Ralph Baker Kearfott, Padmanabhan Seshaiyer
For general matrices, iterative methods for solving linear systems of equations need to be preconditioned first. However, certain applications occurring in practice do not require preconditioning. For example, the linear systems arising from discretization of the heat equation do not require preconditioning for the Gauss-Seidel (and Jacobi and SOR) methods to converge. If “mild” nonlinearities (that is, nonlinearities that are not large in relation to the other terms in the equations) are introduced into such systems, we obtain nonlinear systems for which the Gauss-Seidel-Newton method will converge. Convergence, even local convergence, cannot be expected for general nonlinear systems when the Jacobi-Newton method, Gauss-Seidel-Newton method, or SOR-Newton method is used.
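A minimal sketch of the Gauss-Seidel-Newton idea (illustrative, not taken from the text; the model problem and function names are hypothetical): sweep through the unknowns and apply a single scalar Newton step to the i-th equation with respect to x_i, immediately reusing the freshest values of the other components.

```python
# Sketch of a Gauss-Seidel-Newton iteration for a mildly nonlinear system F(x) = 0.
import numpy as np

def gauss_seidel_newton(F, dFdx_ii, x0, sweeps=50, tol=1e-10):
    """F(x) returns the residual vector; dFdx_ii(x, i) returns dF_i/dx_i."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):
            # One Newton step on the i-th equation with respect to x_i only.
            x[i] -= F(x)[i] / dFdx_ii(x, i)
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Hypothetical mildly nonlinear discretized problem: -u'' + u**3 = 1, u(0) = u(1) = 0.
n, h = 20, 1.0 / 21
def F(u):
    up = np.concatenate(([0.0], u, [0.0]))  # homogeneous Dirichlet boundary values
    return np.array([(-up[i] + 2*up[i+1] - up[i+2]) / h**2 + up[i+1]**3 - 1.0
                     for i in range(n)])

def dFdu_ii(u, i):
    return 2.0 / h**2 + 3.0 * u[i]**2        # diagonal entry of the Jacobian

u = gauss_seidel_newton(F, dFdu_ii, np.zeros(n))
print(float(np.linalg.norm(F(u))))
```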
Iterative Methods for Solving Linear Systems
Published in Victor S. Ryaben’kii, Semyon V. Tsynkov, A Theoretical Introduction to Numerical Analysis, 2006
Victor S. Ryaben’kii, Semyon V. Tsynkov
Overall, the task of designing an efficient preconditioner is highly problem-dependent. One general approach is based on the availability of some a priori knowledge of where the matrix A originates. A typical example here is the aforementioned spectrally equivalent elliptic preconditioners. Another approach is purely algebraic and uses only the information contained in the structure of a given matrix A. Examples include incomplete factorizations (LU, Cholesky, modified unperturbed and perturbed incomplete LU), polynomial preconditioners (e.g., truncated Neumann series), and various ordering strategies, foremost the multilevel recursive orderings that are conceptually close to the idea of multigrid (Section 6.4). For further detail, we refer the reader to specialized monographs [Axe94, Saa03, vdV03].
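A short sketch of the purely algebraic approach (assuming SciPy's spilu and gmres routines and a hypothetical model matrix; not drawn from the cited monographs): an incomplete LU factorization of A is wrapped as a linear operator and supplied as the preconditioner to a Krylov method.

```python
# Sketch: incomplete LU factorization used as an algebraic preconditioner for GMRES.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)     # incomplete LU of A
M = spla.LinearOperator((n, n), matvec=ilu.solve)      # action of the approximate inverse

x, info = spla.gmres(A, b, M=M)
print(info)  # 0 indicates convergence
```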
Aggregation of clans to speed-up solving linear systems on parallel architectures
Published in International Journal of Parallel, Emergent and Distributed Systems, 2022
Dmitry A. Zaitsev, Tatiana R. Shmeleva, Piotr Luszczek
Distributed algorithms [30] are considered a specific direction in solving big linear systems by iterative methods [31], in which a few equations of the system are assigned to each compute node. A special communication matrix and a set of variables are inserted into the iterative scheme to provide communication among nodes in the process of solving a system [21]. Distributed algorithms are also combined with machine learning approaches using neural networks [32]. Preconditioners are widely applied in iterative methods to improve characteristics of the system matrix, such as its eigenvalue spectrum, and they impose a form of matrix partitioning as well. Some of the recent approaches to composing new distributed algorithms based on domain decomposition involve Petri nets [33], from which the notion of clans grew [4]. Note that a Petri net, as a bipartite graph, allows a rather simple representation of hypergraphs, which are traditionally applied to matrix partitioning [18].
Evaluating the performance of an Inexact Newton method with a preconditioner for dynamic building system simulation
Published in Journal of Building Performance Simulation, 2022
Zhelun Chen, Jin Wen, Anthony J. Kearsley, Amanda Pertzborn
We investigate the application of an NK method from the solver package NITSOL (Pernice and Walker 1998) that solves Equation (1) by solving Equation (2) iteratively using a Krylov subspace method. This method is an Inexact Newton method with Backtracking (INB) algorithm, in which the linear systems are solved using the Generalized Minimal RESidual (GMRES) method (Saad and Schultz 1986), a Krylov subspace method. The implementation allows the application of a preconditioner to accelerate and improve convergence of the iterative method. Details about the INB framework, the GMRES algorithm, and the preconditioning technique are summarized in the following subsections.
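A conceptually similar inexact Newton-Krylov iteration can be sketched with SciPy's newton_krylov (this is not NITSOL, and the toy system below merely stands in for Equation (1)): each Newton step is solved only approximately by a GMRES-type Krylov method.

```python
# Sketch: inexact Newton iteration with an inner GMRES-type Krylov solver.
import numpy as np
from scipy.optimize import newton_krylov

def residual(x):
    # Hypothetical nonlinear system F(x) = 0 standing in for Equation (1).
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

x = newton_krylov(residual, np.array([1.0, 1.0]), method="gmres", f_tol=1e-10)
print(x, residual(x))   # converges to the root (1, 2)
```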
An interior-point method-based solver for simulation of aircraft parts riveting
Published in Engineering Optimization, 2018
Maria Stefanova, Sergey Yakunin, Margarita Petukhova, Sergey Lupuleac, Michael Kokkolaras
Table 1 reports the number of conjugate gradient method iterations. Using the preconditioner reduces the number of iterations needed to solve the linear system from hundreds or thousands to just a few. This example shows that it is an appropriate and effective preconditioner, and that its use can dramatically decrease the number of computational operations.
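The effect can be reproduced in miniature with a hedged sketch (illustrative only; a simple Jacobi diagonal preconditioner and a hypothetical ill-conditioned matrix stand in for the preconditioner and system used in the article): the preconditioned conjugate gradient run needs far fewer iterations than the unpreconditioned one.

```python
# Sketch: counting conjugate gradient iterations with and without a Jacobi preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
d = np.logspace(0, 4, n)                      # widely varying diagonal -> poor conditioning
A = sp.diags([-np.ones(n-1), d + 2.0, -np.ones(n-1)], [-1, 0, 1], format="csr")
b = np.ones(n)

def count_iters(M=None):
    iters = []
    spla.cg(A, b, M=M, maxiter=10000, callback=lambda xk: iters.append(1))
    return len(iters)

M_jacobi = sp.diags(1.0 / A.diagonal())       # approximate inverse: diag(A)^{-1}
print("no preconditioner:    ", count_iters())
print("Jacobi preconditioner:", count_iters(M_jacobi))
```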