Iterative Methods
Published in Jeffery J. Leader, Numerical Analysis and Scientific Computation, 2022
Fortunately, it is common for large matrices occurring in practice to be sparse. For large, sparse matrices there are a number of iterative methods that, in combination with a smart storage system for the sparse matrix, can lead to considerably shorter computation times. As a rule, direct methods are O(n³) and iterative methods are, in the cases in which they are appropriate, O(n²). The goal for sparse matrices is always to get a method that is O(N), where N is the number of nonzero entries in a typical row of the matrix.
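As a rough illustration of this trade-off (not taken from the book), the SciPy sketch below solves a diagonally dominant tridiagonal test system with the conjugate gradient method; each iteration touches only the stored nonzeros of the CSR matrix, whereas a dense factorization would cost O(n³) and store all n² entries. The size n and the model matrix are arbitrary choices.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Illustrative sparse system: tridiagonal, symmetric positive definite,
# diagonally dominant, so CG converges in a handful of iterations.
n = 100_000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# A dense solve would need n*n storage and O(n^3) work; CG only ever
# multiplies by A, touching the ~3n stored nonzeros per iteration.
x, info = cg(A, b)
assert info == 0  # info == 0 means the iteration converged

print("residual norm:", np.linalg.norm(A @ x - b))
```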
Tools For Analysis
Published in James J Y Hsu, Nanocomputing, 2017
Besides the usual mathematical functions, MATLAB provides many utility functions such as easy 3-D plotting (EZSURF, EZPLOT3), eigenvalue solvers (EIG, EIGS), the Fast Fourier Transform (FFT), and Delaunay triangulation (DELAUNAY). There are built-in, specialized mathematical functions such as Bessel functions, BESSEL(H,I,K,J,Y); elliptic functions, ELLIP(J,KE); the error function (ERF); and the cosine integral (COSINT) and sine integral (SININT) functions. A quiver plot (QUIVER), which displays velocity vectors as arrows at specified coordinates, is also very useful. It is always advantageous to apply the sparse (SPARSE) function to generate matrices, which can then be stored in a compressed format. A full matrix can be converted to a sparse matrix to save storage and speed up computation. The next example demonstrates the use of an eigenvalue solver for a sparse matrix. It attempts to verify numerically the theorem that the eigenvalues of the N×N matrix with diagonal, upper-diagonal, and lower-diagonal terms at separate but constant values are governed by
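The example referred to is in MATLAB and is not reproduced in this excerpt. The sketch below redoes the same check in Python/SciPy, assuming the intended result is the standard closed form for a symmetric tridiagonal Toeplitz matrix, λ_k = a + 2b·cos(kπ/(N+1)); the values of a, b, and N are arbitrary.

```python
import numpy as np
from scipy.sparse import diags
from scipy.linalg import eigvalsh

# Tridiagonal Toeplitz matrix: constant diagonal a, constant off-diagonals b.
# Illustrative values; a, b, and N are not taken from the text.
a, b, N = 2.0, -1.0, 50
T = diags([b, a, b], offsets=[-1, 0, 1], shape=(N, N), format="csr")

# Standard closed form for the symmetric tridiagonal Toeplitz eigenvalues:
# lambda_k = a + 2*b*cos(k*pi/(N+1)), k = 1..N
k = np.arange(1, N + 1)
analytic = np.sort(a + 2 * b * np.cos(k * np.pi / (N + 1)))

# Dense solver here for all N eigenvalues; MATLAB's EIGS works on the sparse form.
computed = eigvalsh(T.toarray())
print("max |computed - analytic|:", np.max(np.abs(computed - analytic)))
```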
Linear Programming
Published in Michael W. Carter, Camille C. Price, Ghaith Rabadi, Operations Research, 2018
Not only do non-zero elements of A often appear in patterns, but more generally, we find the matrix A to be very sparse. A sparse matrix is one with a very large proportion of zero elements. A rule of thumb is that large linear programming models typically have only about 10% non-zero elements; some practitioners claim that 1%–5% is a more realistic range. This sparsity is not a surprising phenomenon when we consider that in any large organization, certain sets of products, people, or processes tend to operate in groups, and are therefore subject to local constraints. When such a problem is formulated, a sparse matrix results because each variable is involved in a relatively small number of the constraints.
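A small, purely illustrative SciPy sketch of this effect: a constraint matrix assembled from independent local blocks plus a few linking constraints lands in the 1%–5% density range quoted above. All group counts, sizes, and densities below are made up.

```python
import numpy as np
from scipy.sparse import block_diag, random, vstack

rng = np.random.default_rng(0)

# Hypothetical model: 20 "local" groups, each with 15 constraints on its own
# 30 variables, plus 10 linking constraints that touch every variable.
groups = [random(15, 30, density=0.30, random_state=rng) for _ in range(20)]
A_local = block_diag(groups, format="csr")             # 300 x 600, block structure
A_link = random(10, A_local.shape[1], density=0.50, random_state=rng)
A = vstack([A_local, A_link], format="csr")

density = A.nnz / (A.shape[0] * A.shape[1])
print(f"constraint matrix {A.shape}, nonzero density = {density:.1%}")
```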
An adaptive approach for compression format based on bagging algorithm
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
Cui Huanyu, Han Qilong, Wang Nianbin, Wang Ye
A sparse matrix is a basic matrix type commonly used to solve linear equations in numerical simulation and engineering applications, such as chemical processes [1], heat conduction [2], circuit simulation [3], and Computational Fluid Dynamics (CFD) [4]. A sparse matrix contains a large number of zero elements, which introduce computational and storage redundancy during matrix operations, reducing the iterative efficiency of numerical simulation and increasing memory pressure. Sparse matrix compression formats effectively reduce the storage of zero elements, improve the continuity of the data layout, and improve the parallel efficiency of SpMV and other related matrix operations on the GPU platform. In addition, the sparse matrices produced in numerical simulation and other fields exhibit a wide variety of sparsity patterns, which leads to diverse and complex matrix types. How to effectively improve the efficiency of parallel SpMV is therefore one of the hot and difficult problems in the field of high-performance computing.
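As a concrete illustration (not taken from the article), the sketch below shows one widely used compression format, CSR, and an SpMV loop written directly against its three arrays; this serial loop is the kernel that GPU implementations parallelize across rows or nonzeros.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Small example matrix with mostly zero entries.
A_dense = np.array([[4.0, 0.0, 0.0, 1.0],
                    [0.0, 3.0, 0.0, 0.0],
                    [0.0, 0.0, 5.0, 2.0],
                    [1.0, 0.0, 0.0, 6.0]])
A = csr_matrix(A_dense)

# CSR stores only the nonzeros: values, their column indices, and row pointers.
print("data   :", A.data)      # nonzero values, row by row
print("indices:", A.indices)   # column index of each stored value
print("indptr :", A.indptr)    # start of each row inside data/indices

# SpMV written against the compressed arrays: no zero element is ever touched.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.zeros(A.shape[0])
for row in range(A.shape[0]):
    for k in range(A.indptr[row], A.indptr[row + 1]):
        y[row] += A.data[k] * x[A.indices[k]]

assert np.allclose(y, A_dense @ x)
```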
Patient-specific fluid–structure interaction model of bile flow: comparison between 1-way and 2-way algorithms
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2021
Alex G. Kuchumov, Vasily Vedeneev, Vladimir Samartsev, Aleksandr Khairulin, Oleg Ivanov
For the FSI, the two solvers, ANSYS Mechanical and ANSYS CFX, were coupled and solved iteratively. In the one-way simulations, the full Newton method with 2400 time iterations and a time step of 1 second was used within the fluid solver to capture the gallbladder emptying time. A sparse matrix solver based on Gaussian elimination was used for solving the system. The fluid field was solved until the convergence criteria were reached. The calculated forces at the structure boundaries were then transferred to the structure side. The structure side was solved until its convergence criterion was reached: a relative change of 10⁻⁴ in the norm of all field variables. The solution was finished when the maximum number of time steps was reached.
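Schematically, the one-way procedure described above amounts to the loop sketched below; the solver functions are hypothetical placeholders standing in for the ANSYS CFX and ANSYS Mechanical calls, not real APIs, and the step count, step size, and tolerance echo the values in the text.

```python
# Schematic one-way FSI time-stepping loop. solve_fluid, interface_forces and
# solve_structure are hypothetical placeholders, not ANSYS API calls.
def one_way_fsi(solve_fluid, interface_forces, solve_structure,
                n_steps=2400, dt=1.0, tol=1e-4):
    fluid, structure = None, None
    for _ in range(n_steps):
        # Fluid field advanced by the full Newton method and iterated inside
        # the solver until the fluid convergence criteria are reached.
        fluid = solve_fluid(fluid, dt)

        # Forces calculated at the structure boundaries are transferred
        # one way to the structure side.
        loads = interface_forces(fluid)

        # Structure side solved until the relative change in the norm of all
        # field variables drops below tol (1e-4 in the paper).
        structure = solve_structure(structure, loads, tol)

    # The solution is finished when the maximum number of time steps is reached.
    return fluid, structure
```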
Introduction of the Adding and Doubling Method for Solving Bateman Equations for Nuclear Fuel Depletion
Published in Nuclear Science and Engineering, 2023
Olin W. Calvin, Barry D. Ganapol, R. A. Borrelli
For performing matrix-matrix and matrix-vector multiplications, we used sparse matrix multiplication routines implemented in Griffin (Ref. 8). For performing any matrix inversions, which are needed for ADM, we used the DGETRF and DGETRI routines from LAPACK (Ref. 13). We note that the LAPACK routines are dense matrix operations, meaning that there is the potential to use an optimized sparse matrix inversion operation to reduce the cost of the matrix inversions required by ADM. However, for all practical uses of ADM, we anticipated that the matrix-matrix multiplication operations would be the more expensive part of the ADM solve.
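A minimal SciPy sketch of the same division of labor (not Griffin code): sparse routines handle the products, while the LAPACK DGETRF/DGETRI pair performs the dense inversion. The matrices below are random stand-ins, not depletion matrices.

```python
import numpy as np
from scipy.linalg.lapack import dgetrf, dgetri
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(1)

# Sparse matrix-matrix product: only stored nonzeros participate.
A = sparse_random(200, 200, density=0.05, random_state=rng, format="csr")
B = sparse_random(200, 200, density=0.05, random_state=rng, format="csr")
C = A @ B  # product of two CSR matrices stays sparse

# Dense inversion via the LAPACK routines named in the text:
# DGETRF computes the LU factorization, DGETRI inverts from the factors.
M = np.eye(200) + 0.01 * A.toarray()   # illustrative well-conditioned matrix
lu, piv, info = dgetrf(M)
assert info == 0
M_inv, info = dgetri(lu, piv)
assert info == 0
print("inverse check:", np.allclose(M @ M_inv, np.eye(200)))
```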