Calculating
Published in Victor A. Bloomfield, Using R for Numerical Analysis in Science and Engineering, 2018
The functions solve and eigen are central to numerical analysis of linear algebraic systems. These and related matrix functions are handled in R via the standard LAPACK code. According to its website, http://www.netlib.org/lapack/, LAPACK is written in Fortran 90 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision. The inverse of a square matrix is computed using the solve function.
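As an illustration of the underlying routines, here is a minimal C sketch (not from the book; the 3x3 and 2x2 data are made up) that calls LAPACK directly through its C interface, LAPACKE. LAPACKE_dgesv plays the role of R's solve (it LU-factors A and solves Ax = b), and LAPACKE_dgeev plays the role of eigen for a general real matrix.

/* Hedged sketch: solving Ax = b and computing eigenvalues via LAPACKE,
 * the C interface to the same LAPACK drivers that R's solve() and
 * eigen() ultimately rely on. Data are illustrative.
 * Compile with, e.g.: cc lapack_demo.c -llapacke -llapack -lblas */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    /* Solve A x = b (row-major storage); dgesv overwrites A with its
     * LU factors and b with the solution x. */
    double A[9] = { 4.0, 1.0, 0.0,
                    1.0, 3.0, 1.0,
                    0.0, 1.0, 2.0 };
    double b[3] = { 1.0, 2.0, 3.0 };
    lapack_int ipiv[3];
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, b, 1);
    if (info != 0) { fprintf(stderr, "dgesv: info = %d\n", (int)info); return 1; }
    printf("x = %g %g %g\n", b[0], b[1], b[2]);

    /* Eigenvalues of a general real matrix; 'N','N' skips the left and
     * right eigenvectors. wr/wi receive real and imaginary parts. */
    double B[4] = { 2.0, 1.0,
                    1.0, 2.0 };
    double wr[2], wi[2], vdummy[4];     /* vdummy is unused with 'N','N' */
    info = LAPACKE_dgeev(LAPACK_ROW_MAJOR, 'N', 'N', 2, B, 2,
                         wr, wi, vdummy, 2, vdummy, 2);
    if (info != 0) { fprintf(stderr, "dgeev: info = %d\n", (int)info); return 1; }
    printf("eigenvalues: %g%+gi and %g%+gi\n", wr[0], wi[0], wr[1], wi[1]);
    return 0;
}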
LAPACK
Published in Leslie Hogben, Richard Brualdi, Anne Greenbaum, Roy Mathias, Handbook of Linear Algebra, 2006
Zhaojun Bai, James Demmel, Jack Dongarra, Julien Langou, Jenny Wang
There have been a number of extensions of LAPACK. LAPACK95 is a Fortran 95 interface to the Fortran 77 LAPACK [LAP95]. The CLAPACK and JLAPACK libraries are built using the Fortran-to-C (f2c) and Fortran-to-Java (f2j) conversion utilities, respectively [CLA], [JLA]. LAPACK++ is implemented in C++ and includes a subset of the features in LAPACK, with emphasis on solving linear systems with nonsymmetric matrices, symmetric positive definite systems, and linear least squares systems [LA+]. ScaLAPACK is a portable implementation of some of the core routines in LAPACK for parallel distributed computing [Sca]. ScaLAPACK is designed for distributed-memory machines with very powerful homogeneous sequential processors and with homogeneous network interconnections.
Linear Systems
Published in Jeffery J. Leader, Numerical Analysis and Scientific Computation, 2022
The MATLAB package was designed for problems of this sort. It was originally intended to be an interactive interface to certain Fortran computational matrix algebra subroutines from the LINPACK and EISPACK subroutine libraries; it was later re-written in C. Starting with version 6, the newer LAPACK routines have replaced their older LINPACK and EISPACK counterparts. (See www.netlib.org/lapack for details on LAPACK.) The LAPACK routines use newer numerical algorithms in some cases (though generally they are based on the same ideas as before). Just as importantly, they utilize the Basic Linear Algebra Subprograms (BLAS). These machine-specific subroutines take advantage of the particulars of the computer's architecture, especially its memory structure, to greatly increase the speed of various matrix manipulations. The BLAS are divided into three classes: Level 1 (vector-vector operations), Level 2 (matrix-vector operations), and Level 3 (matrix-matrix operations). Higher levels generally give better speed and are superior to traditional element-by-element manipulations. (Of course, the BLAS themselves must use element-by-element operations on a standard computer; speedups in this case are achieved by arranging the computations for the most efficient processing, including the details of retrieval of data from memory and the attempt to maximize the number of computations performed per retrieval. On a vector machine the effects will be even more noticeable.) The LAPACK routines, originally designed for use on supercomputers, operate on (sub-)blocks of matrices to run much faster. The key change in computer architecture that makes this blocking worthwhile has been the widespread use of caches, that is, fast memory near the CPU. Cache management is an important practical issue that we will not be able to address in any detail because of its hardware-specific nature; by carefully using the BLAS and relying on the computer vendor to supply appropriate implementations optimized for their particular machine, the methods can be (and have been) made portable.
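The three BLAS levels can be seen directly in the C BLAS interface (CBLAS). The sketch below (illustrative data, not from the book) performs one operation at each level; the comments note the flops-to-memory-traffic ratio that makes Level 3 the most cache-friendly.

/* Hedged sketch of the three BLAS levels via CBLAS (illustrative data).
 * Compile with, e.g.: cc blas_levels.c -lcblas -lblas */
#include <stdio.h>
#include <cblas.h>

int main(void) {
    double x[3] = { 1.0, 2.0, 3.0 };
    double y[3] = { 4.0, 5.0, 6.0 };
    double A[9] = { 1.0, 2.0, 0.0,
                    0.0, 1.0, 2.0,
                    2.0, 0.0, 1.0 };
    double B[9] = { 1.0, 0.0, 0.0,
                    0.0, 1.0, 0.0,
                    0.0, 0.0, 1.0 };
    double z[3] = { 0.0, 0.0, 0.0 };
    double C[9] = { 0.0 };

    /* Level 1: vector-vector; O(n) flops on O(n) data. d = x . y */
    double d = cblas_ddot(3, x, 1, y, 1);

    /* Level 2: matrix-vector; O(n^2) flops on O(n^2) data. z = A x */
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3, 1.0, A, 3, x, 1, 0.0, z, 1);

    /* Level 3: matrix-matrix; O(n^3) flops on O(n^2) data. C = A B.
     * The high flops-per-byte ratio lets a tuned BLAS block the
     * computation so operands stay in cache, which is why LAPACK
     * organizes its algorithms around Level-3 calls. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 3, 3, 3,
                1.0, A, 3, B, 3, 0.0, C, 3);

    printf("dot = %g, (Ax)[0] = %g, (AB)[0][0] = %g\n", d, z[0], C[0]);
    return 0;
}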
An improved stiff-ODE solving framework for reacting flow simulations with detailed chemistry in OpenFOAM
Published in Combustion Theory and Modelling, 2023
Kun Wu, Yuting Jiang, Zhijie Huo, Di Cheng, Xuejun Fan
Generally, the implicit integration algorithms form non-linear systems whose roots are found iteratively by variants of Newton's method; hence, systems of linear algebraic equations have to be solved. Nevertheless, OpenFOAM's native stiff-ODE solvers employ a basic LU decomposition, which is generally expensive because the computational cost scales cubically with the size of the system [19]. To expedite the matrix operations, Morev et al. [20] made use of the standard linear algebra library (LAPACK) and gained an order-of-magnitude speedup for small-scale chemistries. However, with a dense matrix representation, their implementation is better suited to small matrices, and its performance may deteriorate for the large, sparse matrices associated with detailed chemistries. In contrast, for large-scale chemistry systems, a sparse representation was found to be more efficient than a dense one, with a computational cost that can scale linearly with the size of the system [21]. Unfortunately, Morev et al.'s observations were based on two relatively small skeletal mechanisms comprising 21 and 54 species, respectively; high-performance linear-system solvers leveraging sparse matrix algebra are thus still lacking.
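As a concrete picture of the cost being discussed, the sketch below (not the authors' code; the Jacobian and residual values are placeholders) solves one Newton update J dy = -F with a dense LAPACK LU factorization, the step whose cubic scaling motivates sparse alternatives for large mechanisms.

/* Hedged sketch: one dense Newton linear solve J*dy = -F via LAPACK,
 * as arises inside an implicit stiff-ODE step. J and F are placeholder
 * values; in a real solver they come from the chemical source terms.
 * dgesv's LU factorization costs O(n^3) in the number of species n,
 * which is the scaling that sparse solvers aim to avoid. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    enum { N = 3 };                      /* stands in for the species count */
    double J[N*N] = { 5.0, 1.0, 0.0,     /* placeholder Jacobian, row-major */
                      1.0, 4.0, 1.0,
                      0.0, 1.0, 3.0 };
    double F[N]   = { 0.5, -0.2, 0.1 };  /* placeholder residual */
    double dy[N];
    lapack_int ipiv[N];

    for (int i = 0; i < N; i++) dy[i] = -F[i];   /* right-hand side -F */

    /* Factor J and solve in one call; dy now holds the Newton update. */
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, N, 1, J, N, ipiv, dy, 1);
    if (info != 0) { fprintf(stderr, "dgesv: info = %d\n", (int)info); return 1; }

    printf("dy = %g %g %g\n", dy[0], dy[1], dy[2]);
    return 0;
}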
Fast accurate seakeeping predictions
Published in Ship Technology Research, 2020
The boundary conditions for each panel form a linear equation system having the same coefficient matrix for all eight potentials, but with eight different inhomogeneous vectors. The inhomogeneous vector of one of the potentials can only be determined once the source strengths of the others are known. Therefore, an LU factorization of the coefficient matrix is made first; the source strengths for the other potentials are then determined; these allow the inhomogeneous vector of the remaining potential to be computed; and finally its source strengths are determined. This is done using subroutines of the Linear Algebra Package (LAPACK), which proved to be very efficient. Iterative solvers are not applicable here, both because of the poor condition number of the system and because one must solve for eight different inhomogeneous vectors.
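The factor-once, solve-many pattern described here maps onto two LAPACK calls: dgetrf factors the coefficient matrix a single time, and dgetrs reuses the factors for each batch of right-hand sides. The sketch below is illustrative only (small 3x3 data, two independent right-hand sides instead of seven, and a made-up rule for how the dependent vector is built from the earlier solutions).

/* Hedged sketch of the factor-once / solve-many pattern: dgetrf
 * factors the coefficient matrix once, dgetrs solves a batch of
 * right-hand sides, and a final right-hand side built from the
 * earlier solutions is solved without refactoring. Data and the
 * dependence rule are illustrative placeholders. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    enum { N = 3, NRHS = 2 };           /* the paper works with eight vectors */
    double A[N*N] = { 4.0, 1.0, 0.0,    /* row-major coefficient matrix */
                      1.0, 3.0, 1.0,
                      0.0, 1.0, 2.0 };
    double B[N*NRHS] = { 1.0, 0.0,      /* first batch of RHS, n x nrhs */
                         0.0, 1.0,
                         1.0, 1.0 };
    lapack_int ipiv[N];

    /* LU-factor A once (O(n^3)); A is overwritten by the factors. */
    lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, N, N, A, N, ipiv);
    if (info != 0) return 1;

    /* Solve the independent batch; each extra RHS costs only O(n^2). */
    LAPACKE_dgetrs(LAPACK_ROW_MAJOR, 'N', N, NRHS, A, N, ipiv, B, NRHS);

    /* Build the dependent RHS from the earlier solutions (placeholder
     * rule) and solve it against the same factors, with no refactoring. */
    double c[N];
    for (int i = 0; i < N; i++) c[i] = B[i*NRHS + 0] + B[i*NRHS + 1];
    LAPACKE_dgetrs(LAPACK_ROW_MAJOR, 'N', N, 1, A, N, ipiv, c, 1);

    printf("dependent solution: %g %g %g\n", c[0], c[1], c[2]);
    return 0;
}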
Enhancing parallelism of distributed algorithms with the actor model and a smart data movement technique
Published in International Journal of Parallel, Emergent and Distributed Systems, 2021
Anatoliy Doroshenko, Eugene Tulika, Olena Yatsenko
There are many efficient packages implementing matrix operations for shared-memory systems. LAPACK [12] provides the tools for linear algebra. ScaLAPACK [13] is a subset of LAPACK routines rewritten for distributed-memory and message-passing systems. It is implemented using the Single-Program-Multiple-Data (SPMD) approach with explicit message passing between processors. ScaLAPACK is designed for heterogeneous systems and works on any computer supporting MPI. In addition, many vendors offer libraries optimised for their hardware, such as Intel MKL, AMD ACML, IBM ESSL (PESSL), and Cray XT LibSci. Each of these specialised libraries includes LAPACK and ScaLAPACK routines.