Numerical and Computational Issues in Linear Control and System Theory
Published in William S. Levine, Control System Fundamentals, 2019
A.J. Laub, R.V. Patel, P.M. Van Dooren
The most common algorithm now used to solve Equation 21.21 for general A is the QR algorithm of Francis [13]. A shifting procedure enhances convergence, and the usual implementation is called the double-Francis-QR algorithm. Before the QR process is applied, A is initially reduced to upper Hessenberg form A_H (a_ij = 0 if i − j ≥ 2). This is accomplished by a finite sequence of similarities of the Householder form discussed above. The QR process then yields a sequence of matrices orthogonally similar to A and converging (in some sense) to a so-called quasi upper triangular matrix S, also called the real Schur form (RSF) of A. The matrix S is block upper triangular with 1 × 1 diagonal blocks corresponding to real eigenvalues of A and 2 × 2 diagonal blocks corresponding to complex-conjugate pairs of eigenvalues. The quasi upper triangular form permits all arithmetic to be real rather than complex, as would be necessary for convergence to an upper triangular matrix. The orthogonal transformations from both the Hessenberg reduction and the QR process may be accumulated in a single orthogonal transformation U so that U^T A U = S.
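The core QR process described above can be sketched as follows. This is a minimal teaching sketch of the unshifted iteration: it omits the Hessenberg reduction, shifting, and deflation of a production Francis QR code, but it illustrates how the iterates stay orthogonally similar to A while the accumulated factor U satisfies U^T A U = S. The function name `qr_algorithm` and the example matrix are illustrative, not from the source.

```python
import numpy as np

def qr_algorithm(A, iters=200):
    # Unshifted QR iteration (sketch only; practical codes first reduce
    # A to upper Hessenberg form and use the shifted Francis algorithm).
    S = A.astype(float).copy()
    U = np.eye(A.shape[0])
    for _ in range(iters):
        Q, R = np.linalg.qr(S)
        S = R @ Q          # similarity transform: S <- Q^T S Q
        U = U @ Q          # accumulate the orthogonal transformations
    return U, S

# Symmetric example, so all eigenvalues are real and S converges
# toward a (diagonal) real Schur form.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
U, S = qr_algorithm(A)
```

After convergence the diagonal of S carries the eigenvalues of A, and U^T A U = S holds to machine precision at every iteration by construction.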
Matrix Eigenvalue Problem
Published in Santanu Saha Ray, Numerical Analysis with Algorithms and Programming, 2018
The QR algorithm is generally applied with a shift of origin for the eigenvalues in order to accelerate convergence. A constant σ is selected close to an eigenvalue of A. The shifted QR method modifies the factorization by choosing Q^(i) and R^(i) such that A^(i) − σI = Q^(i)R^(i), and then forms the next iterate A^(i+1) = R^(i)Q^(i) + σI, which remains orthogonally similar to A^(i).
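The shifted factorization above can be sketched in a few lines. Here the shift σ is taken from the bottom-right corner of the current iterate, a simple heuristic for "a constant close to an eigenvalue"; practical codes use Wilkinson or Francis double shifts together with deflation. The function name `shifted_qr` is illustrative.

```python
import numpy as np

def shifted_qr(A, iters=80):
    # Single-shift QR iteration following the update
    # A(i+1) = R(i) Q(i) + sigma*I, where A(i) - sigma*I = Q(i) R(i).
    S = A.astype(float).copy()
    I = np.eye(S.shape[0])
    for _ in range(iters):
        sigma = S[-1, -1]                   # heuristic shift near an eigenvalue
        Q, R = np.linalg.qr(S - sigma * I)  # factor the shifted matrix
        S = R @ Q + sigma * I               # undo the shift; similarity preserved
    return S

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
S = shifted_qr(A)   # diagonal of S approximates the eigenvalues of A
```

Because each step is a similarity transform, the eigenvalues are unchanged; the shift merely speeds up the decay of the subdiagonal entries.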
Numerical Analysis
Published in Erchin Serpedin, Thomas Chen, Dinesh Rajan, Mathematical Foundations for SIGNAL PROCESSING, COMMUNICATIONS, AND NETWORKING, 2012
The methods discussed so far compute a single eigenvalue and eigenvector pair. The method of orthogonal iterations is a modification of the power method that can be used to compute several eigenvalues. Consider a set of p orthonormal vectors q_i, i = 1, …, p, that are simultaneously used as starting vectors for the power method. It is convenient to define the n × p orthonormal matrix Q̃0 = [q1, q2, …, qp] whose columns are the starting vectors. Multiplying Q̃0 with A repeatedly would cause each column to converge to v1. Instead, we orthonormalize the columns of the matrix obtained after each multiplication using the reduced QR factorization. The first vector converges to v1 as in the power method. The second vector, which is forced to be orthogonal to the first one at each step, converges to v2. Orthonormalization at each step ensures that the ith column of the matrix converges to v_i, i = 1, …, p.
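A minimal sketch of this orthogonal iteration, assuming a symmetric A so that the dominant eigenvectors v_1, …, v_p are orthogonal and real; the function name `orthogonal_iteration` and the random orthonormal start are illustrative choices:

```python
import numpy as np

def orthogonal_iteration(A, p, iters=300, seed=0):
    # Power method on p vectors at once: multiply by A, then
    # re-orthonormalize the columns with a reduced QR factorization
    # so the ith column converges to the ith dominant eigenvector.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)   # multiply, then orthonormalize
    return Q
```

Without the QR step, every column would align with v_1; the orthonormalization is what forces the later columns toward v_2, …, v_p.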
Monte Carlo Solution of k-Eigenvalue Problem Using Subspace Iteration Method
Published in Nuclear Science and Engineering, 2020
The orthonormalization procedure in step 2b is computationally expensive due to the QR factorization; hence, it may be useful to perform several applications of A (step 2a) before the orthonormalization procedure. It has been demonstrated^16 that X_k will essentially converge in direction to the Schur vectors associated with the m dominant eigenvalues of A, with the convergence governed by the ith dominance ratio, where λ_i, i = 1, …, m, are the m dominant eigenvalues of A with |λ_i| > |λ_{i+1}|. The Schur decomposition of A is defined as follows. An n × n square matrix A can be expressed as A = QUQ*, where Q is a unitary matrix (hence, Q^{-1} = Q*, the conjugate transpose of Q) and U is an upper triangular matrix referred to as the Schur form. The Schur decomposition of a matrix is computed using the QR algorithm or its variants. The diagonal entries of U are exactly the eigenvalues of A.
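The cost-saving idea above, applying A several times between orthonormalizations, can be sketched as follows. The function name `subspace_iteration` and the `inner`/`outer` parameters are illustrative; a symmetric test matrix is assumed so the Schur vectors coincide with the dominant eigenvectors.

```python
import numpy as np

def subspace_iteration(A, m, outer=100, inner=3, seed=1):
    # Run `inner` applications of A (the cheap step 2a) between each
    # QR orthonormalization (the expensive step 2b).
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], m)))
    for _ in range(outer):
        for _ in range(inner):
            X = A @ X              # repeated application of A
        X, _ = np.linalg.qr(X)     # periodic orthonormalization
    return X
```

The trade-off is that too many inner applications before orthonormalizing can cause the columns to lose independence numerically (they all drift toward the most dominant direction), so `inner` is kept small.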