Digital Systems
Published in Wai-Kai Chen, Analog and VLSI Circuits, 2018
Festus Gail Gray, Wayne D. Grover, Josephine C. Chang, Bing J. Sheu, Roland Priemer, Kung Yao, Flavio Lorenzelli
Let G(i, j, θ) be an “orthogonal Givens rotation matrix”: the identity matrix, except that g_ii = g_jj = c and g_ij = −g_ji = s, where c = cos θ and s = sin θ. Pre- or postmultiplication of A by G leaves A unchanged, except for rows (columns) i and j, which are replaced by a linear combination of the old rows (columns) i and j. A “Jacobi rotation” is obtained by simultaneous pre- and postmultiplication of a matrix by a Givens rotation matrix, as given by G(i, j, θ)^T A G(i, j, θ), where θ is usually chosen in order to zero out the (i, j) and (j, i) entries of A.
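Below is a minimal NumPy sketch of one such Jacobi rotation on a real symmetric matrix; the function name jacobi_rotation and the standard tan 2θ angle formula are illustrative choices, not taken from the chapter.

```python
import numpy as np

def jacobi_rotation(A, i, j):
    """One Jacobi rotation G(i, j, theta)^T A G(i, j, theta) on symmetric A.

    theta is chosen by the classic formula tan(2*theta) = 2*a_ij/(a_ii - a_jj)
    so that the rotated matrix has zeros in entries (i, j) and (j, i).
    Returns the Givens matrix G and the rotated matrix.
    """
    n = A.shape[0]
    if A[i, j] == 0.0:
        return np.eye(n), A.copy()  # entry already zero: identity rotation
    tau = (A[j, j] - A[i, i]) / (2.0 * A[i, j])
    # Stable root of t^2 + 2*tau*t - 1 = 0, with t = tan(theta)
    t = np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau)) if tau != 0.0 else 1.0
    c = 1.0 / np.sqrt(1.0 + t * t)
    s = t * c
    G = np.eye(n)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G, G.T @ A @ G


# Hypothetical usage: zero out entries (0, 1) and (1, 0) of a 3x3 matrix.
A = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])
G, B = jacobi_rotation(A, 0, 1)
assert abs(B[0, 1]) < 1e-12 and abs(B[1, 0]) < 1e-12
```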
Extracting Low-Frequency Spatio-Temporal Patterns in Ambient Power System Data Using Blind Source Separation
Published in Electric Power Components and Systems, 2018
José de Jesús Nuño Ayón, Julián Sotelo Castañon, Carlos Alberto López de Alba
The method exploits the eigenstructure of the whitened ensemble matrix by taking time-lagged covariance matrices $C_\tau$, τ = 1, 2, …, K. These robust covariance matrices are approximately diagonalized using a procedure called Joint Approximate Diagonalization (JAD) [25]. The objective of this procedure is to find the orthogonal matrix $V$ which satisfies $D_\tau = V^T C_\tau V$, τ = 1, 2, …, K, where the matrices $D_\tau$ are as diagonal as possible, because an exact diagonalization may not be possible. According to the JAD procedure, an optimization problem is carried out with respect to a matrix $V$ that minimizes the sum of squares of all off-diagonal terms of $V^T C_\tau V$ for the K time-lagged covariance matrices, i.e., $\min_V \sum_{\tau=1}^{K} \operatorname{off}(V^T C_\tau V)$, where $\operatorname{off}(M) = \sum_{i \neq j} m_{ij}^2$. The optimization problem is solved using the Jacobi rotation technique, which looks for rotations that diagonalize multiple matrices via an iterative process. This process ends when the sine of the angle of rotation is smaller than a specified threshold.
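A minimal sketch of this Jacobi-rotation JAD loop, assuming real symmetric covariance matrices and using the closed-form Jacobi angle of Cardoso and Souloumiac; the names joint_diagonalize, matrices, tol, and the returned V are illustrative, and the stopping rule mirrors the text (iteration ends once every rotation sine in a full sweep falls below the threshold):

```python
import numpy as np

def joint_diagonalize(matrices, tol=1e-8, max_sweeps=100):
    """Jointly diagonalize real symmetric matrices with Jacobi rotations.

    Sweeps over index pairs (i, j); for each pair the rotation angle is the
    closed-form Jacobi angle of Cardoso and Souloumiac, which minimizes the
    summed squared (i, j) off-diagonal entries over all matrices. Returns
    the orthogonal V and the rotated (near-diagonal) matrices.
    """
    C = [np.array(M, dtype=float) for M in matrices]
    n = C[0].shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                # One 2-vector per matrix; their outer-product sum fixes theta
                g = np.array([[M[i, i] - M[j, j], 2.0 * M[i, j]] for M in C])
                G = g.T @ g
                if G[0, 0] + G[1, 1] < 1e-30:
                    continue  # nothing to rotate for this pair
                w, U = np.linalg.eigh(G)
                x, y = U[:, -1]  # unit eigenvector of the largest eigenvalue
                if x < 0.0:
                    x, y = -x, -y
                c = np.sqrt((1.0 + x) / 2.0)
                s = y / (2.0 * c)
                if abs(s) > tol:
                    converged = False
                    R = np.array([[c, -s], [s, c]])
                    for M in C:
                        M[[i, j], :] = R.T @ M[[i, j], :]
                        M[:, [i, j]] = M[:, [i, j]] @ R
                    V[:, [i, j]] = V[:, [i, j]] @ R
        if converged:
            break
    return V, C
```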
Solving generalized inverse eigenvalue problems via L-BFGS-B method
Published in Inverse Problems in Science and Engineering, 2020
Zeynab Dalvand, Masoud Hajarian
The Cholesky factorization is a factorization of a positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is helpful for effective numerical solutions [46]. Positive-definite matrices possess numerous significant properties; in particular, they can be represented in the form $A = HH^T$ for a non-singular matrix $H$. The Cholesky factorization is a special case of this factorization in which $H$ is a lower triangular matrix with positive diagonal elements, and it can be computed by a form of Gaussian elimination [46]. Equating elements in $A = HH^T$ shows
$$a_{ij} = \sum_{k=1}^{j} h_{ik} h_{jk}, \qquad i \ge j.$$
These equations can be solved to produce one column of the matrix $H$ at a time, according to the following form:
$$h_{jj} = \Bigl(a_{jj} - \sum_{k=1}^{j-1} h_{jk}^{2}\Bigr)^{1/2}, \qquad h_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} h_{ik} h_{jk}}{h_{jj}}, \quad i > j.$$
There are theorems that establish the existence of a Cholesky factorization for symmetric positive-definite matrices. According to one such theorem, if all leading principal submatrices of a matrix $A$ are non-singular, then there exist a diagonal matrix $D$ and two unit lower triangular matrices $L$ and $M$ such that $A = LDM^T$; if $A$ is symmetric and non-singular, then $L = M$ and we can also write $A = LDL^T$. The Jacobi method constructs a sequence of similar matrices by using orthogonal transformations. Jacobi methods for the symmetric eigenvalue problem attract current attention because they are inherently parallel [47]. This method uses Jacobi rotation matrices for diagonalization of symmetric matrices, by the following theorem:
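The column-at-a-time recurrences above translate directly into code; here is a minimal sketch assuming a real symmetric positive-definite input (the function name cholesky_lower is illustrative):

```python
import numpy as np

def cholesky_lower(A):
    """Column-by-column Cholesky factorization, following the recurrences
    above: for symmetric positive-definite A, build the lower triangular H
    with positive diagonal such that A = H H^T.
    """
    n = A.shape[0]
    H = np.zeros_like(A, dtype=float)
    for j in range(n):
        # h_jj = sqrt(a_jj - sum_{k<j} h_jk^2)
        d = A[j, j] - H[j, :j] @ H[j, :j]
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        H[j, j] = np.sqrt(d)
        # h_ij = (a_ij - sum_{k<j} h_ik h_jk) / h_jj, for i > j
        H[j + 1:, j] = (A[j + 1:, j] - H[j + 1:, :j] @ H[j, :j]) / H[j, j]
    return H
```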