Application of Eigenvalues and Eigenvectors
Published in Timothy Bower, 2023
Eigenvalue/eigenvector problems are among the most important topics in linear algebra. Eigenvalues and eigenvectors are used to solve systems of differential equations, but more generally they are used for data analysis, where the matrix A represents data rather than the coefficients of a system of equations. They rest on a simple yet very powerful relationship, A v = λ v, between a matrix A, a set of special vectors v, and scalar values λ. This simple relationship provides elegant solutions to some otherwise difficult problems.
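The defining relation A v = λ v can be checked directly. The following sketch (with an illustrative hand-picked 2×2 symmetric matrix, not one from the text) computes the eigenvalues from the characteristic polynomial and verifies that each eigenvector is only scaled by the matrix:

```python
# Minimal sketch of the eigenvalue relation A v = lambda v for an
# illustrative 2x2 symmetric matrix (values chosen for demonstration).
A = [[2.0, 1.0],
     [1.0, 2.0]]

# Characteristic polynomial of a 2x2 matrix:
# lambda^2 - tr(A)*lambda + det(A) = 0
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = (tr * tr - 4.0 * det) ** 0.5
eigenvalues = [(tr + disc) / 2.0, (tr - disc) / 2.0]  # here: 3.0 and 1.0

# For each eigenvalue, (A - lambda*I) v = 0; with A[0][1] != 0 one valid
# choice of eigenvector is v = (A[0][1], lambda - A[0][0]).
for lam in eigenvalues:
    v = (A[0][1], lam - A[0][0])
    Av = (A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1])
    # A v must equal lambda * v componentwise
    assert abs(Av[0] - lam * v[0]) < 1e-12
    assert abs(Av[1] - lam * v[1]) < 1e-12
```

For this matrix the eigenvalues are 3 and 1, with eigenvectors along (1, 1) and (1, −1): the transformation stretches one direction by 3 and leaves the other unchanged.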
Dimension Reduction Breaking the Curse of Dimensionality
Published in Chong Ho Alex Yu, Data Mining and Exploration, 2022
In the context of statistical analysis, vectors help us to understand the relationships among variables. An eigenvalue has a numeric property, while an eigenvector has a directional property. These properties together define the characteristics of a variable or a component. An eigenvector is a direction that a mathematical transformation leaves unchanged: the transformation merely rescales it by the corresponding eigenvalue. Again, take vector-based graphics as an example. Whenever the image is rescaled, the algorithm recreates the vectors, so even if the image is enlarged by 2,000%, it remains crystal sharp.
Linear Algebra
Published in Niket S. Kaisare, Computational Techniques for Process Simulation and Analysis Using MATLAB®, 2017
Eigenvalues and eigenvectors were introduced for solving linear ODEs, y′ = Ay. The term is reported to have origins in the German word “Eigenwert,” which translates (according to Google Translate) to “intrinsic value.” The terms eigenvalue and eigenvector may therefore be read as “an intrinsic value/vector” that characterizes the matrix. The most important application of eigenvalues and eigenvectors is in analysing linear transformations that map a vector space onto itself.
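For a diagonalizable A, the ODE y′ = Ay is solved by expanding the initial condition in the eigenvector basis, y(t) = Σᵢ cᵢ e^{λᵢ t} vᵢ. A hedged sketch with an illustrative 2×2 matrix and initial condition (both chosen for demonstration, not taken from the text):

```python
import math

# Sketch: solve y' = A y by eigen-expansion, y(t) = sum_i c_i e^{lam_i t} v_i.
# Matrix, eigenpairs, and initial condition are illustrative choices.
A = [[2.0, 1.0],
     [1.0, 2.0]]
# Eigenpairs of A, computed by hand from the characteristic polynomial:
eigs = [(3.0, (1.0, 1.0)), (1.0, (1.0, -1.0))]

y0 = (2.0, 0.0)
# Expansion coefficients: for these orthogonal eigenvectors,
# c_i = (y0 . v_i) / (v_i . v_i).
coeffs = [(y0[0]*v[0] + y0[1]*v[1]) / (v[0]*v[0] + v[1]*v[1])
          for _, v in eigs]

def y(t):
    """Evaluate the eigen-expansion solution at time t."""
    return tuple(sum(c * math.exp(lam * t) * v[k]
                     for c, (lam, v) in zip(coeffs, eigs))
                 for k in (0, 1))

# y(0) must recover the initial condition, and y'(t) = A y(t) must hold
# (derivative approximated here by a central difference).
assert all(abs(a - b) < 1e-9 for a, b in zip(y(0.0), y0))
t, h = 0.5, 1e-6
dydt = tuple((y(t + h)[k] - y(t - h)[k]) / (2 * h) for k in (0, 1))
Ay = (A[0][0]*y(t)[0] + A[0][1]*y(t)[1],
      A[1][0]*y(t)[0] + A[1][1]*y(t)[1])
assert all(abs(a - b) < 1e-4 for a, b in zip(dydt, Ay))
```

Each eigen-direction evolves independently as a pure exponential e^{λᵢ t}, which is why the eigenvalues characterize the intrinsic behaviour (growth, decay, oscillation) of the system.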
Random field modelling of spatial variability in concrete – a review
Published in Structure and Infrastructure Engineering, 2023
Wouter Botte, Eline Vereecken, Robby Caspeele
A first series expansion method is the Karhunen-Loève (KL) expansion, which starts from the series expansion of the covariance matrix, based on its spectral representation. After determining the eigenvalues \(\lambda_i\) and eigenvectors \(\varphi_i\) of the covariance matrix, the random field can be represented by a series expansion according to: \( H(x,\theta) = \mu(x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i}\,\xi_i(\theta)\,\varphi_i(x) \), where \(\xi_i(\theta)\) are mutually uncorrelated random variables with mean 0 and standard deviation 1, with a Gaussian distribution if the random field is also Gaussian. Information on numerical methods for the discretization of random fields by means of the Karhunen-Loève expansion can be found in (Betz, Papaioannou, & Straub, 2014). Due to the covariance eigenfunction basis, the mean square error resulting from truncation is minimized with respect to a complete basis. The KL expansion of Gaussian fields is also almost surely convergent. The KL expansion always under-represents the true variance of the field.
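A truncated KL expansion can be sketched numerically. The following example (all choices illustrative: an assumed exponential covariance kernel, grid size, and truncation order, none taken from the paper) eigendecomposes a discretized covariance matrix, draws one truncated realisation, and checks the variance under-representation noted above:

```python
import numpy as np

# Illustrative sketch of a truncated Karhunen-Loeve expansion of a 1-D
# random field with an assumed exponential covariance kernel
# C(x, x') = sigma^2 * exp(-|x - x'| / L).  All parameters are examples.
n, sigma, L = 100, 1.0, 0.2
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Spectral decomposition of the symmetric covariance matrix,
# sorted by descending eigenvalue.
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]

m = 10                                             # truncation order
xi = np.random.default_rng(0).standard_normal(m)   # uncorrelated N(0, 1)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)       # one truncated realisation

# Truncation under-represents the variance: the m retained eigenvalues
# sum to less than the trace of C (the total variance of the field).
assert lam[:m].sum() < np.trace(C)
```

The retained eigenvalues capture the largest share of the total variance possible for any m-term basis, which is the mean-square optimality property of the KL expansion.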
A case study of in-service teachers’ errors and misconceptions in linear combinations
Published in International Journal of Mathematical Education in Science and Technology, 2022
Lillias Hamufari Natsai Mutambara, Sarah Bansilal
This qualitative research comprised a case study of 73 in-service teachers studying for a Bachelor of Science Education Honours Degree in Mathematics. The students were practicing teachers who held diplomas in education from various teachers colleges in Zimbabwe and were required to upgrade their qualification to include a degree. The teachers’ studies were compressed into a short period of 3 years, enabling them to complete the degree in the same time taken by the usual young adult full-time student. The teachers attended lectures during the school holidays, while continuing to work full-time as teachers. The teachers thus studied under difficult conditions, as explained in more detail in Kazunga (2018). The participants had already completed a first course in linear algebra which introduced the concepts of vectors, matrix manipulation, solutions of systems of linear equations, as well as complex numbers. This study was set within the second course in linear algebra, which covered vector spaces, linear transformations, inner product spaces, orthonormal bases, diagonalization of matrices, and eigenvalues and eigenvectors.
A high fidelity cost efficient tensorial method based on combined POD-HOSVD reduced order model of flow field
Published in European Journal of Computational Mechanics, 2018
Mohammad Kazem Moayyedi, Milad Najaf beygi
To construct a reduced order model based on tensorial data, an approach similar to the POD method is used. HOSVD decomposes a tensor into symmetric matrices (Equation (11)). The POD approach is then applied to compute the reduced order model of the field from the eigenvalues and eigenvectors of these symmetric matrices (Equation (12)). This approach creates a reduced order model that combines the POD and HOSVD methods. The number of modes required to reconstruct an approximate model of the field is determined by the parameter . This number can be chosen by considering the relative energy of each mode. Consequently, the following expression is the reduced order model of tensor :
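The mode-selection step by relative energy can be sketched as follows. This is a generic POD truncation via the SVD of a synthetic snapshot matrix (all names, sizes, and the 99% threshold are illustrative assumptions, not the paper's values), where the retained modes are the smallest set whose cumulative energy reaches the threshold:

```python
import numpy as np

# Hedged sketch: choose the number of POD modes by cumulative relative
# energy of the singular values of a snapshot matrix.  The snapshot data
# and the 0.99 threshold are illustrative assumptions.
rng = np.random.default_rng(1)

# Synthetic snapshots: 200 spatial points x 50 snapshots, built from a few
# dominant structures of decreasing amplitude plus small noise.
structures = rng.standard_normal((200, 3))
amplitudes = rng.standard_normal((3, 50)) * np.array([[10.0], [5.0], [2.0]])
snapshots = structures @ amplitudes + 0.01 * rng.standard_normal((200, 50))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2                                 # POD eigenvalues
rel = np.cumsum(energy) / energy.sum()        # cumulative relative energy

threshold = 0.99
m = int(np.searchsorted(rel, threshold) + 1)  # smallest m reaching threshold
reduced_basis = U[:, :m]                      # retained POD modes

assert rel[m - 1] >= threshold
assert m <= 4   # the three dominant structures carry almost all the energy
```

Because the singular values decay rapidly when the data are dominated by a few coherent structures, a small number of modes typically captures nearly all of the energy, which is what makes the reduced order model cost-efficient.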