Hybrid Methods for Localization
Published in Prabhakar S. Naidu, Distributed Sensor Arrays: Localization, 2017
For a broadband signal, we work in the frequency domain. Each received signal is Fourier transformed, and the large-magnitude Fourier coefficients are collected into a column vector. Next, we place a hypothetical transmitter at a test location and compute the corresponding theoretical column vector; this requires prior knowledge of the channel model. The outer product of the two column vectors yields a matrix. At the correct transmitter location, all diagonal terms of this matrix are real, while the off-diagonal terms remain complex. This property has been exploited for localization; it is, however, sensitive to noise.
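A minimal NumPy sketch of this diagonal-realness test might look as follows. The pure-delay channel model, the chosen frequency bins, and the function names are illustrative assumptions, not the author's implementation; in particular, the outer product is taken here as the conjugate outer product so that the diagonal becomes real when the vectors agree:

```python
import numpy as np

def diag_realness_score(received, theoretical):
    """Score a candidate location by the conjugate outer product.

    At the true transmitter location the measured and theoretical
    coefficient vectors agree (up to noise), so the diagonal of
    received * conj(theoretical)^T is real. A smaller summed
    imaginary part on the diagonal indicates a better match.
    """
    M = np.outer(received, theoretical.conj())
    return np.abs(np.imag(np.diag(M))).sum()

rng = np.random.default_rng(1)
freqs = np.array([10.0, 20.0, 30.0])   # selected large-magnitude bins (Hz)
true_delay = 1.3e-3                     # propagation delay at the true location (s)

# Hypothetical channel model: a pure delay, i.e. a phase ramp across frequency
received = np.exp(-2j * np.pi * freqs * true_delay)
received += 0.01 * (rng.standard_normal(3) + 1j * rng.standard_normal(3))

for test_delay in (1.3e-3, 2.0e-3):     # correct vs. wrong test location
    theoretical = np.exp(-2j * np.pi * freqs * test_delay)
    print(test_delay, diag_realness_score(received, theoretical))
# The score is near zero only for the correct delay.
```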
Multiphase Turbulence Modeling Using Sparse Regression and Gene Expression Programming
Published in Nuclear Technology, 2023
where the scalar coefficient is a principal invariant, and the first basis tensor is the normalized slip tensor, given as the outer product of the mean slip velocity with itself. The two other basis tensors are the anisotropic stress tensors associated with the fluid and particle phases, respectively. In terms of solution variables, the mean phase velocities are obtained from the associated momentum equations, and the Reynolds stresses are informed by transport equations in the multiphase RANS equations (see Ref. 29). This model has an error of 0.012.
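As a small illustration of the slip-tensor construction described above, a NumPy sketch might look like the following. The normalization by the squared slip-velocity magnitude is an assumption, since the excerpt's equations did not carry over:

```python
import numpy as np

def normalized_slip_tensor(u_fluid, u_particle, eps=1e-12):
    """Basis tensor built from the mean slip velocity.

    The slip tensor is the outer product of the mean slip velocity
    u_r = u_particle - u_fluid with itself; dividing by |u_r|^2
    makes it dimensionless with unit trace. The paper's exact
    normalization is assumed here, not verified.
    """
    u_r = np.asarray(u_particle) - np.asarray(u_fluid)
    return np.outer(u_r, u_r) / (np.dot(u_r, u_r) + eps)

T1 = normalized_slip_tensor([1.0, 0.0, 0.0], [1.5, 0.2, 0.0])
print(np.trace(T1))  # 1.0: unit trace by construction
```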
Symmetric rank-1 approximation of symmetric high-order tensors
Published in Optimization Methods and Software, 2020
The goal of the symmetric rank-1 tensor approximation problem is to decompose a given symmetric tensor into the high-order outer product of a vector or, when this is impossible, to find the closest tensor that admits such a rank-1 outer product decomposition. This approximation problem not only plays an important role theoretically, in areas such as independent component analysis [5], but also has wide applications in signal and image processing, blind source separation, statistics, and investment science [4,15,19]. Moreover, De Lathauwer et al. [7] and Kolda et al. [14] have shown that, if the Z-eigenvalue is used among the several possible ways to define an eigenvalue of a tensor, the best rank-1 approximation of a given tensor corresponds to the Z-eigenpair with the largest absolute eigenvalue. Hence these two important problems, computing the rank-one approximation of a given symmetric tensor and computing its Z-eigenpairs, are closely related and equivalent to a certain degree.
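To make the rank-1/Z-eigenpair connection concrete, here is a minimal NumPy sketch of higher-order power iteration for an order-3 symmetric tensor. It is an illustration under simplifying assumptions, not the algorithm of any cited paper, and a single random start need not converge to the eigenpair with the largest absolute eigenvalue:

```python
import numpy as np

def symmetric_rank1_approx(T, iters=200, seed=0):
    """Higher-order power iteration for a symmetric order-3 tensor T.

    Returns (lam, x) so that lam * (x outer x outer x) approximates T,
    where (lam, x) is a Z-eigenpair: T x x = lam x with ||x|| = 1.
    Convergence to the global best rank-1 approximation is not
    guaranteed from one random start.
    """
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        # Contract T with x twice: y_i = sum_{j,k} T[i,j,k] x_j x_k
        y = np.einsum('ijk,j,k->i', T, x, x)
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x

# Build a symmetric test tensor that is exactly rank 1: 2 * v⊗v⊗v
v = np.array([3.0, 4.0]) / 5.0
T = 2.0 * np.einsum('i,j,k->ijk', v, v, v)
lam, x = symmetric_rank1_approx(T)
print(lam, x)  # recovers ~2.0 and ±v
```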
Tensor Mixed Effects Model With Application to Nanomanufacturing Inspection
Published in Technometrics, 2020
Xiaowei Yue, Jin Gyu Park, Zhiyong Liang, Jianjun Shi
Matricization, also known as unfolding or flattening, is the process of reordering the elements of a tensor into a matrix (Kolda and Bader 2009). The k-mode matricization of a tensor $\mathcal{X}$ is denoted by $\mathbf{X}_{(k)}$, and $\operatorname{vec}(\mathcal{X})$ is the vectorization of the tensor $\mathcal{X}$. The k-mode product of a tensor $\mathcal{X}$ with a matrix $\mathbf{A}$ is denoted by $\mathcal{X} \times_k \mathbf{A}$; elementwise, we have $(\mathcal{X} \times_k \mathbf{A})_{i_1 \cdots i_{k-1}\, j\, i_{k+1} \cdots i_K} = \sum_{i_k=1}^{I_k} x_{i_1 i_2 \cdots i_K}\, a_{j i_k}$, where all indices range from 1 to their capital versions; for example, the index $j$ goes from 1 to $J$, and the index $i_k$ goes from 1 to $I_k$. The Kronecker product of matrices $\mathbf{A}$ and $\mathbf{B}$ is denoted by $\mathbf{A} \otimes \mathbf{B}$. The Kronecker product is an operation on two matrices resulting in a block matrix, and it is a generalization of the outer product.
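A short NumPy sketch of these operations follows. The column ordering of the unfolding is one common convention and is only assumed to match Kolda and Bader (2009); the function names are illustrative:

```python
import numpy as np

def unfold(X, k):
    """k-mode matricization: the mode-k fibers of X become the columns
    of a matrix of shape (I_k, product of the remaining dimensions).
    (Exact column ordering per Kolda and Bader is assumed, not verified.)"""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_k_product(X, A, k):
    """k-mode product X x_k A: contract the second index of A with the
    k-th index of X, i.e. sum over i_k of x[..., i_k, ...] * a[j, i_k]."""
    return np.moveaxis(np.tensordot(A, X, axes=(1, k)), 0, k)

X = np.arange(24.0).reshape(2, 3, 4)   # an order-3 tensor, I_1=2, I_2=3, I_3=4
A = np.ones((5, 3))                    # maps mode 1 from size 3 to size 5

print(unfold(X, 1).shape)              # (3, 8)
print(mode_k_product(X, A, 1).shape)   # (2, 5, 4)

# Kronecker product: a block matrix that generalizes the outer product
B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.eye(2)
print(np.kron(B, C).shape)             # (4, 4), blocks b_ij * C
```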