Tensor Methods for Clinical Informatics
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Cristian Minoccheri, Reza Soroushmehr, Jonathan Gryak, Kayvan Najarian
The entries of the core represent the level of interaction among different components. Like CPD, a Tucker decomposition is a sum of rank-one tensors; unlike CPD, however, it captures the interaction of a whole factor matrix with every other factor matrix. Typically, one requires extra conditions, such as the factor matrices being orthogonal. Note that a Tucker decomposition is not unique: the factor matrices can be rotated, with the rotation absorbed into the core, without changing the represented tensor. However, the column space defined by each factor matrix is unique. A CP decomposition can be thought of as a special case of a Tucker decomposition (if we allow the factor matrices to be non-orthogonal) with a superdiagonal core tensor. Nevertheless, due to this lack of uniqueness, Tucker decompositions have very different applications. For example, a Tucker decomposition can be thought of as a higher-order Principal Component Analysis (PCA): by choosing a core of small dimensions, we obtain a compressed version of the original tensor. While CP decompositions are often unique, easier to interpret, and can yield better compression for low-rank tensors, Tucker decompositions generally yield better compression for tensors of higher rank.
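Since the chapter frames Tucker as a higher-order PCA used for compression, the following is a minimal NumPy sketch of the truncated higher-order SVD (HOSVD), one standard way to obtain a Tucker decomposition; the function names and the chosen ranks are illustrative, not taken from the chapter.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (matricization): mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: returns a core tensor and one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each unfolding give an orthogonal factor.
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core: project the tensor onto each factor's column space (mode-n products).
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def tucker_to_tensor(core, factors):
    """Reconstruct the (compressed) tensor from the core and factor matrices."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)
    return T

# Example: compress a 30 x 40 x 50 tensor to a (5, 5, 5) core plus three factors.
T = np.random.default_rng(0).standard_normal((30, 40, 50))
core, factors = hosvd(T, ranks=(5, 5, 5))
T_approx = tucker_to_tensor(core, factors)
```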
Matrix and Tensor Signal Modelling in Cyber Physical Systems
Published in Panagiotis Tsakalides, Athanasia Panousopoulou, Grigorios Tsagkatakis, Luis Montestruque, Smart Water Grids, 2018
Grigorios Tsagkatakis, Konstantina Fotiadou, Michalis Giannopoulos, Anastasia Aidini, Athanasia Panousopoulou, Panagiotis Tsakalides
The Tucker decomposition is named for Ledyard R. Tucker [46] and may be considered the extension of the classical SVD to higher dimensions [16]. It decomposes a tensor into a set of factor matrices and one core tensor. Assuming a third-order tensor $\mathcal{M} \in \mathbb{R}^{D_1 \times D_2 \times D_3}$, the Tucker decomposition expresses the tensor as $\mathcal{M} \approx \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}$, where $\mathcal{G} \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ is the core tensor, $\mathbf{U}^{(i)} \in \mathbb{R}^{D_i \times d_i}$ are the factor matrices, and $\times_i$ denotes the mode-$i$ product.
UAV Protocol Design for Computation Applications
Published in Fei Hu, Xin-Lin Huang, DongXiu Ou, UAV Swarm Networks, 2020
Thus, generalizing, the time for compressing $X$ real numbers in the tensor is $t_{c_i} = \frac{A X^{1.5}}{C_i}$, where $C_i$ is the computation capacity at node $i$ and $A$ is a constant. We use a Tucker decomposition–based method (Figure 9.6) to compress data. First, we perform a mode-1 expansion (unfolding) of the tensor to form a matrix $T_{(1)}$ and split it into $S_1$ smaller submatrices. The submatrices are compressed using a hybrid Johnson-Lindenstrauss and rank-revealing QR decomposition (RRQR)–based algorithm. Let the submatrix to be compressed be $H \in \mathbb{R}^{m \times n}$, and let $k$ be the numerical rank of $H$, meaning that $H$ can be approximated by a rank-$k$ matrix without much loss of information; here $k \le \min(m, n)$. Let $\tilde{H}$ be a rank-$k$ matrix minimizing $\|H - \tilde{H}\|_F$. Compression has two objectives: first, to determine $k$ from $H$, and second, to compute $\tilde{H}$. We use an approximation of the SVD algorithm to determine $\tilde{H}$ and $k$. Set $Y = \Omega H$, where $\Omega \in \mathbb{R}^{a \times m}$ and the dimension $a$ satisfies $k \le a$. There are various choices for $\Omega$, such as random, Fourier, Hadamard and, more recently, error control coding (ECC) matrices; here we use an ECC matrix as $\Omega$, generated in the same way as in [33]. With $Y$, we employ the Hybrid-III RRQR decomposition to determine the optimal $k$ and decompose $Y = QR$. With $k$ determined using Hybrid-III, set $\tilde{H} = \tilde{Q}\tilde{R}$, where $\tilde{Q} = Q(:, 1{:}k)$ and $\tilde{R} = R(1{:}k, :)$. The Hybrid-III algorithm and the numerical rank determination process are the same as in [34]. Thus $\tilde{Q}$ and $\tilde{R}$ form the adaptively compressed form of $H$. Figure 9.2 illustrates the proposed approach.
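A rough sketch of the sketch-then-RRQR idea follows; it substitutes a Gaussian random matrix for the ECC sketching matrix and SciPy's column-pivoted QR for the Hybrid-III RRQR algorithm, so the function name, tolerance, and example data are all assumptions for illustration.

```python
import numpy as np
from scipy.linalg import qr

def compress_submatrix(H, a, tol=1e-8, seed=0):
    """Sketch-then-RRQR compression of a submatrix H (m x n).

    A Gaussian random matrix stands in for the ECC sketching matrix, and
    SciPy's column-pivoted QR stands in for Hybrid-III RRQR. Returns the
    truncated factors of the sketch Y = Omega @ H and the detected rank k.
    """
    rng = np.random.default_rng(seed)
    m, n = H.shape
    Omega = rng.standard_normal((a, m)) / np.sqrt(a)   # sketching matrix, a x m
    Y = Omega @ H                                      # reduced matrix, a x n

    # Column-pivoted (rank-revealing) QR of the sketch: Y[:, piv] ~= Q @ R.
    Q, R, piv = qr(Y, mode="economic", pivoting=True)

    # Numerical rank k: number of significant diagonal entries of R.
    d = np.abs(np.diag(R))
    k = int(np.sum(d > tol * d[0]))

    Q_k, R_k = Q[:, :k], R[:k, :]   # compressed form; Q_k @ R_k ~= Y[:, piv]
    return Q_k, R_k, piv, k

# Example: a 200 x 100 submatrix of rank 5, sketched down to a = 20 rows.
H = np.random.default_rng(1).standard_normal((200, 5)) @ \
    np.random.default_rng(2).standard_normal((5, 100))
Q_k, R_k, piv, k = compress_submatrix(H, a=20)
print(k)  # expected: 5
```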
Robust Low-Rank Tensor Decomposition with the L2 Criterion
Published in Technometrics, 2023
Qiang Heng, Eric C. Chi, Yufeng Liu
The Tucker decomposition of a tensor $\mathcal{X} \in \mathbb{R}^{d_1 \times \cdots \times d_N}$ with rank $(r_1, \ldots, r_N)$ aims to find a core tensor $\mathcal{G} \in \mathbb{R}^{r_1 \times \cdots \times r_N}$ and factor matrices $\mathbf{U}_n \in \mathbb{R}^{d_n \times r_n}$ for $n = 1, \ldots, N$ such that $\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \cdots \times_N \mathbf{U}_N = [\![\mathcal{G}; \mathbf{U}_1, \ldots, \mathbf{U}_N]\!]$, where the equality uses the more compact notation introduced in Kolda (2006). Sometimes the columns of $\mathbf{U}_n$ are required to be orthogonal so that the columns of $\mathbf{U}_n$ can be interpreted as the principal components of the $n$th mode, but we do not require this in this work. The tensor is said to have Tucker-rank $(r_1, \ldots, r_N)$ if $\mathrm{rank}(\mathbf{X}_{(n)}) = r_n$ for $n = 1, \ldots, N$.
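For intuition, the Tucker-rank (multilinear rank) can be read off from the ordinary matrix ranks of the mode-$n$ unfoldings; the short NumPy sketch below, with illustrative function names, checks this on a random tensor built from a small core and random factors.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_rank(T):
    """Multilinear (Tucker) rank: the rank of each mode-n unfolding."""
    return tuple(np.linalg.matrix_rank(unfold(T, n)) for n in range(T.ndim))

# Build a 10 x 12 x 8 tensor with Tucker-rank (2, 3, 2) from a core and factors.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 3, 2))
U = [rng.standard_normal((d, r)) for d, r in zip((10, 12, 8), (2, 3, 2))]
X = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
print(tucker_rank(X))  # expected: (2, 3, 2)
```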
Multiple Tensor-on-Tensor Regression: An Approach for Modeling Processes With Heterogeneous Sources of Data
Published in Technometrics, 2021
Mostafa Reisi Gahrooei, Hao Yan, Kamran Paynabar, Jianjun Shi
In the past few years, multilinear algebra (and, in particular, tensor analysis) has shown promising results in many applications, from network analysis to anomaly detection and process monitoring (Sun, Papadimitriou, and Philip 2006; Sapienza et al. 2015; Yan, Paynabar, and Shi 2015). Nevertheless, only a few works in the literature use tensor analysis for regression modeling. Zhou, Li, and Zhu (2013) successfully employed tensor regression using the PARAFAC/CANDECOMP (CP) decomposition to estimate a scalar variable based on an image input. The CP decomposition approximates a tensor as a sum of several rank-1 tensors (Kiers 2000). Zhou, Li, and Zhu (2013) further extended their approach to a generalized linear model for tensor regression in which the scalar output follows any exponential family distribution. Li, Zhou, and Li (2013) performed tensor regression with scalar output using the Tucker decomposition. The Tucker decomposition is a form of higher-order PCA that decomposes a tensor into a core tensor multiplied by a matrix along each mode (Tucker 1963). Yan, Paynabar, and Pacella (2019) addressed the reverse problem, estimating point cloud data as an output using a set of scalar process variables. Recently, convex and nonconvex optimization frameworks have been proposed to deal with high-dimensional, multi-response tensor regression problems (Chen, Raskutti, and Yuan 2019; Raskutti, Yuan, and Chen 2019).
Image-Based Prognostics Using Penalized Tensor Regression
Published in Technometrics, 2019
Xiaolei Fang, Kamran Paynabar, Nagi Gebraeel
To further reduce the number of estimated parameters, the coefficient tensors are decomposed using two widely used tensor decomposition techniques, CP and Tucker. The CP decomposition expresses a high-dimensional coefficient tensor as a product of several smaller basis matrices (Carroll and Chang 1970). Tucker decomposition, in contrast, expresses a high-dimensional coefficient tensor as a product of a low-dimensional core tensor and several factor matrices (Tucker 1966). Thus, instead of estimating the coefficient tensor directly, we only need to estimate its corresponding core tensors and factor/basis matrices, which significantly reduces the computational complexity and the sample size required for estimation. The parameters of the reduced LLS regression model are estimated using the maximum likelihood (ML) approach. To obtain the ML estimates, we propose optimization algorithms for the CP-based and Tucker-based methods. The optimization algorithms are based on the block relaxation method (De Leeuw 1994; Lange 2010), which alternately updates one block of parameters while keeping the other parameters fixed. Finally, the estimated LLS regression is used to predict and update the RUL of a functioning system. In the following, the details of the proposed methodology are presented.
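As a rough illustration of the block relaxation idea, the toy sketch below alternately solves least-squares problems for the core and the factor matrices of a Tucker-structured coefficient in a scalar-on-matrix regression; it uses an ordinary squared-error objective in place of the article's LLS likelihood and penalties, and all function names, ranks, and data are hypothetical.

```python
import numpy as np

def tucker_block_relaxation(X, y, r1, r2, n_iter=50, seed=0):
    """Toy block-relaxation fit of y_i ~ <X_i, B> with a Tucker-structured
    coefficient B = U1 @ G @ U2.T (squared-error loss; illustrative only)."""
    rng = np.random.default_rng(seed)
    n, p1, p2 = X.shape
    U1 = rng.standard_normal((p1, r1))
    U2 = rng.standard_normal((p2, r2))
    G = rng.standard_normal((r1, r2))

    def solve(Z):
        # Least-squares update of one parameter block, others held fixed.
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return coef

    for _ in range(n_iter):
        # Core update: <X_i, B> = <U1.T @ X_i @ U2, G>
        Zg = np.stack([(U1.T @ Xi @ U2).ravel() for Xi in X])
        G = solve(Zg).reshape(r1, r2)
        # U1 update: <X_i, B> = <X_i @ U2 @ G.T, U1>
        Z1 = np.stack([(Xi @ U2 @ G.T).ravel() for Xi in X])
        U1 = solve(Z1).reshape(p1, r1)
        # U2 update: <X_i, B> = <X_i.T @ U1 @ G, U2>
        Z2 = np.stack([(Xi.T @ U1 @ G).ravel() for Xi in X])
        U2 = solve(Z2).reshape(p2, r2)

    return U1 @ G @ U2.T  # estimated coefficient (here, a matrix)

# Example with simulated 15 x 15 images and a rank-2 true coefficient.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 15, 15))
B_true = rng.standard_normal((15, 2)) @ rng.standard_normal((2, 15))
y = np.einsum('ijk,jk->i', X, B_true)
B_hat = tucker_block_relaxation(X, y, r1=2, r2=2, n_iter=30)
print(np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))  # small if converged
```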