Matrix and Tensor Signal Modelling in Cyber Physical Systems
Published in Panagiotis Tsakalides, Athanasia Panousopoulou, Grigorios Tsagkatakis, Luis Montestruque, Smart Water Grids, 2018
Grigorios Tsagkatakis, Konstantina Fotiadou, Michalis Giannopoulos, Anastasia Aidini, Athanasia Panousopoulou, Panagiotis Tsakalides
Given a set of signal measurements $\mathbf{Y}$, the K-SVD algorithm for dictionary learning [3] searches for a dictionary matrix $\mathbf{D}$ that can efficiently represent each example in this set under strict sparsity constraints. K-SVD follows an iterative approach that alternates between sparse coding of the examples with respect to the current dictionary and a dictionary update step, in order to find a better representation of the data. Given an initialized dictionary $\mathbf{D}^{(0)}$, the K-SVD algorithm performs a two-stage procedure at each iteration $k$, namely sparse coding followed by the dictionary update.
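The two stages described above can be illustrated with a minimal numerical sketch. The fragment below is not the chapter's implementation: the function name ksvd_iteration is assumed, the sparse-coding stage is delegated to scikit-learn's orthogonal_mp, and the dictionary update uses the rank-one SVD of the residual restricted to the signals that use each atom.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp  # greedy OMP sparse coder

def ksvd_iteration(Y, D, sparsity):
    """One K-SVD iteration: sparse coding followed by a dictionary update (sketch)."""
    # Stage 1: sparse coding -- represent each column of Y with at most
    # `sparsity` atoms of the current dictionary D.
    X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)

    # Stage 2: dictionary update -- refine one atom at a time using a
    # rank-one SVD of the residual restricted to the signals that use it.
    for k in range(D.shape[1]):
        users = np.nonzero(X[k, :])[0]
        if users.size == 0:
            continue  # atom unused in this iteration; leave it unchanged
        # Residual without the contribution of atom k, on the signals that use it.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]               # new (unit-norm) atom
        X[k, users] = s[0] * Vt[0, :]   # matching coefficients
    return D, X
```

In practice the iteration is repeated until the representation error stops decreasing or a fixed number of iterations is reached.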
Dictionary Learning
Published in Angshul Majumdar, Compressed Sensing for Engineers, 2018
K-SVD proceeds in two stages. In the first stage, it learns the dictionary, and in the next stage, it uses the learned dictionary to sparsely represent the data. K-SVD employs the greedy (sub-optimal) orthogonal matching pursuit (OMP) [30] to approximately solve the l0-norm minimization problem. In the dictionary learning stage, K-SVD proposes an efficient technique that estimates the atoms one at a time through a rank-one update. The major disadvantage of K-SVD is that it is relatively slow, owing to the SVD (singular value decomposition) it must compute in every iteration.
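For concreteness, the greedy OMP step referred to here can be sketched as follows. This is an illustrative implementation, not the book's code; the function name omp and its interface are assumed.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit (sketch): repeatedly pick the atom
    most correlated with the residual, then re-fit coefficients by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom whose correlation with the current residual is largest.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Orthogonal projection of y onto the span of the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x
```

Because the support grows by one atom per step and each step solves a small least-squares problem, OMP is fast but only approximates the true l0-norm solution.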
Sparse representation-based classification for the planetary gearbox with improved KPCA and dictionary learning
Published in Systems Science & Control Engineering, 2020
SRC includes two stages: sparse coding and classification. First, testing samples are encoded under a predefined dictionary; then the coding coefficients and the dictionary are used to perform fault classification. According to the theory of sparse representation (Donoho, 2006), the choice of dictionary is especially important for the classification performance of SRC. In Wright et al. (2009), the dictionary was built directly from all the training samples. The classification results of that method were only modest, because atoms from different categories share common factors. In addition, the computational complexity is high, since the dictionary contains a large number of atoms. To address these problems, a process called dictionary learning is needed to build a high-quality dictionary with fewer atoms. Among dictionary learning methods, K-singular value decomposition (K-SVD) has been widely used owing to its strong convergence behaviour. An intelligent fault diagnosis method for rotating machinery was proposed in Han, Jiang, Sun (2018), which uses K-SVD to learn the dictionary. However, since the sample dimension in that method is still high, the dimension of the vectors needs to be reduced further before applying K-SVD. Feature extraction can reduce both the dimension and the computational burden, which facilitates real-time fault diagnosis.
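The classification stage of SRC can be sketched as below. This is a generic illustration rather than the paper's method: the names src_classify and atom_labels are assumed, and OMP stands in for whatever sparse coder is used; the decision rule (smallest per-class reconstruction residual) follows Wright et al. (2009).

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(D, atom_labels, y, sparsity):
    """SRC sketch: encode the test sample over the whole dictionary, then
    assign the class whose atoms give the smallest reconstruction residual."""
    x = orthogonal_mp(D, y, n_nonzero_coefs=sparsity)
    residuals = {}
    for c in np.unique(atom_labels):
        mask = (atom_labels == c)
        x_c = np.where(mask, x, 0.0)            # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get)
```

Replacing the raw training samples in D with a smaller K-SVD-learned dictionary (with class labels kept per atom) reduces both the number of atoms and the coding cost, which is the motivation given above.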