Role of Dimensionality Reduction Techniques for Plant Disease Prediction
Published in Utku Kose, V. B. Surya Prasath, M. Rubaiyat Hossain Mondal, Prajoy Podder, Subrato Bharati, Artificial Intelligence and Smart Agriculture Technology, 2022
Muhammad Kashif Hanif, Shaeela Ayesha, Ramzan Talib
Isomap is an unsupervised learning method that uses non-linear functions to transform high-dimensional data into lower dimensions (Du et al., 2009). Isomap preserves the geodesic pairwise distances between data points (Najafi et al., 2016). It transforms high-dimensional data into a reduced feature set while preserving the manifold structure of the data (Gao et al., 2017). By retaining the intrinsic aspects of the data in lower dimensions, Isomap can enhance the classification accuracy of ML models (Huang et al., 2018). The classical version of Isomap lacks the generalization property and cannot exploit labelled data; however, its variants, such as supervised discriminative Isomap, extend its functionality and make it suitable for multi-nature data (Qu et al., 2021). Another extension, fast Isomap, was introduced to improve the accuracy of the neighborhood graph when embedding data into lower dimensions (Yousaf et al., 2021).
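The basic usage described above can be sketched in a few lines; this is a minimal illustration assuming scikit-learn's `Isomap` estimator and its synthetic swiss-roll dataset, not code from the cited works.

```python
# Minimal sketch: nonlinear dimensionality reduction with Isomap
# on a curved 3-D manifold (swiss roll), assuming scikit-learn.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=500, random_state=0)  # shape (500, 3)

# n_neighbors controls the graph used to approximate geodesic distances;
# n_components is the target (reduced) dimensionality.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (500, 2)
```

The reduced features can then be fed to any downstream classifier in place of the original high-dimensional inputs.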
Process Monitoring
Published in Jose A. Romagnoli, Ahmet Palazoglu, Introduction to Process Control, 2020
Jose A. Romagnoli, Ahmet Palazoglu
Novel data clustering and dimensionality reduction tools drawn from the computer science literature can also assist in extracting knowledge from process data. For example, Isomap [20] performs nonlinear dimensionality reduction by learning the nonlinear topology of the data: it fits a graph structure to the dataset and then projects the data with non-metric multidimensional scaling (MDS) based on the ordering. Additionally, Spectral Embedding, also known as Laplacian Eigenmaps [21], is a nonlinear manifold dimensionality reduction technique that uses the Laplacian of a graph built on the topology of the data set to project the data nonlinearly, in a manner that optimally preserves local neighborhood information and emphasizes clustering structures in the data.
Cooperative Localization in Wireless Sensor Networks via Kernel Isomap
Published in Chao Gao, Guorong Zhao, Hassen Fourati, Cooperative Localization and Navigation, 2019
Jyoti Kashniyal, Shekhar Verma, Krishna Pratap Singh
Isomap is a dimensionality reduction technique based on classical multidimensional scaling. It performs cooperative localization using the concept of geodesic distances for sensors with limited communication range. However, unlike distributed algorithms, it operates in batch mode and is inefficient at localizing a newly arrived node in the network (Li et al., 2013). Kernel Isomap (Choi and Choi, 2004) possesses the generalization property, in a spirit similar to kernel principal component analysis (KPCA) (Schölkopf, 1997). It uses a data-driven, positive semidefinite kernel matrix obtained by adding a constant to the geodesic distance kernel matrix. Like kernel Isomap, the out-of-sample extension of Isomap (Bengio et al., 2004) and Landmark Isomap (Silva and Tenenbaum, 2003) can position newly arrived nodes; in the latter two, however, the kernel matrix is not guaranteed to be positive semidefinite. Other kernel-based methods, such as kernel PCA and kernel locality preserving projection (KLPP) (Wang et al., 2009), use an externally defined kernel (e.g., Gaussian or polynomial) whose selection is based on network parameters.
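The positive-semidefiniteness issue mentioned above can be illustrated numerically. The sketch below (NumPy, SciPy, and scikit-learn assumed; it is a simplified spectral-shift illustration, not Choi and Choi's exact constant-adding construction on distances) shows that the doubly centred kernel built from graph geodesic distances is not guaranteed PSD, and that a constant shift repairs it.

```python
# Simplified illustration of the additive-constant idea behind kernel Isomap:
# make the centred geodesic-distance kernel positive semidefinite by a shift.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
pts = rng.normal(size=(40, 3))

# Shortest paths on a kNN graph approximate geodesic distances; unlike
# Euclidean distances, they need not yield a PSD centred kernel.
G = kneighbors_graph(pts, n_neighbors=8, mode="distance")
D = shortest_path(G, directed=False)

n = D.shape[0]
H = np.eye(n) - np.ones((n, n)) / n  # centring matrix
K = -0.5 * H @ (D ** 2) @ H          # doubly centred geodesic kernel

# Shift the spectrum (only if needed) so that K becomes PSD.
lam_min = np.linalg.eigvalsh(K).min()
K_psd = K - min(lam_min, 0.0) * np.eye(n)
print(np.linalg.eigvalsh(K_psd).min() >= -1e-9)  # True
```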
NNNPE: non-neighbourhood and neighbourhood preserving embedding
Published in Connection Science, 2022
Kaizhi Chen, Chengpei Le, Shangping Zhong, Longkun Guo, Ge Xu
Nonlinear dimensionality reduction approaches, in contrast to linear techniques, can handle complex nonlinear data and have thus attracted widespread attention. Many nonlinear dimensionality reduction algorithms have been proposed in recent decades, such as Isomap (Tenenbaum et al., 2000), LLE (Roweis & Saul, 2000), Laplacian eigenmaps (LE) (Belkin & Niyogi, 2001), Hessian LLE (Donoho & Grimes, 2003), and LTSA (Zhang & Zha, 2004). These algorithms exploit the nonlinear low-dimensional manifolds inherent in sample datasets lying in high-dimensional space. Isomap is a global approach that seeks to retain pairwise geodesic distances among data points in the low-dimensional space. By contrast, the other techniques are local methods. LLE and LE endeavour to preserve the local geometry of the data, so that points that are neighbours on the high-dimensional manifold remain neighbours on the low-dimensional manifold. Hessian LLE is similar to LE except that it replaces the manifold Laplacian with the manifold Hessian. Meanwhile, LTSA uses the local tangent space of each sample to characterise the local features of the high-dimensional data (Van Der Maaten et al., 2009). These nonlinear dimensionality reduction methods have the advantage of finding manifold embeddings, owing to the highly nonlinear manifolds of real-world data. However, their mappings are defined only on the training samples, not everywhere in the input space.
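Most of the methods listed above have reference implementations in scikit-learn, so the global-versus-local contrast can be explored directly; the sketch below (dataset choice and parameter values are illustrative assumptions) runs four of them on the same manifold.

```python
# Sketch: running several of the listed manifold learners on one dataset,
# assuming scikit-learn. Hessian LLE requires
# n_neighbors > n_components * (n_components + 3) / 2, hence 12 here.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

X, _ = make_swiss_roll(n_samples=600, random_state=1)

methods = {
    "isomap": Isomap(n_neighbors=10, n_components=2),            # global
    "lle": LocallyLinearEmbedding(n_neighbors=10, n_components=2),  # local
    "hessian_lle": LocallyLinearEmbedding(
        n_neighbors=12, n_components=2, method="hessian"),       # local
    "laplacian_eigenmaps": SpectralEmbedding(
        n_neighbors=10, n_components=2),                         # local
}
embeddings = {name: m.fit_transform(X) for name, m in methods.items()}
print({name: Y.shape for name, Y in embeddings.items()})
```

Note that none of these estimators learns an explicit map for arbitrary new points, reflecting the "not defined everywhere" limitation above (Isomap and LLE in scikit-learn do expose an out-of-sample `transform`, but it is an extension of the classical algorithms).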
Dimensionality reduction and classification for hyperspectral image based on robust supervised ISOMAP
Published in Journal of Industrial and Production Engineering, 2022
Shengfeng Ding, Colin Arthur Keal, Lizhou Zhao, Dan Yu
The basic idea of ISOMAP is that when the distribution of samples has a low-dimensional manifold structure, a representation of the data set in low-dimensional space can be obtained by isometric mapping. The core of the algorithm is to replace Euclidean distance with geodesic distance when representing pairwise distances between data samples. MDS is then used to project the data points from the high-dimensional space into a low-dimensional nonlinear topological space, yielding a low-dimensional manifold in which the geodesic distances between samples are kept unchanged; finally, the low-dimensional embedding coordinates of the samples are obtained. In MDS, the distance measure is the core of the whole algorithm, and the Euclidean distance measure can only reflect the linear structure of the data. If the original high-dimensional data are sampled from a highly distorted manifold, this metric fails to capture the intrinsic structure of the data. The algorithm consists of three steps: constructing a neighborhood graph, estimating geodesic distances via shortest paths, and applying MDS to obtain the embedding.
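Why the Euclidean metric fails on a distorted manifold can be shown with a tiny numerical example (NumPy, SciPy, and scikit-learn assumed; the half-circle dataset is an illustrative choice): the straight-line distance between the endpoints of a curve badly underestimates the along-manifold distance that shortest paths on a neighborhood graph recover.

```python
# Sketch: Euclidean (chord) distance vs. graph-approximated geodesic
# distance between the two endpoints of a half circle.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

t = np.linspace(0.0, np.pi, 100)
X = np.c_[np.cos(t), np.sin(t)]          # samples along a half circle

euclid = np.linalg.norm(X[0] - X[-1])    # chord length: 2.0
G = kneighbors_graph(X, n_neighbors=2, mode="distance")
geodesic = shortest_path(G, directed=False)[0, -1]  # ~ arc length (pi)

print(euclid, geodesic)
```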
An integrated manifold learning approach for high-dimensional data feature extractions and its applications to online process monitoring of additive manufacturing
Published in IISE Transactions, 2021
Chenang Liu, Zhenyu (James) Kong, Suresh Babu, Chase Joslin, James Ferguson
ISOMAP (Tenenbaum et al., 2000) is one of the most popular nonparametric dimension reduction approaches in manifold learning. It aims to preserve the metrics at all scales, thereby offering an appropriate low-dimensional space embedding. The key idea of ISOMAP is to estimate the geodesic distance (see Figure 2) between faraway points along the underlying structure of the data, instead of using the traditional Euclidean distance, when constructing the distance/similarity matrix. More specifically, for an original data set in a high-dimensional space, an initial distance matrix d_X can first be obtained based on a specified distance metric such as Euclidean distance, as shown in Equation (2):

d_X(i, j) = ||x_i − x_j||,    (2)

where x_i and x_j are any two points from the data set in the original high-dimensional space. Subsequently, to further estimate the geodesic distance between x_i and x_j, an undirected neighborhood graph G based on d_X is constructed, in which the nodes are represented by all the points in the data set and the connection relationship (i.e., the edges in G) between each pair of nodes is summarized by a similarity matrix S, as shown in Equation (3):

S(i, j) = 1 if d_X(i, j) ≤ ε, and S(i, j) = 0 otherwise,    (3)

where ε is a pre-defined threshold, termed the neighborhood size, that determines which edges are present in G. Then the geodesic distance between two points x_i and x_j, i.e., two nodes in G, can be approximated by using efficient shortest-path algorithms from graph theory, such as Dijkstra's algorithm and Floyd's algorithm (Thulasiraman and Swamy, 2011). Essentially, the resulting matrix D_G stores the estimated geodesic distance between any pair of points in the data set, based on which the low-dimensional embedding in feature space can be implemented by using effective linear transformation-based dimension reduction methods, e.g., MDS (Torgerson, 1952). For more details of this manifold learning approach, please refer to Tenenbaum et al. (2000).
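The full pipeline described above can be sketched end to end. This is a compact illustration assuming NumPy, SciPy, and scikit-learn; it uses a k-nearest-neighbor graph in place of the ε-threshold rule of Equation (3) (a common equivalent choice), and the helper name `isomap` is hypothetical.

```python
# Sketch of the ISOMAP pipeline: neighborhood graph -> shortest-path
# geodesic matrix D_G -> classical MDS on D_G for the embedding.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=10, n_components=2):
    # Step 1: neighborhood graph weighted by Euclidean distances (Eq. 2-3),
    # here built from k nearest neighbors rather than an epsilon threshold.
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    # Step 2: geodesic distances D_G via Dijkstra's shortest paths.
    D_G = shortest_path(G, method="D", directed=False)
    # Step 3: classical MDS on D_G (double centring + top eigenvectors).
    n = D_G.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = -0.5 * H @ (D_G ** 2) @ H
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

from sklearn.datasets import make_swiss_roll
X, _ = make_swiss_roll(n_samples=300, random_state=0)
Y = isomap(X)
print(Y.shape)  # (300, 2)
```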