Data Mining – Unsupervised Learning
Published in Rakesh M. Verma, David J. Marchette, Cybersecurity Analytics, 2019
Rakesh M. Verma, David J. Marchette
The idea of spectral clustering is to partition a graph. Assume that we have a set of n data items 1, 2, …, n, or "points," and a pairwise similarity function s_ij (or distance function d_ij). We can then turn this into an undirected graph with the points as vertices in several different ways:
- The ε-neighborhood graph, an unweighted graph in which we connect all points whose pairwise distance is at most ε.
- The k-nearest neighbor graph. Here we have two choices: (i) we connect vertices i and j provided either i is among the k nearest neighbors of j or vice versa; this is called the k-nearest neighbor graph. (ii) we connect vertices i and j provided both i is among the k nearest neighbors of j and vice versa; this is called the mutual k-nearest neighbor graph. In either approach, the weight of the edge {i, j} is w_ij = s_ij.
- The fully connected graph, a weighted graph in which we connect every pair of points i, j, again with w_ij = s_ij.
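To make the three constructions concrete, here is a minimal NumPy sketch (not from the chapter) that builds all of them from a data matrix, assuming a Gaussian similarity s_ij = exp(-d_ij² / 2σ²); the values of ε, k, and σ are illustrative, not recommendations.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def similarity_graphs(X, eps=0.5, k=5, sigma=1.0):
    """Return the adjacency matrices of the three graph variants described above.

    X is an (n x d) data matrix; eps, k and sigma are illustrative choices.
    Weights use the Gaussian similarity s_ij = exp(-d_ij^2 / (2 sigma^2)).
    Assumes no duplicate points, so column 0 of each sorted row is the point itself.
    """
    D = squareform(pdist(X))                      # pairwise distances d_ij
    S = np.exp(-D**2 / (2 * sigma**2))            # pairwise similarities s_ij
    np.fill_diagonal(S, 0.0)

    # epsilon-neighborhood graph: unweighted, connect points with 0 < d_ij <= eps
    eps_graph = ((D <= eps) & (D > 0)).astype(float)

    # knn[i, j] = True if j is among the k nearest neighbors of i
    order = np.argsort(D, axis=1)[:, 1:k + 1]     # skip the point itself
    knn = np.zeros_like(D, dtype=bool)
    rows = np.repeat(np.arange(len(X)), k)
    knn[rows, order.ravel()] = True

    knn_graph = np.where(knn | knn.T, S, 0.0)         # "either": k-NN graph
    mutual_knn_graph = np.where(knn & knn.T, S, 0.0)  # "both": mutual k-NN graph

    fully_connected = S                            # weighted complete graph
    return eps_graph, knn_graph, mutual_knn_graph, fully_connected
```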
Machine-Level Case Study: Fingerprint of Industrial Motors
Published in Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza, Industrial Applications of Machine Learning, 2019
Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza
The parameters of the SM algorithm are the similarity graph (affinity matrix) and the number of clusters K:
- Similarity graph, which is built using the pairwise similarities or pairwise distances from the instances in D. In Section 2.3.3, three different similarity graphs were described: the ε-neighborhood graph, the k-nearest neighbor graph, and the fully connected graph. In this case study, the k-nearest neighbor graph has been used.
- Number of clusters K, chosen as described for the K-means algorithm.
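As an illustration of these same two parameters, the following hedged sketch uses scikit-learn's SpectralClustering with a k-nearest neighbor affinity graph; the data, K, and n_neighbors are invented for the example, and scikit-learn's normalized-cut implementation is used only as a stand-in for the SM algorithm, not as a reproduction of the case study.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy data standing in for the instance set D (hypothetical values).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [3, 0], [0, 3])])

K = 3              # number of clusters, assumed known for this example
k_neighbors = 10   # neighborhood size of the k-NN similarity graph (illustrative)

model = SpectralClustering(
    n_clusters=K,
    affinity="nearest_neighbors",   # build a k-nearest neighbor affinity graph internally
    n_neighbors=k_neighbors,
    assign_labels="kmeans",
    random_state=0,
)
labels = model.fit_predict(X)
print(np.bincount(labels))          # cluster sizes
```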
*
Published in Fadi Al-Turjman, Wireless Sensor Networks, 2018
In non-grid-based deployment, we try to construct a smaller set of candidate positions by selecting points with a geometrical property that makes them more likely to serve as SPRN locations. This scheme constructs a set of edges connecting WSN sectors, and the search space of SPRN locations is limited to points along those edges. We use the Delaunay Triangulation (DT) and the Steiner tree of the virtual SSNs to construct these edges. These two geometrical structures possess several nice properties that make them good sources of potential SPRN locations. The DT, for example, is a super-graph of both the Nearest Neighbor Graph (NNG) and the Euclidean MST. The Steiner tree, in turn, connects a set of points (i.e., the SSNs here) with a network of edges of minimum total length. These properties seem to be useful in federating WSN sectors: it is intuitive that SPRNs will be used to connect sectors that are close to each other, hence the use of NNG edges, while meeting the limited budget of SPRNs requires minimum-length spanning edges, hence the use of the Steiner tree and MST.
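A minimal sketch (not from the chapter) of how such candidate edges might be generated with SciPy follows; the sector points are invented, the Steiner tree is omitted because SciPy has no routine for it, and only the Delaunay triangulation and the Euclidean MST are built.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import Delaunay, distance_matrix

# Hypothetical sector representative points (e.g., virtual SSN locations).
rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(12, 2))

# Delaunay triangulation: its edge set is a super-graph of the NNG and the Euclidean MST.
tri = Delaunay(points)
dt_edges = set()
for simplex in tri.simplices:
    for a in range(3):
        for b in range(a + 1, 3):
            dt_edges.add(tuple(sorted((simplex[a], simplex[b]))))

# Euclidean MST over the same points, as a second source of spanning edges.
mst = minimum_spanning_tree(distance_matrix(points, points)).tocoo()
mst_edges = {tuple(sorted((i, j))) for i, j in zip(mst.row, mst.col)}

print(len(dt_edges), "Delaunay edges;", len(mst_edges), "MST edges")
print("MST is a subset of the DT edge set:", mst_edges <= dt_edges)

# Candidate SPRN positions could then be sampled along these edges,
# e.g., at regular intervals on each Delaunay edge.
```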
Health assessment of wind turbine based on Laplacian eigenmaps
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2020
Tao Liang, Zhaochao Meng, Jie Cui, Zongqi Li, Huan Shi
(1) Constructing Neighbor Graphs: For each sample point x_i, construct its k-nearest neighbor set. If the sample points x_i and x_j are nearest neighbors, they are connected by an edge to form a k-nearest neighbor graph.
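A hedged Python sketch of this step and the subsequent embedding is shown below; the data matrix and the neighborhood size k are invented for illustration, and scikit-learn's SpectralEmbedding is used as a stand-in for the Laplacian eigenmaps computation described in the paper.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import kneighbors_graph

# Placeholder feature matrix standing in for the wind-turbine condition samples
# (values are invented; each row is one sample point).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

k = 10  # neighborhood size; an assumption, the paper's value is not given here

# Step (1): k-nearest neighbor graph, symmetrized so that an edge exists whenever
# either sample is among the other's k nearest neighbors.
W = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)

# Remaining Laplacian-eigenmaps steps (graph Laplacian and eigen-decomposition)
# via SpectralEmbedding with the precomputed adjacency matrix.
embedding = SpectralEmbedding(n_components=2, affinity="precomputed", random_state=0)
Y = embedding.fit_transform(W.toarray())
print(Y.shape)  # (200, 2): low-dimensional coordinates used for health assessment
```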