Feature Generation for Graphs and Networks
Published in Guozhu Dong, Huan Liu, Feature Engineering for Machine Learning and Data Analytics, 2018
Yuan Yao, Hanghang Tong, Feng Xu, Jian Lu
Next, we divide existing graph feature generation methods into feature extraction approaches and feature learning approaches. For feature extraction approaches, the key is to define the feature functions. For feature learning approaches (i.e., graph embedding), we systematically review the recent progress. The basic idea of feature learning approaches is to automatically learn features for graph nodes, based on the assumption that nodes in the same context tend to be similar to each other. In particular, we introduce three types of basic graph embedding models (the neural network method, the factorization-based method, and the regularization-based method), and summarize several extensions of these basic models that have been studied in the literature. The extensions include whether to preserve community structure, whether to incorporate node attributes, whether to use supervision information, as well as several special types of graphs (i.e., directed, signed, and heterogeneous graphs).
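To make the factorization-based family concrete, here is a minimal sketch (ours, not from the chapter) that derives node features from a truncated SVD of the adjacency matrix; the toy graph, the dimension d, and the singular-value scaling are all assumptions for illustration.

```python
import numpy as np
import networkx as nx

# Illustrative factorization-based node features via truncated SVD
# of the adjacency matrix; graph and dimension d are assumed.
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)          # dense adjacency matrix
d = 8                             # target embedding dimension (assumed)

# A ~= U * diag(S) * Vt; the leading d left singular vectors,
# scaled by the singular values, serve as d-dimensional node features.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
embedding = U[:, :d] * np.sqrt(S[:d])   # one d-dimensional row per node

print(embedding.shape)            # (number_of_nodes, d)
```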
Floorplanning: Early Research
Published in Charles J. Alpert, Dinesh P. Mehta, Sachin S. Sapatnekar, Handbook of Algorithms for Physical Design Automation, 2008
A graph embedding is a particular drawing of a graph on a surface (often the two-dimensional plane) such that its edges intersect only at their endpoints. A graph is planar if there exists an embedding of it on a plane, whereas a plane graph is an embedding of a planar graph on a plane [5]. By definition, a rectangular floorplan F is embedded on a plane, which implies that its rectangular graph R is a plane graph. Because all intersections of cuts of F form T-junctions, all the internal faces of R are triangles bounded by three edges. Hence, a rectangular graph is a plane triangulated graph [12,13].
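As a quick illustration of the planar/plane distinction, the sketch below (ours, not from the handbook) uses networkx's check_planarity, which returns a planarity verdict together with a combinatorial plane embedding when one exists; the example graphs are our own choices.

```python
import networkx as nx

# Illustrative only: K4 is planar, K5 is the classic non-planar graph.
k4 = nx.complete_graph(4)
k5 = nx.complete_graph(5)

# check_planarity returns (is_planar, certificate); when the graph is
# planar, the certificate is a PlanarEmbedding, i.e. a concrete plane
# graph realizing the planar graph in the plane.
is_planar, embedding = nx.check_planarity(k4)
print(is_planar)                               # True
print(list(embedding.neighbors_cw_order(0)))   # clockwise edge order at node 0

is_planar, cert = nx.check_planarity(k5)
print(is_planar)   # False; cert is None unless counterexample=True is passed
```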
Graph Edit Distance—Theory, Algorithms, and Applications
Published in Olivier Lézoray, Leo Grady, Image Processing and Analysis with Graphs, 2012
Graph embedding [92] aims at converting graphs into real vectors and then operating in the associated space, making typical graph-based tasks such as matching and clustering easier [93, 94]. To this end, different graph embedding procedures have been proposed in the literature. Some of them are based on spectral graph theory; others take advantage of typical similarity measures to perform the embedding task. For instance, a relatively early approach based on the adjacency matrix of a graph is proposed in [53]. In this work, graphs are converted into a vector representation using spectral features extracted from the adjacency matrix. These vectors are then embedded into eigenspaces using the eigenvectors of the covariance matrix of the vectors, and the approach is applied to graph clustering. Another approach is presented in [95]. This work is similar to the previous one, but here the authors use the coefficients of symmetric polynomials, constructed from the spectral features of the Laplacian matrix, to represent the graphs in vectorial form. Finally, in a recent approach [96], the idea is to embed the nodes of a graph into a metric space and view the graph edge set as geodesics between pairs of points on a Riemannian manifold; the problem of matching the nodes of a pair of graphs is then viewed as the alignment of the embedded point sets. In this section we describe a new class of graph embedding procedures based on the selection of prototypes and on graph edit distance computation. The basic intuition is that describing regularities in the observation of classes and objects is the basis of pattern classification. Thus, given a set of selected prototypes, each graph is embedded into a vector space by taking its distance to all of these prototypes. Assuming the prototypes have been chosen appropriately, each class will form a compact zone in the vector space.
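A minimal sketch of this prototype-based idea (ours, assuming networkx's exact graph_edit_distance stands in for the chapter's edit-distance computation; the prototypes and query graph are chosen arbitrarily for illustration):

```python
import networkx as nx

def prototype_embedding(graph, prototypes):
    """Embed a graph as the vector of its edit distances to the prototypes.

    Sketch only: networkx's graph_edit_distance is exact but exponential
    in the worst case, so practical systems use approximate variants.
    """
    return [nx.graph_edit_distance(graph, p) for p in prototypes]

# Hypothetical prototypes and query graph, for illustration only.
prototypes = [nx.path_graph(4), nx.cycle_graph(4), nx.star_graph(3)]
query = nx.path_graph(5)

vector = prototype_embedding(query, prototypes)
print(vector)   # one coordinate per prototype
```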
A graph-based context-aware requirement elicitation approach in smart product-service systems
Published in International Journal of Production Research, 2021
Zuoxu Wang, Chun-Hsien Chen, Pai Zheng, Xinyu Li, Li Pheng Khoo
Deepwalk (Perozzi, Al-Rfou, and Skiena 2014) is a graph embedding method for learning latent representations of the nodes in a graph. The Deepwalk algorithm consists of two main components, namely a random walk generator and Skipgram (Mikolov et al. 2013). It uses truncated random walks to generate node sequences and treats the sequences as the equivalent of sentences. A random walk is denoted as a vector $W_{v_i}$ rooted at vertex $v_i$ within a window of size $w$, i.e. $W_{v_i} = (W_{v_i}^1, W_{v_i}^2, \dots, W_{v_i}^w)$. The choice of successive nodes in a walk is a stochastic process. Table 2 shows the pseudo-code for generating a random walk in a requirement graph.
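The paper's Table 2 is not reproduced here, but a minimal walk generator in its spirit might look as follows (ours; the toy graph, walk length, and function name are assumptions):

```python
import random
import networkx as nx

def truncated_random_walk(graph, root, walk_length):
    """Generate one truncated random walk rooted at `root`.

    At each step the next node is drawn uniformly from the current
    node's neighbours, so the walk is a simple stochastic process.
    """
    walk = [root]
    while len(walk) < walk_length:
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:            # dead end: stop the walk early
            break
        walk.append(random.choice(neighbors))
    return walk

# Toy graph standing in for a requirement graph (illustrative only).
G = nx.karate_club_graph()
walks = [truncated_random_walk(G, v, walk_length=10) for v in G.nodes()]
print(walks[0])   # a node sequence treated like a sentence
```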
Vital node searcher: find out critical node measure with deep reinforcement learning
Published in Connection Science, 2022
Guanting Du, Fei Zhu, Quan Liu
In the real world, graphs often exist in a high-dimensional form (Grover & Leskovec, 2016), which makes graph data very difficult to handle; many existing methods (Perozzi et al., 2014) cannot be run directly on graph structures. The main purpose of graph embedding algorithms (Tang et al., 2015) is to reduce the dimensionality of graph structure information by embedding the nodes of the graph into a d-dimensional vector space (Yan et al., 2007). Graph embedding algorithms (Xu et al., 2020) are broadly classified into three categories: factor-decomposition-based methods, random-walk-based methods, and deep-learning-based methods.
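To make the random-walk category concrete, a short end-to-end sketch (ours; all parameter values are assumptions) can feed truncated walks like those above into gensim's Skipgram implementation to obtain d-dimensional node vectors:

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy graph and walk generation (illustrative parameter choices).
G = nx.karate_club_graph()
walks = []
for v in G.nodes():
    walk = [v]
    for _ in range(9):
        walk.append(random.choice(list(G.neighbors(walk[-1]))))
    walks.append([str(n) for n in walk])   # gensim expects token strings

# Skipgram (sg=1) embeds each node into a d-dimensional vector space.
d = 16
model = Word2Vec(sentences=walks, vector_size=d, window=5,
                 min_count=1, sg=1, epochs=5, seed=42)
print(model.wv[str(0)].shape)   # (d,)
```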
Robust principal component analysis with projection learning for image classification
Published in Journal of Modern Optics, 2020
As described in Definition 1, graph embedding (GE) technology is a dimensionality reduction method based on the local neighbourhood information between data samples. In this paper, we use a GE technology that is based on the Locality Preserving Projection (LPP). The optimal projection $P$ derived from LPP is obtained as
$$\min_{P}\ \sum_{i,j}\left\| P^{\top}x_i - P^{\top}x_j \right\|^2 W_{ij} \quad \text{s.t.}\quad P^{\top}XDX^{\top}P = I,$$
where $D$ is a diagonal matrix whose entries are $D_{ii}=\sum_j W_{ij}$, and $W_{ij}$ represents a distance weighting coefficient between $x_i$ and $x_j$ in the original space. If $x_j$ is one of the $k$ nearest neighbours of $x_i$ or vice versa, the heat kernel function is adopted for calculating $W_{ij}$; otherwise, $W_{ij}$ is equal to $0$. The formulation of $W_{ij}$ is written as follows:
$$W_{ij} = \begin{cases} \exp\!\left(-\left\|x_i - x_j\right\|^2 / t\right), & x_j \in N_k(x_i) \text{ or } x_i \in N_k(x_j),\\ 0, & \text{otherwise}, \end{cases}$$
where $t$ is a self-defined constant and $N_k(x_i)$ denotes the $k$ nearest neighbours of $x_i$. The choice of $k$ is an open question. If $k$ is too small, the neighbourhood is small; representing a data sample with too few neighbours leaves insufficient data to reconstruct the sample, and the resulting error increases. If $k$ is too large, the neighbourhood may contain noise points and a large number of samples from different classes, and the corresponding error also increases.
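A compact sketch of this LPP construction (ours, for illustration; the data, k, t, and the small regularizer are assumptions) builds the heat-kernel weights over a k-nearest-neighbour graph and solves the generalized eigenproblem for the projection P:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import NearestNeighbors

def lpp(X, n_components=2, k=5, t=1.0):
    """Locality Preserving Projection sketch.

    X: (n_samples, n_features). Returns P whose columns are the
    eigenvectors with smallest eigenvalues of the generalized problem
    X L X^T p = lam X D X^T p, i.e. the optimal LPP directions.
    """
    n = X.shape[0]

    # Symmetric k-NN adjacency with heat kernel weights W_ij.
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    W = np.zeros((n, n))
    for i in range(n):
        for j in idx[i][1:]:                 # skip the point itself
            w = np.exp(-np.sum((X[i] - X[j]) ** 2) / t)
            W[i, j] = W[j, i] = w            # "or vice versa" symmetry

    D = np.diag(W.sum(axis=1))
    L = D - W                                # graph Laplacian

    # Generalized eigenproblem; tiny regularizer keeps X D X^T definite.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])
    eigvals, eigvecs = eigh(A, B)            # eigenvalues in ascending order
    return eigvecs[:, :n_components]

# Illustrative usage on random data (assumed shapes and parameters).
X = np.random.default_rng(0).normal(size=(100, 10))
P = lpp(X, n_components=2, k=5, t=2.0)
print((X @ P).shape)                         # (100, 2) embedded samples
```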