Next-Generation IoT Use Cases across Industry Verticals Using Machine Learning Algorithms
Published in Pethuru Raj Chelliah, Usha Sakthivel, Nagarajan Susila, Applied Learning Algorithms for Intelligent IoT, 2021
T. R. Kalaiarasan, Sruthi Anand, V. Anandkumar, A. M. Ratheeshkumar
Output: The information inferred during the processing phase is rendered into a human-understandable format in the output phase. The output, whether text, graph, table, image, audio, or video, can be stored locally or in the cloud for further processing.
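The render-then-store pattern described above can be sketched as follows. This is a minimal illustration, not code from the chapter; the function names `render_output` and `store_locally`, and the example sensor readings, are hypothetical.

```python
import tempfile
from pathlib import Path

def render_output(inference: dict) -> str:
    """Render inferred key/value results as a human-readable text report."""
    width = max(len(k) for k in inference)
    return "\n".join(f"{k.ljust(width)} : {v}" for k, v in inference.items())

def store_locally(text: str, path: Path) -> Path:
    """Persist the rendered output to local storage for further processing.

    In a cloud deployment this step would instead upload the report
    via the provider's object-storage API.
    """
    path.write_text(text)
    return path

# Hypothetical inference result from the processing phase.
inference = {"temperature": 22.5, "anomaly": False}
report = render_output(inference)
out_file = store_locally(report, Path(tempfile.gettempdir()) / "iot_output.txt")
print(report)
```

The same rendered report could equally be serialised as a table, chart, or JSON payload; the key point is that the output phase decouples presentation and storage from inference.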
GP-GCN: Global features of orthogonal projection and local dependency fused graph convolutional networks for aspect-level sentiment classification
Published in Connection Science, 2022
Subo Wei, Guangli Zhu, Zhengyan Sun, Xiaoqing Li, TienHsiung Weng
The motivation of this section is to learn global dependency attention weights, which requires extracting global information about words. We prune the corpus by orthogonal projection to reduce interference factors and assign appropriate weights to words with special meanings. For extracting the global features of words, two graph structures are prevalent: the word co-occurrence graph and the text graph. Compared with the word co-occurrence graph (Peng et al., 2018), the text graph (Yao et al., 2019) contains two kinds of nodes, words and sentences. Taking sentence nodes as a bridge, we can capture long-term word relations across the whole corpus, further enhancing word representation learning. Therefore, we choose a text graph as the input to the GCN model.
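A text graph of the kind described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: edges are binary occurrence and co-occurrence indicators, whereas TextGCN-style models typically weight them (e.g. by TF-IDF and PMI), and the two example sentences are hypothetical.

```python
import numpy as np

# Toy corpus: each "document" is one tokenised sentence.
docs = [["the", "movie", "was", "great"],
        ["the", "food", "was", "bad"]]

vocab = sorted({w for d in docs for w in d})
w_idx = {w: i for i, w in enumerate(vocab)}
n_words, n_docs = len(vocab), len(docs)
n = n_words + n_docs  # word nodes first, then sentence nodes

A = np.eye(n)  # self-loops, as is usual for GCN inputs
# Word-sentence edges: a word is linked to every sentence it occurs in.
for d, doc in enumerate(docs):
    for w in doc:
        A[w_idx[w], n_words + d] = 1.0
        A[n_words + d, w_idx[w]] = 1.0
# Word-word edges: co-occurrence within the same sentence.
for doc in docs:
    for i, wi in enumerate(doc):
        for wj in doc[i + 1:]:
            A[w_idx[wi], w_idx[wj]] = 1.0
            A[w_idx[wj], w_idx[wi]] = 1.0
```

Note how shared words such as "the" and "was" connect both sentence nodes, so even words that never co-occur ("great" and "bad") are linked by a two-hop path through a sentence node; this bridging is what lets the GCN propagate corpus-level information.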
A text classification method based on LSTM and graph attention network
Published in Connection Science, 2022
We regard each text as a graph structure, which helps us capture relations between long-distance, discontinuous words. The purpose of this module is to convert each input text into a text graph. First, we use the natural language processing tool Stanford CoreNLP to analyse the dependency syntax of the input sentence (Jia & Wang, 2022) and generate its syntactic dependency tree, from which the text graph is constructed. To enrich the text features, the text graph is treated as an undirected graph G = (V, E), represented by an adjacency matrix A, where V (|V| = n) and E are the sets of nodes and edges, respectively. Nodes represent words, and an edge represents a syntactic dependency between two words. The syntactic dependency tree of the example sentence "The woman wrote a book" is shown in Figure 2: "woman" is the subject of the predicate "wrote", and "book" is the direct object of "wrote". The adjacency matrix corresponding to this example is shown in Figure 3.
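The construction of the adjacency matrix from the dependency tree can be sketched as follows for the example sentence. This is an illustrative sketch rather than the paper's code: the dependency edges are hard-coded here instead of being produced by Stanford CoreNLP, and whether Figure 3 includes self-loops is not stated, so none are added.

```python
import numpy as np

sentence = ["The", "woman", "wrote", "a", "book"]
# Dependency edges as (head, dependent) word indices; in practice these
# come from the parser's dependency tree.
edges = [(1, 0),  # det(woman, The)
         (2, 1),  # nsubj(wrote, woman)
         (4, 3),  # det(book, a)
         (2, 4)]  # dobj(wrote, book)

n = len(sentence)
A = np.zeros((n, n), dtype=int)
for head, dep in edges:
    # The graph is undirected, so the matrix is symmetric.
    A[head, dep] = A[dep, head] = 1

print(A)
```

Because the graph is undirected, A is symmetric, and row i lists exactly the words that word i depends on or governs in the parse.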