Extracting Design Models from Natural Language Descriptions
Published in Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, edited by Don Potter, Manton Matthews, and Moonis Ali, 2020
The natural language source documents are actually long strings of characters with frequent embedded blanks. Lexical analysis breaks the document string into lexical units, or tokens: words, acronyms, symbolic names, and punctuation. Lexical analysis may also use morphological analysis to determine verb forms and noun number. The ASPIN system uses a simple scanner and dictionary for lexical analysis; currently, morphological analysis is avoided by storing all interesting forms of words in the dictionary. The lexical analyzer is built into the parser, so the grammatical categories of words are retrieved during dictionary look-up. Words not found in the dictionary are classified as identifiers: acronyms or symbolic names, the equivalent of proper nouns in standard English.
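To make the scanner-plus-dictionary approach concrete, here is a minimal Python sketch (not ASPIN's actual code): known word forms are stored with their grammatical categories, and any word absent from the dictionary falls back to the identifier category. The dictionary entries, token regular expression, and sample sentence are illustrative assumptions.

```python
import re

# Illustrative dictionary: each known word form maps to its grammatical
# category. All "interesting" forms are stored explicitly, so no
# morphological analysis is needed (e.g. both "connect" and "connects").
DICTIONARY = {
    "the": "determiner",
    "signal": "noun",
    "signals": "noun",
    "connect": "verb",
    "connects": "verb",
    "to": "preposition",
}

# A token is a run of word characters, or a single punctuation character.
TOKEN_PATTERN = re.compile(r"[A-Za-z0-9_]+|[^\sA-Za-z0-9_]")

def tokenize(document: str):
    """Break the document string into tokens and retrieve each word's
    grammatical category during dictionary look-up; words not found in
    the dictionary are classified as identifiers (acronyms or symbolic
    names, like proper nouns in standard English)."""
    for lexeme in TOKEN_PATTERN.findall(document):
        if not lexeme[0].isalnum() and lexeme[0] != "_":
            yield (lexeme, "punctuation")
        else:
            yield (lexeme, DICTIONARY.get(lexeme.lower(), "identifier"))

print(list(tokenize("The signal SIG_RESET connects to U42.")))
```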
Development of a CNC interpretation service with good performance and variable functionality
Published in International Journal of Computer Integrated Manufacturing, 2022
Since G code is a context-sensitive language, its analysis part consists of lexical analysis, syntactic analysis, and semantic analysis. Lex&Yacc, an off-the-shelf compilation toolset, is widely used to implement the analysis part (Xu et al. 2007; Xu and Ye 2007; Wang and Zhou 2017). The Lex tool performs lexical analysis by splitting the source program into tokens according to the lex specification of the source language, while the Yacc tool performs syntactic analysis by recovering the hierarchical structure of the source program according to the yacc specification. The lex specification defines a set of token patterns for the source language in regular-expression format, and the yacc specification describes the syntax rules, along with some semantic rules, of the source language in Backus-Naur Form (BNF) (Levine, Mason, and Brown 1995). However, Lex&Yacc was developed in the mid-1970s, and its design predates newer technologies in computer science such as the object-oriented paradigm (OOP) (Ivantsov 2008). Its disadvantages include the following: both the lex and yacc specifications have a fixed, complex structure and are written in special meta-languages; the Lex and Yacc tools must both be installed; and lexical analysis and syntactic analysis are implemented separately. ANTLR is another off-the-shelf compilation tool that has been used to build G code interpreters (Yu 2008). Its parsers use a newer parsing technology called Adaptive LL(*) (Parr 2013). LL(*) parsers are easier to read and write than LR-style parsers such as those generated by Yacc; on the other hand, they are less powerful and accept a much smaller set of grammars (Aho et al. 2007).
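As an illustration of the lex-style, pattern-per-token approach described above, the following Python sketch tokenizes a single G-code block with named regular expressions. The token classes and patterns are simplified assumptions for a tiny G-code subset, not the specification used by any of the cited interpreters.

```python
import re

# Hypothetical token patterns, one per token class, in the spirit of a
# lex specification (each class is a named regular expression).
TOKEN_SPEC = [
    ("LINE_NO", r"N\d+"),                    # block number, e.g. N10
    ("GWORD",   r"G\d+\.?\d*"),              # preparatory function, e.g. G01
    ("MWORD",   r"M\d+"),                    # miscellaneous function, e.g. M03
    ("COORD",   r"[XYZIJKF]-?\d+(\.\d+)?"),  # axis and feed words
    ("COMMENT", r"\(.*?\)"),                 # parenthesized comment
    ("SKIP",    r"[ \t]+"),                  # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex_gcode(line: str):
    """Split one G-code block into (token_class, lexeme) pairs."""
    for m in MASTER.finditer(line):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(lex_gcode("N10 G01 X12.5 Y-3.0 F200 (linear move)")))
```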
Study on the classification problem of the coping stances in the Satir model based on machine learning
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2023
Xi Wang, Yu Zhao, Guangping Zeng, Peng Xiao, Zhiliang Wang
There were two types of features used in this study. The first was the appearance and occurrence frequency of emotional vocabulary, obtained from the results of Chinese word segmentation performed by the Institute of Computing Technology Chinese Lexical Analysis System (ICTCLAS) on our psychological counselling database; these features form the word-feature training set of our text-word vector. The Satir model states that one's coping stances are determined by his or her level of concern about the 'self,' 'others,' and 'situation' factors.
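A hedged sketch of how such word features might be computed: given an utterance already segmented into tokens (the paper uses ICTCLAS for this step, which is omitted here), count the appearance and occurrence frequency of words from an emotional lexicon. The lexicon entries and function name below are placeholders, not the study's actual vocabulary or code.

```python
from collections import Counter

# Placeholder emotional lexicon; the study's vocabulary comes from its
# psychological counselling database.
EMOTION_LEXICON = {"开心", "难过", "害怕", "生气", "委屈"}

def emotion_features(segmented_utterance):
    """Build per-word (appearance, frequency) features for one utterance.

    `segmented_utterance` is a list of tokens assumed to come from a
    Chinese word segmenter such as ICTCLAS (segmentation itself is
    outside this sketch)."""
    counts = Counter(t for t in segmented_utterance if t in EMOTION_LEXICON)
    return {word: (int(counts[word] > 0), counts[word])
            for word in sorted(EMOTION_LEXICON)}

tokens = ["我", "很", "难过", "也", "有点", "生气", "真的", "难过"]
print(emotion_features(tokens))
```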
MRLab: Virtual-Reality Fusion Smart Laboratory Based on Multimodal Fusion
Published in International Journal of Human–Computer Interaction, 2023
Hongyue Wang, Zhiquan Feng, Xiaohui Yang, Liran Zhou, Jinglan Tian, Qingbei Guo
The tests were conducted in MRLab using a smart glove, a head-mounted device, and a computer with an Intel(R) Core(TM) i7-10875H CPU and an Nvidia RTX 2060 GPU. MRLab predominantly uses the Unity engine and the C# programming language, and the analysis procedure is based on the PyTorch deep learning framework, developed in PyCharm. Speech-channel intention is analyzed through the Baidu speech recognition API and Chinese Jieba word-segmentation lexical analysis to recognize user speech.
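As a rough illustration of the speech-channel step, the sketch below assumes the Baidu speech recognition API has already returned a recognized text string and applies Jieba's part-of-speech segmentation to it. The verb/noun keyword heuristic is our own simplification, not MRLab's published intention-analysis logic.

```python
import jieba.posseg as pseg  # pip install jieba

def parse_speech_intent(recognized_text: str):
    """Segment recognized speech text and pull out verb/noun keywords
    as crude intention cues. The speech-to-text step (Baidu API) is
    assumed to have already produced `recognized_text`."""
    words = [(w.word, w.flag) for w in pseg.lcut(recognized_text)]
    # Keep verbs ('v...') and nouns ('n...') as candidate action/object terms.
    keywords = [w for w, flag in words if flag.startswith(("v", "n"))]
    return words, keywords

# Example: "Turn on the microscope and adjust the focus."
words, keywords = parse_speech_intent("打开显微镜并调整焦距")
print(words)
print(keywords)
```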