Conclusions and Future Work
Published in Shalom Lappin, Deep Learning and Linguistic Representation, 2020
Combinatory Categorial Grammar (CCG) (Steedman, 2000) augments the functions of simple categorial grammar with a set of non-logical combinator operations on categories. These increase the expressive power of the grammar, allowing it to raise or change the type of an expression, and to permute the order of constituents in a sentence. Like type logical grammars, CCG runs syntactic and semantic derivations in parallel, with the latter producing typed λ-terms, which can be interpreted model theoretically, as in Montague grammar (Montague, 1974).
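A minimal sketch (not from the chapter) of how the parallel syntactic and semantic derivations work for the two basic application combinators: each lexical entry pairs a category string with a λ-term (here an ordinary Python lambda), and combining categories applies the semantics. The toy lexicon and the string-based semantics are illustrative assumptions.

```python
# Toy CCG fragment: categories are strings; X/Y seeks an argument Y to
# its right, X\Y seeks one to its left. Semantics are Python lambdas
# that build predicate-logic strings (an illustrative simplification).
lexicon = {
    "John":  ("NP", "john"),
    "Mary":  ("NP", "mary"),
    "sees":  (r"(S\NP)/NP", lambda y: lambda x: f"see({x},{y})"),
}

def forward_apply(functor, argument):
    """X/Y combined with Y yields X; semantics f applied to a."""
    cat, sem = functor
    arg_cat, arg_sem = argument
    assert cat.endswith("/" + arg_cat), "category mismatch"
    result_cat = cat[: -(len(arg_cat) + 1)]
    if result_cat.startswith("(") and result_cat.endswith(")"):
        result_cat = result_cat[1:-1]
    return (result_cat, sem(arg_sem))

def backward_apply(argument, functor):
    """Y combined with X\Y (argument on the left) yields X."""
    cat, sem = functor
    arg_cat, arg_sem = argument
    assert cat.endswith("\\" + arg_cat), "category mismatch"
    result_cat = cat[: -(len(arg_cat) + 1)]
    if result_cat.startswith("(") and result_cat.endswith(")"):
        result_cat = result_cat[1:-1]
    return (result_cat, sem(arg_sem))

# "John sees Mary": the verb first consumes its object to the right,
# then its subject to the left, building the λ-term as it goes.
vp = forward_apply(lexicon["sees"], lexicon["Mary"])   # (S\NP, λx.see(x,mary))
sentence = backward_apply(lexicon["John"], vp)         # (S, see(john,mary))
```

The syntactic combination and the semantic β-reduction happen in lockstep, which is the "parallel derivations" property the excerpt describes; a full CCG would add composition and type-raising combinators in the same style.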
LIA: A Virtual Assistant that Can Be Taught New Commands by Speech
Published in International Journal of Human–Computer Interaction, 2019
LIA uses a Combinatory Categorial Grammar (CCG) (Steedman & Baldridge, 2011) parser to map user commands to logical forms, which the server can then handle and execute. CCG is useful here because of its tight coupling of syntax and semantics (Zettlemoyer & Collins, 2005). A CCG semantic parser is composed of a lexicon, a set of grammar rules, and a trained parameter vector. The lexicon maps words to syntactic categories and specifies how each word can combine with adjacent words and phrases during parsing to yield both a new syntactic category and a logical form. The logical form represents the semantics of the sentence, which can later be executed. For example, the word “set” is mapped to the syntactic category ((S\PP StringV)/MutableField): it first takes the argument to the right of the rightmost slash (“MutableField” in the example), then the argument to the left of the backslash (“PP StringV”), and returns the type at the left edge of the category (“S”). In addition, “set” is mapped to the logical form (lambda x y (setFieldFromFieldVal x y)), which defines the function to be applied when the word is parsed, using the arguments collected during those combinations. The set of grammar rules corresponds to standard function operations, such as application and composition. Our grammar also includes a small number of unary rules that represent common implicit conversions between types. The parameter vector, trained with machine learning, is used by the CCG parser to disambiguate among multiple possible parses and decide which parse is most plausible in a given context.
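The derivation for the “set” entry can be sketched as follows. This is a hypothetical illustration, not LIA's implementation: the category and logical-form names come from the excerpt, but the field name `email.subject`, the value `"hello"`, and the string-building mechanics are assumptions.

```python
# The lexicon entry for "set" from the excerpt: the category
# (S\PP StringV)/MutableField consumes a MutableField to its right
# first, then a "PP StringV" to its left, and its logical form is
# (lambda x y (setFieldFromFieldVal x y)).
set_entry = (r"(S\PP_StringV)/MutableField",
             lambda x: lambda y: f"setFieldFromFieldVal({x},{y})")

field = ("MutableField", "email.subject")   # hypothetical field argument
value = ("PP_StringV", '"hello"')           # hypothetical value argument

# Forward application consumes the MutableField to the right ...
sem = set_entry[1]
vp = (r"S\PP_StringV", sem(field[1]))
# ... then backward application consumes the PP StringV to the left,
# yielding a sentence S whose semantics is an executable call.
s = ("S", vp[1](value[1]))
```

After both applications, `s` holds the fully saturated logical form `setFieldFromFieldVal(email.subject,"hello")`, which is the kind of expression the server would execute.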
Survey on frontiers of language and robotics
Published in Advanced Robotics, 2019
T. Taniguchi, D. Mochihashi, T. Nagai, S. Uchida, N. Inoue, I. Kobayashi, T. Nakamura, Y. Hagiwara, N. Iwahashi, T. Inamura
To conduct the logical inferences described earlier, syntactic parsing must first be performed accurately enough to be suitable for real-world communication. For example, the robot in Figure 1 infers the latent syntactic structure of the given sentence and understands that it needs to bring ‘the bottle’, not ‘the kitchen’. Syntactic parsing is indispensable for semantic parsing, semantic role identification, and other semantics-driven tasks in NLP. In current NLP practice, syntactic parsing can essentially be categorized as follows [58]: (a) dependency parsing; (b) constituent parsing, based on formalisms such as context-free grammars (CFG) and tree adjoining grammars (TAG); and (c) combinatory categorial grammar (CCG) parsing.