Natural Language Processing Associated with Expert Systems
Published in Jay Liebowitz, The Handbook of Applied Expert Systems, 2019
More recent work in the area of discourse analysis of written texts includes that of Hahn, Hobbs, and Schubert. In the TOPIC system, Hahn deals with the detection of the “thematic progression patterns” that represent the global structure of the input texts: this is mainly performed by analyzing the semantic links between the concepts represented by nominal expressions in the texts. TOPIC is then able to detect various “coherence phenomena” that bring out congruent aspects of the different “chunks of knowledge” isolated inside an NL text, such as the “constant theme,” i.e., showing that a variety of facts are, in reality, related to the same topic.

Hobbs’ work on “interpretation as abduction” rests on the theoretical assumption that the coherence of discourse follows from semantic relationships between the information conveyed by successive statements. Abductive reasoning is based on inferences of the type “if A implies B and B has been observed, hypothesize A,” and relies on computationally intensive inference techniques in a backward-chaining style. After parsing a given statement into quasi-logical form (see subsection 3.5.1), Hobbs’ TACITUS system tries to prove, while allowing a minimal set of assumptions to be made, that this logical form follows from the preceding statements and from adherence to some general explanatory schema for the global text. The optimal (most complete) set of assumptions supporting the proof can then be regarded as the best “explanation” of the text.

Schubert’s “Episodic Logic” (EL) is based on a Montague-style representation coupling syntactic forms and logical forms, while incorporating from situation semantics the idea that sentences describe situations (events, states, narrative episodes, eventualities, etc.). With respect to the text’s coherence, EL makes use, among other things, of “episodic variables,” which make implicit temporal and causal relationships between situations fully explicit.
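The backward-chaining abductive pattern described above can be sketched in a few lines. This is an illustrative toy, not Hobbs’ TACITUS: the rule base, the `explanations` function, and the example literals are all assumptions introduced here.

```python
# Minimal, illustrative sketch of abduction in a backward-chaining
# style ("if A implies B and B has been observed, hypothesize A").
# The rule base and function names are hypothetical, not TACITUS code.

RULES = [
    ("rain", "wet_grass"),       # rain implies wet_grass
    ("sprinkler", "wet_grass"),  # sprinkler implies wet_grass
    ("clouds", "rain"),          # clouds implies rain
]

def explanations(observation, rules, facts=frozenset()):
    """Backward-chain from an observation to candidate hypothesis sets."""
    if observation in facts:
        return [set()]                      # already known: assume nothing
    causes = [a for a, b in rules if b == observation]
    if not causes:
        return [{observation}]              # nothing implies it: assume it
    results = []
    for cause in causes:                    # try each rule whose head matches
        results.extend(explanations(cause, rules, facts))
    return results

# Observing "wet_grass" yields two competing explanations; a "best"
# explanation would then be chosen by minimizing the assumptions made.
print(explanations("wet_grass", RULES))    # [{'clouds'}, {'sprinkler'}]
```

Choosing among the returned hypothesis sets, e.g., by preferring the fewest (or cheapest) assumptions, mirrors the idea of the minimal assumption set serving as the best explanation of the text.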
LIA: A Virtual Assistant that Can Be Taught New Commands by Speech
Published in International Journal of Human–Computer Interaction, 2019
LIA uses the Combinatory Categorial Grammar (CCG) parser (Steedman & Baldridge, 2011) to map user commands to logical forms, which can then be handled and executed by the server. CCG is useful because of its tight coupling of syntax and semantics (Zettlemoyer & Collins, 2005). A CCG semantic parser is composed of a lexicon, a set of grammar rules, and a trained parameter vector. The lexicon maps words to syntactic categories and specifies how each word can combine with adjacent words and phrases during parsing to yield both a new syntactic category and a logical form. The logical form represents the semantics of the sentence, which can later be executed. For example, the word “set” is mapped to the syntactic category ((S\PP StringV)/MutableField): the argument type to the right of the forward slash (“MutableField” in the example) is consumed first from the right, the argument type following the backslash (“PP StringV”) is consumed from the left, and the return type appears on the far left (“S”). In addition, the word “set” is mapped to the logical form (lambda x y (setFieldFromFieldVal x y)), which defines the function to be applied when parsing the word, using the arguments obtained from these mappings. The set of grammar rules corresponds to standard function operations, such as application and composition. Our grammar also includes a small number of unary rules that represent common implicit conversions between types. The trained parameter vector (learned via machine learning) is used by the CCG parser to disambiguate among multiple possible parses and decide which parse is the most relevant in a given context.
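The lexicon-plus-combinators idea can be illustrated with a toy sketch. This is not LIA’s implementation: the category string and the setFieldFromFieldVal name follow the example in the text, but the lexicon entries for “subject” and “to hello”, the tuple encoding of logical forms, and the helper functions are assumptions made here for illustration.

```python
# Toy sketch of a CCG-style lexicon and application combinators.
# The "set" category and setFieldFromFieldVal come from the text;
# everything else (entries, tuple-encoded logical forms, helper
# names) is an illustrative assumption.

LEXICON = {
    # word/phrase: (syntactic category, logical form)
    "set": ("(S\\PP_StringV)/MutableField",
            lambda x: lambda y: ("setFieldFromFieldVal", x, y)),
    "subject": ("MutableField", "email.subject"),
    "to hello": ("PP_StringV", "hello"),
}

def forward_apply(fn_cat, fn_sem, arg_cat, arg_sem):
    """X/Y  Y  =>  X: the functor consumes an argument to its right."""
    result, _, expected = fn_cat.partition("/")
    return (result, fn_sem(arg_sem)) if expected == arg_cat else None

def backward_apply(arg_cat, arg_sem, fn_cat, fn_sem):
    """Y  X\\Y  =>  X: the functor consumes an argument to its left."""
    result, _, expected = fn_cat.strip("()").rpartition("\\")
    return (result, fn_sem(arg_sem)) if expected == arg_cat else None

# Combine "set" with "subject" (forward application), then with the
# PP argument (backward application); surface word order is glossed
# over in this sketch.
cat1, sem1 = forward_apply(*LEXICON["set"], *LEXICON["subject"])
cat2, sem2 = backward_apply(*LEXICON["to hello"], cat1, sem1)
print(cat2, sem2)   # S ('setFieldFromFieldVal', 'email.subject', 'hello')
```

The final category S with a fully applied logical form corresponds to a complete parse whose semantics the server could then execute; a real parser would score the competing derivations with the trained parameter vector.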