Introduction
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
About 20 U.S. corporations banded together, jointly funding the Microelectronics and Computer Technology Corporation (MCC). U.S. antitrust laws were relaxed to facilitate this cooperation. MCC embraced AI, reportedly becoming the leading customer for both Symbolics and LMI. MCC projects included two parallel NLU efforts; work on intelligent advising; and CYC (as in encyclopedic, and later spelled Cyc), Douglas Lenat’s ambitious project to build a commonsense knowledge base that other programs could exploit. In 1984, Lenat predicted that by 1994 CYC would be intelligent enough to educate itself. Five years later, CYC was reported to be on schedule and about to “spark a vastly greater renaissance in [machine learning]” (Lenat 1989, p. 257).
Implementation of Mental-Image Based Understanding
Published in Masao Yokota, Natural Language Understanding and Cognitive Robotics, 2019
The major modules of the conversation management system, shown in Fig. 14-1, are roughly defined as follows, although the system is still under development.

(M1) Natural language understanding module (inside artificial intelligence): interprets an input text into mental image description language (Lmd) expressions and selects the most plausible interpretation by employing all kinds of knowledge, especially the word-meaning definitions intrinsic to this module.

(M2) Thing-specific model: consists of knowledge about each person, such as his/her beliefs, physical mobility, mental tendencies, and social activities, or about each non-personal thing (e.g., a flower shop, bank, or school), such as its function. Every knowledge piece is represented in Lmd (e.g., (14-17)–(14-19) below).

(M3) Problem finding/solving module (inside artificial intelligence): finds event gaps in Lmd expressions and cancels them by employing commonsense knowledge pieces, such as the postulate of identity of assigned values, type 1 (PV1) and type 2 (PV2), together with the thing-specific models. When problem solving succeeds, the solution is sent to the animation generator in the form of an Lmd expression; otherwise, the dialogue partner (or another source) is asked for further information.

(M4) Animation generator: animates the solution in Lmd sent from the problem solver.

The details of the conversation management system have already been published in (Khummongkol and Yokota, 2016); the focus here is therefore on mental-image based understanding by the conversation management system and its evaluation based on a psychological experiment.
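The data flow among the four modules can be sketched in code. This is a purely illustrative skeleton under the assumption that each module is a component passing Lmd expressions downstream; all class names, method names, and the dictionary representation of Lmd are hypothetical and not the actual system's API.

```python
# Hypothetical sketch of the M1-M4 pipeline described above.
# All names are illustrative; Lmd expressions are stubbed as dicts.

class NLUModule:
    """(M1) Interprets input text into an Lmd expression (stubbed)."""
    def interpret(self, text):
        # In the real system, word-meaning definitions would be used
        # to pick the most plausible interpretation.
        return {"lmd": f"interp({text!r})"}

class ThingModel:
    """(M2) Per-thing knowledge (beliefs, functions, ...) in Lmd."""
    def __init__(self):
        self.knowledge = {}

class ProblemSolver:
    """(M3) Finds event gaps and cancels them via commonsense
    postulates (PV1, PV2) and the thing-specific models (stubbed)."""
    def __init__(self, things):
        self.things = things
    def solve(self, lmd):
        gap_free = True  # stub: assume every gap can be cancelled
        return lmd if gap_free else None

class AnimationGenerator:
    """(M4) Animates the Lmd solution from the problem solver."""
    def render(self, lmd):
        return f"animation({lmd['lmd']})"

def converse(text):
    lmd = NLUModule().interpret(text)
    solution = ProblemSolver(ThingModel()).solve(lmd)
    if solution is None:
        # Mirrors the fallback in (M3): ask for more information.
        return "request: further information"
    return AnimationGenerator().render(solution)
```

The point of the sketch is the control flow, not the internals: M1 feeds M3, M3 either succeeds (handing an Lmd expression to M4) or triggers a request back to the dialogue partner.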
Projection: a mechanism for human-like reasoning in Artificial Intelligence
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
Projection provides a way to connect models of knowledge to lower-level data, and can bridge the symbolic and the sub-symbolic, so it has implications for concepts and commonsense knowledge. Connecting commonsense knowledge bases to the world is a major challenge for AI. There have been numerous efforts to create knowledge bases designed for commonsense reasoning; however, a recent survey of benchmarks for commonsense reasoning found that: ‘Despite the availability of common and commonsense knowledge resources discussed in Section 3 [their paper], none of them are actually applied to achieve state-of-the-art performance on the benchmark tasks, and only a few of them are applied in any recent approaches’ (Storks et al., 2019, Sec. 4.4). Instead, the survey found that people train deep learning models on various datasets, so the knowledge remains implicit in the learned model and its pre-trained word embeddings.
Smart Karyotyping Image Selection Based on Commonsense Knowledge Reasoning
Published in Cybernetics and Systems, 2022
Yufeng Xu, Zhe Ding, Lei Shi, Juan Wang, Linfeng Yu, Haoxi Zhang, Edward Szczerbicki
Commonsense knowledge (CSK) is information that humans usually have that helps them make sense of situations in daily life (Ilievski et al. 2021). It has been predominantly created directly from human input or extracted from text (Lenat et al. 1990; Liu and Singh 2004; Carlson et al. 2010). CSK can generally be considered to be possessed by most people, and, according to the Gricean maxim (Grice 1975), it is usually omitted in (written or oral) communication. People take CSK for granted since they understand it naturally. CSK is also exceedingly large in both amount and diversity. Based on these characteristics, CSK is defined as a tremendous amount and variety of knowledge of default assumptions about the world, which is shared by (possibly a group of) people and seems so fundamental and obvious that it usually does not explicitly appear in people’s communications (Zang et al. 2013).

Commonsense knowledge differs from encyclopedic knowledge in that it deals with general knowledge rather than the details of specific entities (Tandon, Varde, and de Melo 2018). Most regular knowledge bases (KBs) contribute millions of facts about entities such as geopolitical entities or people but fail to provide fundamental knowledge, such as the notion that a child is likely too young to have a master’s degree in mathematics. The fact that commonsense knowledge is often implicit presents a challenge for question-answering (QA) approaches and automated natural language processing (NLP), in that extraction and learning algorithms cannot rely on the commonsense knowledge being available directly in the text (Ilievski et al. 2021). Commonsense is elusive because it is scarcely and often only implicitly expressed, it is affected by reporting bias (Gordon and Van Durme 2013), and it may require considering multiple modalities.
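The child-with-a-master's-degree example can be made concrete with a toy default-assumption check. This is not any cited system (not Cyc, ConceptNet, or NELL); the tiny triple store, the keys in it, and the age thresholds are all invented for illustration, to show how explicit CSK lets a program flag a claim that a regular entity-centric KB would not question.

```python
# Toy commonsense store: a few (subject, relation) -> value entries
# encoding default assumptions. All entries are illustrative.
CSK = {
    ("child", "typical_age_range"): (0, 12),          # years
    ("masters_degree", "minimum_typical_age"): 22,    # years
}

def plausible_that_has(person_type, attainment):
    """Return True if the default assumptions make the claim
    'a <person_type> has a <attainment>' plausible."""
    lo, hi = CSK[(person_type, "typical_age_range")]
    min_age = CSK[(attainment, "minimum_typical_age")]
    # Default-assumption check: the typical age range must reach
    # the minimum typical age for the attainment.
    return hi >= min_age

print(plausible_that_has("child", "masters_degree"))  # False
```

An entity KB stores millions of facts like "person X holds degree Y" but nothing that makes the child case implausible; the point of CSK resources is precisely to make such defaults explicit and machine-usable.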