Logic
Published in Jay Liebowitz, The Handbook of Applied Expert Systems, 2019
First-order logic (FOL), first formalized by the German mathematician Gottlob Frege in 1879, has played an important role in the development of mathematics, computer science, and artificial intelligence. FOL extends the correct forms of reasoning of syllogisms and propositional logic by defining a more general ontological view of the world in which the building blocks are objects and relations (predicates) among objects. First-order logic was introduced to AI by John McCarthy in 1958 (McCarthy, 1958). First-order logic assumes that the world consists of objects, each with properties that distinguish it from other objects, and of relationships among the objects. Some of these relationships are functions that identify objects and some are relations that characterize a subset of objects. The domain can be any set. Universal and existential quantification are used to define relations over elements of potentially infinite domains. A term denotes an element of the domain; predicates take one of the values true or false. First-order logic is more expressive than propositional logic: we can express facts about the world that cannot be expressed in propositional logic. Proof procedures for first-order logic have been developed that are sound and complete, although when particular theories (e.g., lists, arithmetic) are introduced, they may become incomplete. Proof procedures are in general semi-decidable: if a formula is valid, there are proof methods that will establish this fact, but if the formula is not valid, these same methods may run forever without detecting it. Two such procedures are resolution and mathematical induction.
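The semantics of quantification described above can be made concrete over a small finite domain. The following is a minimal sketch (the domain, the `loves` relation, and all names are invented for illustration): each quantifier becomes a brute-force check over the domain, which is exactly why evaluation is straightforward for finite domains but proof procedures are only semi-decidable for infinite ones.

```python
# Hedged sketch (names and facts are invented): evaluating quantified
# first-order formulas by brute force over a small finite domain.

domain = {"alice", "bob", "carol"}
loves = {("alice", "bob"), ("bob", "carol"), ("carol", "bob")}  # binary relation

def forall(pred):
    """Universal quantifier: pred(x) holds for every element of the domain."""
    return all(pred(x) for x in domain)

def exists(pred):
    """Existential quantifier: pred(x) holds for at least one element."""
    return any(pred(x) for x in domain)

# "Everyone loves someone": forall x. exists y. loves(x, y)
everyone_loves_someone = forall(lambda x: exists(lambda y: (x, y) in loves))

# "Someone is loved by everyone": exists y. forall x. loves(x, y)
someone_loved_by_all = exists(lambda y: forall(lambda x: (x, y) in loves))

print(everyone_loves_someone, someone_loved_by_all)  # True False
```

Note how nesting order matters: the first formula holds in this interpretation while the second does not, even though both use the same relation.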
Toward a General Logicist Methodology for Engineering Ethically Correct Robots
Published in Wendell Wallach, Peter Asaro, Machine Ethics and Robot Ethics, 2020
Selmer Bringsjord, Konstantine Arkoudas, Paul Bello
We move up to first-order logic when we allow the quantifiers ∃x (“there exists at least one thing x such that …”) and ∀x (“for all x …”); the first is known as the existential quantifier, and the second as the universal quantifier. We also allow a supply of variables, constants, relations, and function symbols. Figure 2 presents a simple first-order-logic theorem in NDL that uses several concepts introduced to this point. It proves that Tom loves Mary, given certain helpful information.
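The NDL proof the authors reference is not reproduced here, but the flavor of such a derivation can be sketched in ordinary code: given ground facts and one universally quantified rule, a single forward-chaining step derives the goal. The facts, the rule, and the predicate names below are invented for illustration and do not come from Figure 2.

```python
# Hedged illustration (not NDL): deriving Loves(tom, mary) by one
# forward-chaining step. All facts and the rule are invented examples.

facts = {("Knows", "tom", "mary"), ("Kind", "mary")}

def apply_rule(facts):
    """One pass of the rule: forall x, y. Knows(x, y) & Kind(y) -> Loves(x, y)."""
    derived = set(facts)
    for f in facts:
        if f[0] == "Knows":
            _, x, y = f
            if ("Kind", y) in facts:          # both premises hold for this x, y
                derived.add(("Loves", x, y))  # so the conclusion is derivable
    return derived

closure = apply_rule(facts)
print(("Loves", "tom", "mary") in closure)  # True
```

The derivation is interpretable in the same sense as a natural-deduction proof: the conclusion can be traced back to the two premises that licensed it.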
The Semantics of Logic Programs
Published in Pascal Hitzler, Anthony Seda, Mathematical Aspects of Logic Programming Semantics, 2016
Of particular interest is the so-called Herbrand preinterpretation of a program. Its importance rests on the fact that, for many purposes, restricting to Herbrand preinterpretations causes no loss of generality.4 For example, in classical first-order logic, a set of clauses has a model if and only if it has a Herbrand model. Indeed, in many cases in the literature on the subject, discussions of logic programming semantics refer only to Herbrand (pre)interpretations and Herbrand models.
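The construction behind a Herbrand preinterpretation can be sketched directly: the Herbrand universe is the set of all ground terms built from the program's constants and function symbols, and the Herbrand base is the set of all ground atoms over that universe. The tiny signature below (one constant, one unary function, one unary predicate) is an assumption made for illustration; because of the function symbol, the full universe is infinite, so the sketch only enumerates terms up to a fixed nesting depth.

```python
# Hedged sketch: Herbrand universe and Herbrand base for an illustrative
# signature with one constant "0", one unary function "s", one predicate "nat".

constants = {"0"}
functions = {("s", 1)}       # (symbol, arity); only arity 1 handled here
predicates = {("nat", 1)}    # (symbol, arity)

def herbrand_universe(depth):
    """Ground terms up to a given nesting depth (the full universe is infinite)."""
    terms = set(constants)
    for _ in range(depth):
        terms |= {f"{f}({t})" for (f, n) in functions if n == 1 for t in terms}
    return terms

def herbrand_base(universe):
    """All ground atoms: each predicate applied to each ground term."""
    return {f"{p}({t})" for (p, n) in predicates if n == 1 for t in universe}

U = herbrand_universe(2)
print(sorted(U))                  # ['0', 's(0)', 's(s(0))']
print(sorted(herbrand_base(U)))   # ['nat(0)', 'nat(s(0))', 'nat(s(s(0)))']
```

A Herbrand interpretation is then simply a subset of this base, which is what makes the restriction so convenient for logic programming semantics.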
Towards knowledge graph reasoning for supply chain risk management using graph neural networks
Published in International Journal of Production Research, 2022
Edward Elson Kosasih, Fabrizio Margaroli, Simone Gelli, Ajmal Aziz, Nick Wildgoose, Alexandra Brintrup
Two approaches to perform reasoning in AI exist: symbolic and connectionist (Russell et al. 2010). Symbolic models are built on first-order logic to derive new information from the existing set of facts stored in a knowledge base. This class of algorithms is inherently interpretable, as one can trace a finding back to the original set of knowledge that the model used to derive a conclusion. For instance, consider the logic statement ‘UPS uses GE Aviation's product’. To decide whether the statement is true, a symbolic model would need to trace the existence of paths in a set of facts in a knowledge base. If one knows that ‘GE Aviation supplies to Boeing’ (supplies_to (GE Aviation, Boeing)) and ‘Boeing supplies to UPS’ (supplies_to (Boeing, UPS)), then with certain assumptions one can reason that ‘UPS is a customer of GE Aviation’ (is_customer (UPS, GE Aviation)). One can thus interpret how this decision is made by tracing back to the original two facts that were used in the reasoning process. However, while symbolic approaches have the advantage of interpretability, constructing facts in a knowledge base is quite laborious. Such a logical reasoning approach also requires the knowledge base to be comprehensive: if any single fact is missing (e.g. supplies_to (Boeing, UPS)), the model cannot derive the corresponding conclusion (e.g. is_customer (UPS, GE Aviation)).
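The symbolic reasoning step described above can be sketched as a single chaining rule over the two supplies_to facts from the example. The rule body below (two-hop chaining to is_customer) is one plausible reading of the paper's "certain assumptions"; the code itself is illustrative, not the authors' implementation.

```python
# Hedged sketch: deriving is_customer(UPS, GE Aviation) from supplies_to facts.
# Rule (an assumption): supplies_to(a, b) & supplies_to(b, c) -> is_customer(c, a)

supplies_to = {("GE Aviation", "Boeing"), ("Boeing", "UPS")}

def derive_customers(supplies_to):
    """Apply the two-hop chaining rule to every pair of matching facts."""
    return {(c, a)
            for (a, b1) in supplies_to
            for (b2, c) in supplies_to
            if b1 == b2}

print(("UPS", "GE Aviation") in derive_customers(supplies_to))  # True

# Brittleness: drop one fact and the conclusion is no longer derivable.
print(("UPS", "GE Aviation") in derive_customers({("GE Aviation", "Boeing")}))  # False
```

The second call illustrates the limitation noted above: symbolic derivation is fully interpretable, but a single missing fact silences the conclusion entirely.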
On searching explanatory argumentation graphs
Published in Journal of Applied Non-Classical Logics, 2020
The fundamental problem of learning a logical structure from cases in a probabilistic setting has been notably addressed in statistical relational learning (SRL) (Getoor & Taskar, 2007). SRL sits at the crossroads of formalisms for logical reasoning, principled probabilistic and statistical approaches, and machine learning. It has yielded successful integrations such as Probabilistic Relational Models (Getoor et al., 2007) and Markov Logic Networks (Richardson & Domingos, 2006). In these approaches, logics are used to structure in a qualitative manner the probabilistic relations amongst entities. Typically, (a subset of) first-order logic formally represents qualitative knowledge describing the structure of a domain in a general manner (using universal quantification), while techniques from graphical models (Koller & Friedman, 2009), such as Bayesian networks (Getoor et al., 2007) or Markov networks (Richardson & Domingos, 2006), are applied to handle probability measures on the structured entities. Although argumentation plays no role in these approaches, and they concentrate on capturing data in its relational form (while we deal only with an abstract and flat data representation induced by our setting of abstract argumentation), one can take inspiration from these combinations of graphical models and logic-based systems and propose frameworks for probabilistic argumentation that likewise benefit from graphical models. For example, the probabilistic abstract argumentation setting set forth by Riveret, Korkinof et al. (2015) and Riveret, Pitt et al. (2015), which uses the same labelling framework for probabilistic argumentation employed in this paper, underlies a combination of abstract argumentation and the graphical model of Boltzmann machines.
By learning the abstract argumentation graph and combining it with probabilistic graphical models, the learnt explanatory argumentation model would not only explain outcomes but could also be employed to predict them.