Deep Probabilistic Machine Learning for Intelligent Control
Published in Alex Martynenko, Andreas Bück, Intelligent Control in Drying, 2018
In this introductory chapter we outline some of the ideas and technologies behind the development of AI. The acronym AI stands for artificial intelligence, which is a diverse field of study in itself. Much of it concerns strategies and technologies for enabling applications that require advanced control. AI is sometimes divided into two approaches, namely symbolic AI and sub-symbolic AI. Symbolic approaches are concerned with reasoning systems based on predefined knowledge representations that are encapsulated in symbols. Such systems can then apply an explicit logical inference method to derive conclusions. This type of AI has dominated much of the field at least since the 1970s. The area is now often called good old-fashioned AI, or GOFAI for short.
Anthropic AI
Published in Sam Freed, AI and Human Thought and Emotion, 2019
Cognitive models (such as SOAR) and classic symbolic AI propose computational models for various faculties that underlie individual human activity (Laird & Rosenbloom, 1996; Sun, 2008). These models (when used as a scientific tool rather than for technology) are verified by comparing their performance to human performance on similar tasks. In technology, they form the basis of some Good Old-Fashioned AI (GOFAI), especially heuristic and satisficing algorithms.
An Overnight Sensation, after 60 Years
Published in Calum Chace, Artificial Intelligence and the Two Singularities, 2018
For the next decade, research was therefore focused on an approach called symbolic AI, in which researchers tried to reduce human thought to the manipulation of symbols, such as language and maths, which could be made comprehensible to computers. This was dubbed good old-fashioned AI, or GOFAI.
Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities
Published in Information Systems Management, 2022
Christian Meske, Enrico Bunde, Johannes Schneider, Martin Gersch
The aforementioned systems, such as knowledge-based or expert systems, are referred to as symbolic AI, or Good Old-Fashioned AI (GOFAI), since human knowledge was encoded through rules in a declarative form (Haugeland, 1985). With the turn of the millennium and discussions of “new-paradigm intelligent systems” (Gregor & Yu, 2002) such as artificial neural networks, it was recognized that the latter are typically not capable of inherently declaring the knowledge they contain, nor of explaining the reasoning processes they go through. In that context, it was argued that explanations could be obtained indirectly, e.g., through sensitivity analysis (Rahman et al., 1999), which derives conclusions from output variations caused by small changes to a particular input (Gregor & Yu, 2002). Apart from a few exceptions (e.g., Eiras-Franco et al., 2019; Giboney et al., 2015; Martens & Provost, 2014), most publications since then on the explainability of AI systems, or “Explainable Artificial Intelligence” (XAI), have appeared outside the information systems community, mostly in computer science. The existing IS literature is valuable but, with its peak in the 1990s and early 2000s, comparatively dated, which motivates our call for more IS research on the explainability of AI.
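The sensitivity analysis described above can be sketched in a few lines. This is an illustrative example only, not the procedure from Rahman et al. (1999): the `model` function below is a hypothetical stand-in for an opaque trained network, and finite differences approximate how strongly each input influences the output.

```python
def model(x):
    # Hypothetical opaque model: a fixed linear function stands in
    # for a trained neural network whose internals we cannot inspect.
    return 0.8 * x[0] - 0.3 * x[1] + 2.0 * x[2]

def sensitivity(model, x, eps=1e-4):
    """Finite-difference sensitivity analysis: perturb one input at a
    time by eps and record how much the model's output moves."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] += eps
        scores.append((model(x_pert) - base) / eps)
    return scores

print(sensitivity(model, [1.0, 1.0, 1.0]))
```

Inputs with larger-magnitude scores influence the output more strongly, which is the kind of indirect explanation the passage refers to.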