A Brief History of Artificial Intelligence
Published in Ron Fulbright, Democratization of Expertise, 2020
In defining computability, Turing (1936) described what would become known as Turing machines. A Turing machine is an abstract machine that manipulates information in the form of symbols and can simulate any algorithm. Shannon (1948) established the field of information theory (the study of the quantification, storage, and communication of information), although Hartley (1928) and Nyquist (1924; 1928) had earlier introduced the idea of information as a quantifiable, measurable quantity. Bush (1945) imagined extending and enhancing human memory with an artificial associative memory system called the Memex. Turing (1950) asked whether machines themselves could think and offered the “imitation game” (later known as the “Turing Test”) as a way to decide whether a machine exhibits intelligence. McCarthy et al. (1955) coined the term artificial intelligence.
Formal Machine Learning Algorithms
Published in Richard M. Golden, Statistical Machine Learning, 2020
Any computation that can be simulated on a digital computer can be simulated on a Turing machine. A Turing machine is a special type of computer that updates states in a discrete state space in discrete time. The goal of this chapter is to provide a formal definition of a large class of machine learning algorithms that includes Turing machine computers as an important special case. These more general computing machines may operate in continuous state spaces and possibly even in continuous time. Examples of computing machines that cannot be exactly represented as Turing machines are common in the machine learning literature. For example, gradient descent algorithms generate updates to a learning machine’s parameter values in an uncountably infinite state space, whereas a Turing machine (like a digital computer) assumes a finite state space.
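To make the distinction concrete, here is a minimal sketch (not from the chapter) of gradient descent viewed as a discrete-time iterated map whose state is a parameter vector in the continuous space R^n; the objective, step size, and function names are illustrative assumptions.

```python
import numpy as np

def gradient_descent(grad, theta0, step_size=0.1, num_iters=100):
    """Discrete-time iteration over a continuous (uncountably infinite) state space R^n.

    Each update theta <- theta - step_size * grad(theta) moves the state to a new
    point in R^n, so the set of possible states is not finite.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(num_iters):
        theta = theta - step_size * grad(theta)
    return theta

# Hypothetical example: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta_star = gradient_descent(lambda th: 2.0 * th, theta0=[1.0, -2.0])
print(theta_star)  # approaches the minimizer at the origin
```

The time index is still discrete here; the point is only that the state (the parameter vector) ranges over a continuum rather than a finite set.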
Hamilton Cycles
Published in Kenneth H. Rosen, Graphs, Algorithms, and Optimization, 2005
The theory of NP-completeness is phrased in terms of decision problems, that is, problems with a yes or no answer (e.g., “is G hamiltonian?”). This is so that an algorithm can be modeled as a Turing machine, a theoretical model of computation. Although Turing machines are very simple, they can be constructed to execute all the operations that characterize modern random-access computers. Turing machines do not usually produce output other than yes or no. Thus, a Turing machine can be constructed to read in a graph and perform an exhaustive search for a hamilton cycle. If a cycle exists, it will be found, and the algorithm will report a yes answer. However, the exhaustive search will, in general, take an exponential amount of time.
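As a hypothetical illustration of such an exhaustive search (the graph representation and names below are not from the text), a backtracking procedure can answer the decision question with yes or no; its worst-case running time is exponential in the number of vertices.

```python
def is_hamiltonian(adj):
    """Exhaustive (backtracking) search for a hamilton cycle.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns True ("yes") if some cycle visits every vertex exactly once.
    """
    vertices = list(adj)
    if not vertices:
        return False
    start = vertices[0]

    def extend(path, visited):
        # All vertices used: the path is a hamilton cycle iff it closes back to the start.
        if len(path) == len(vertices):
            return start in adj[path[-1]]
        # Otherwise try every unvisited neighbour of the current endpoint.
        for v in adj[path[-1]]:
            if v not in visited and extend(path + [v], visited | {v}):
                return True
        return False

    return extend([start], {start})

# Hypothetical example: the 4-cycle on vertices 0..3 is hamiltonian.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print("yes" if is_hamiltonian(C4) else "no")
```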
Natural language processing (NLP) in management research: A literature review
Published in Journal of Management Analytics, 2020
Yue Kang, Zhao Cai, Chee-Wee Tan, Qian Huang, Hefu Liu
The germination period encompasses the beginnings of NLP research. Alan Turing first proposed the concept of the ‘Turing Machine’ in 1936. The ‘Turing Machine’, as the theoretical origin of the modern computer, prompted the birth of the electronic computer in 1946, providing a material basis for machine translation and, subsequently, NLP. Given the needs of machine translation, foundational research in NLP was carried out during this period. In 1948, Shannon applied the probabilistic model of discrete Markov processes to the automatic description of language. He then borrowed the concept of entropy from thermodynamics for probabilistic language processing, using it to measure the amount of information contained in human language. In the early 1950s, Kleene investigated finite automata and regular expressions, and in 1956 Chomsky proposed context-free grammar and applied it to NLP. This work directly led to two families of NLP techniques, one based on rules and the other on probability, and the two approaches sparked decades of dispute between rule-based and probabilistic methods in NLP. The birth of artificial intelligence in 1956 opened a further new chapter in NLP, as AI was gradually combined with other NLP technologies over the following decades, enriching the technical means of NLU and NLG and broadening the social application of NLP.
A computational journey in the true north
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
One of the dogmas in Computer Science is the principle of computational universality, and the attendant principle of simulation: ‘Given enough time and space, any general-purpose computer can, through simulation, perform any computation that is possible on any other general-purpose computer.’ Statements such as this are commonplace in the computer science literature, and serve as standard fare in undergraduate and graduate courses alike [98–100]. Sometimes the statement is restricted to the Turing Machine, and is referred to as the Church-Turing Thesis, as in: ‘A Turing machine can do everything that a real computer can do’ [101]. At other times the statement is made more generally about a Universal Computer (which is not to be confused with the more restricted Universal Turing Machine), as in: ‘It is possible to build a universal computer: a machine that can be programmed to perform any computation that any other physical object can perform’ [102]. I consider it one of my most meaningful contributions to have shown that such a Universal Computer cannot exist; this is the principle of nonuniversality in computation [103–111].