Introduction
Published in Joshua C. Gellers, Rights for Robots, 2020
As a rejoinder to Turing’s test, Searle (1980) presented the “Chinese Room” argument (McGrath, 2011, p. 134). In this thought experiment, Searle imagines himself locked in a room where he receives a large batch of Chinese writing. Searle, by stipulation, knows no Chinese. He then receives a second delivery of Chinese writing, only this time it includes instructions in English (his mother tongue) for matching the characters in this batch with characters from the first batch. Finally, Searle obtains a third document written in Chinese that includes English-language instructions on how to use the present batch to interpret and respond to characters in the previous two. After these exchanges, Searle also receives stories and accompanying questions in English, which he answers with ease. Through multiple iterations involving the manipulation of Chinese characters, along with receipt of continuously improved instructions written by people outside the room, Searle’s responses become indistinguishable from those of someone fluent in Chinese and just as good as his answers to the questions in English.
Legal Personhood for Artificial Intelligences
Published in Wendell Wallach, Peter Asaro, Machine Ethics and Robot Ethics, 2020
John Searle questioned the relevance of Turing’s Test with another thought experiment, which has come to be known as the Chinese Room. Imagine that you are locked in a room. Into the room come batches of Chinese writing, but you don’t know any Chinese. You are, however, given a rule book, written in English, in which you can look up the bits of Chinese by their shape. The rule book gives you a procedure for producing strings of Chinese characters that you send out of the room. Those outside the room are playing some version of Turing’s game. They are convinced that whatever is in the room understands Chinese. But you don’t know a word of Chinese; you are simply following a set of instructions (which we can call a program) based on the shape of Chinese symbols. Searle believes that this thought experiment demonstrates that neither you nor the instruction book (the program) understands Chinese, even though you and the program can simulate such understanding.
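The rule book Searle describes is, in essence, a lookup procedure, and it can be sketched in a few lines of code. The Python sketch below is purely illustrative: the symbols, rules and responses are invented here, not drawn from Searle. What it makes concrete is that the program returns plausible replies by matching shapes alone; meaning is represented nowhere in the code.

# A minimal sketch of the rule book as a program. The rule table is
# an invented example; the program compares symbols only by shape
# (string equality) and never represents what they mean.
RULE_BOOK = {
    "你好吗": "我很好",    # invented pairing: incoming shapes -> outgoing shapes
    "你会说中文吗": "会",  # invented pairing
}

def chinese_room(incoming: str) -> str:
    """Return whatever string the rule book pairs with the input.

    No parsing, translation or interpretation takes place: the
    function matches characters by shape, exactly as the person in
    the room does, and sends out the paired characters.
    """
    return RULE_BOOK.get(incoming, "请再说一遍")  # fallback: ask to repeat

print(chinese_room("你好吗"))  # a convincing reply, with no understanding

To anyone outside the room the output is fluent; inside, there is only string comparison. That gap between convincing behaviour and absent understanding is exactly what Searle’s argument trades on.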
Tool vs. agent: attributing agency to natural language generation systems
Published in Digital Creativity, 2018
Recall philosopher John Searle’s renowned ‘Chinese Room’ thought experiment, wherein an individual with no understanding of the Chinese language is locked in a room and given numerous batches of Chinese writing. The individual is then given a set of English-language rules (a ‘program’) that correlate particular Chinese inputs with appropriate Chinese responses. Working in this way, Searle imagines, the individual produces output indistinguishable from that of native Chinese speakers. Through the Chinese Room argument, Searle argues that computer programs (such as NLG systems) may manipulate formal symbols without understanding them to produce understandable texts, despite the programs themselves being incapable of intentionality. Nevertheless, ‘we often attribute “understanding” and other cognitive predicates by metaphor and analogy to cars, adding machines and other artifacts, but nothing is proved by such attributions’, Searle (1980, 419) explains: The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them.
A digital distraction? The role of digital tools and distributed intelligence in woodblock printmaking practice
Published in Digital Creativity, 2021
Pye (1995, 20) described the ‘workmanship of risk’, applied to manual outcomes where a design or object can be ruined at any time. Digital technologies, by contrast, offer precision, and this can be seen to erase the ‘charm of mistakes’. If the image is drawn with vector tools, then there can be no error in the simultaneously auto-generated CNC code. The artist may decide to deliberately programme in errors to allow for a sense of ‘humanness’, and the CNC device will not autocorrect these. A CNC device does only what it is told: programmed perfection or programmed imperfection. The machine makes no artistic judgement. This lack of intentionality recalls Searle’s (1980) Chinese Room argument, which holds that computer programmes can only perform the manipulation of symbols and are unable to give them any meaning. If digital tools follow the path of forced movements, can we then say these tools have ‘knowledge’? From a technological view, these are particular computer programmes written with the intention that software agents can operate machines to execute actions like human printmakers. However adept these systems appear, they possess a brittle intelligence that understands only a slice of the world. The laser cutter can base its actions only on local information. This ‘knowledge’ depends on the messages received and the print events described. When we consider the task of performing coordinated actions among a number of agents in a distributed environment, it does not suffice to talk only about the individual human artist’s knowledge; rather, we need to look at the states of knowledge across groups of agents. In Japan, it was the publisher who planned the route, timing and final issue of the print. In comparison to this powerful principal agent, the other artisans fall back into the role of executing agents, much as we might argue of the software code and laser cutter.
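The observation that a CNC device executes programmed imperfection as faithfully as programmed perfection can be made concrete. The Python sketch below is a hypothetical illustration, not taken from the article: the function names, the Gaussian jitter and the G-code parameters are all assumptions. Deliberate ‘errors’ are simply more instructions, which the machine carries out without correction or judgement.

import random

def jitter_toolpath(points, sigma=0.05, seed=None):
    """Return an (x, y) toolpath with small random offsets added.

    The 'imperfection' is itself deterministic once seeded: the
    machine will cut these errors exactly as programmed, making no
    judgement about them.
    """
    rng = random.Random(seed)
    return [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
            for x, y in points]

def to_gcode(points, feed=600):
    """Emit simple linear moves (G1) for the given points."""
    return "\n".join(f"G1 X{x:.3f} Y{y:.3f} F{feed}" for x, y in points)

# A square path, perturbed to allow for a sense of 'humanness'.
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
print(to_gcode(jitter_toolpath(square, sigma=0.1, seed=42)))

Whether the offsets are zero or not, the cutter’s behaviour is the same kind of thing: symbol manipulation driving forced movements, with no artistic judgement anywhere in the loop.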
The Making of Imago Hominis: Can We Produce Artificial Companions by Programming Sentience into Robots?
Published in The New Bioethics, 2022
A celebrated objection was raised by American philosopher John Searle (1980, pp. 1–2) through his Chinese Room Argument: Suppose a person was locked in a room, and he knew no Chinese. He was then given batches of Chinese characters and a manual containing the rules by which he may correlate one set of characters with another purely by their different shapes. He could thus receive questions in Chinese as inputs and, by following the instructions in the manual, assemble some Chinese characters as outputs. Suppose, further, that to every question asked, this person followed the manual accurately, and the manual was so well composed that the ‘answers’ would convince any native Chinese speaker that the person in the room was also a native Chinese speaker. Searle then compares the person in the Chinese Room to a computer, since both ‘produce the answers by manipulating uninterpreted formal symbols’ (Searle 1980, p. 2). His main point is that the usage of language involves not only syntax but also semantics, and the latter cannot be reduced to the operation of the former. In other words, a computer could be programmed to behave as if it understands a sentence, although it has no such understanding. As a parallel, imagine a robot designed with a sensory system such that it could receive signals corresponding to pain as inputs and, by following the instructions of its programme, generate various behaviours usually associated with pain as outputs. Imagine this robot performed so well that it could convince any human being that it was a live human sufferer. How would the outputs in the latter case be any more real? Analogously, such ‘pain’ signals probably resemble syntax more than semantics. Following the same logic stated in the Chinese Room Argument, it seems more plausible that this robot merely simulates, rather than experiences, pain. A sentient robot ‘knowing’ pain may be compared to the cancer expert (Section 2.1) ‘knowing’ cancer: it could know everything about pain, but it is not ‘personally familiar’ with pain.
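Following the article’s parallel, the robot’s pain ‘programme’ can be sketched as the same kind of rule-following as the Chinese Room. The Python sketch below is a hypothetical illustration: the signal scale, thresholds and behaviours are invented for the example. Inputs are mapped to outputs by comparison against a table, which is syntax, not semantics.

# An invented table pairing signal thresholds with pain behaviours.
# Nothing here feels anything; numbers are compared against numbers.
PAIN_BEHAVIOURS = [
    (0.8, "scream and withdraw the limb"),
    (0.5, "wince and guard the area"),
    (0.2, "report mild discomfort"),
]

def respond_to_signal(intensity: float) -> str:
    """Select the behaviour whose threshold the signal exceeds.

    Analogous to matching symbols by shape: the programme produces
    convincing outputs from inputs without experiencing anything.
    """
    for threshold, behaviour in PAIN_BEHAVIOURS:
        if intensity >= threshold:
            return behaviour
    return "no visible reaction"

print(respond_to_signal(0.9))  # a convincing display, not an experience

On this reading, however lifelike the robot’s display, the programme ‘knows’ pain only as the cancer expert knows cancer: exhaustively, and at arm’s length.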