Legal Personhood for Artificial Intelligences
Published in Wendell Wallach, Peter Asaro, Machine Ethics and Robot Ethics, 2020
What is the relevance of these legal thought experiments for the debate over the possibility of artificial intelligence? A preliminary answer to this question has two parts. First, putting the AI debate in a concrete legal context acts as a pragmatic Occam’s razor. By reexamining positions taken in cognitive science or the philosophy of artificial intelligence as legal arguments, we are forced to see them anew in a relentlessly pragmatic context. Philosophical claims that no program running on a digital computer could really be intelligent are put into a context that requires us to take a hard look at just what practical importance the missing reality could have for the way we speak and conduct our affairs. In other words, the legal context provides a way to ask for the “cash value” of the arguments. The hypothesis developed in this Essay is that only some of the claims made in the debate over the possibility of AI do make a pragmatic difference, and it is pragmatic differences that ought to be decisive.
Introduction to Artificial Intelligence
Published in Richard E. Neapolitan, Xia Jiang, Artificial Intelligence, 2018
Cognitive science is the discipline that studies the mind and its processes. It concerns how information is represented and processed by the mind. It is an interdisciplinary field spanning philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology, and emerged as its own discipline somewhat concurrently with AI. Cognitive science involves empirical studies of the mind, whereas AI concerns the development of an artificial mind. However, owing to their related endeavors, each field is able to borrow from the other.
Introduction
Published in Mireille Hildebrandt, Antoinette Rouvroy, Law, Human Agency and Autonomic Computing, 2011
The normative implications of technological innovation have not always been at the forefront of research in the philosophy of technology (Rip 2003). This may be due to a fear of being associated with either utopian or dystopian visions of a reified Technology. Reiterating the idea that, though ‘technology is neither good nor bad, it is never neutral’, we contend that there is an urgent need to assess the normativities triggered by technological change without, however, falling prey to moralism. Tracing potential normative impacts means investigating what types of behaviours are invited or inhibited, enforced or ruled out by a particular technological device or infrastructure (Hildebrandt 2008b; Verbeek 2005, 2006). This line of research is clearly related to research into the mediation of perception and cognition performed by the technologies we use and live with. The moral evaluation of these normative impacts is another matter: though it requires a delineation of how behavioural patterns are reconfigured by a specific technology, we should not leap into moral condemnation or celebration before those impacts have been carefully investigated. A philosophy of technology that is aware of the normative implications of specific technological innovations could benefit from the practical demands that inform legal research, because – unlike ethics and moral philosophy – law forces one to take a position and consider the practical consequences. For precisely this reason, legal philosopher Solum has contributed to the discourse on whether artificial intelligences are ‘really’ intelligent by investigating whether they could function as a trustee and whether they might qualify for constitutional protection (Solum 1992: 1232–33): First, putting the AI debate in a concrete legal context acts like an Occam's razor.
By reexamining positions taken in cognitive science or the philosophy of artificial intelligence as legal arguments, we are forced to see them anew in a relentlessly pragmatic context. Second, and more controversially, we can see the legal system as a repository of knowledge, a formal accumulation of practical judgements. The law embodies core insights about the way the world works and how we evaluate it. (…) Hence, transforming the abstract debate over the possibility of AI into an imagined hard case forces us to check our intuitions and arguments against the assumptions that underlie social decisions made in many other contexts.
Design science research in construction management: multi-disciplinary collaboration on the SightPlan system
Published in Construction Management and Economics, 2020
The conception and development of the SightPlan system indeed leveraged knowledge and expertise in two domains: the domain of construction management (Ray’s strength) and the domain of cognitive science (Barbara’s strength). Each of these, in and of itself, is very broad. Construction management needs no further definition in this journal. However, what is meant by cognitive science warrants clarification, especially in light of the ongoing discussion on the role of social sciences in regard to research in CME (e.g. Betts and Lansley 1993, Seymour et al. 1997, Raftery et al. 1997, Runeson 1997, Rooke et al. 1997, Chau et al. 1998, Voordijk 2009, Volker 2019), a topic revisited later in this paper. “Cognitive science is the interdisciplinary study of mind and intelligence, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology. Its intellectual origins are in the mid-1950s when researchers in several fields began to develop theories of mind based on complex representations and computational procedures” (Stanford 2018). As such, cognitive science is highly relevant to the domain of CME. This combination and leveraging of two domains of expertise offered a rich setting for model development, experimentation, learning, and generation of new knowledge.