Towards the Hardest Things
Published in The Future of the Artificial Mind, 2021
Alessio Plebe, Pietro Perconti
Top-down approaches start from a pre-existing ethical framework. So let us assume that someone is either a utilitarian or a Kantian committed to a deontological ethic. Based on his or her own biases, the utilitarian or Kantian will try to specify a set of obligations that intelligent machines should obey; in other words, he or she will try to specify constraints that the designers of intelligent machines, who are also moral agents, must obey if they are to design artifacts that are socially acceptable. The goal can be achieved either by building utilitarian or Kantian machines, or by designing machines whose behavior is compatible with utilitarian or Kantian ethics. Bottom-up approaches, on the other hand, emphasize the development of moral sensibility in machines, and take a stance similar to evolutionary theory, imagining the path that machines should take to acquire the moral sensibility that humans have gained through millennia of biological evolution and nurture. Such an approach can be found in developmental robotics, which grew out of the challenge of developing cognitive skills and is rapidly expanding into the development of moral skills (Cangelosi and Schlesinger, 2015).
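To make the top-down contrast concrete, here is a minimal toy sketch in which a deontological commitment is encoded as a hard constraint that vetoes actions and a utilitarian commitment as a scoring rule over the remaining ones. The action fields, rules, and utility values are illustrative assumptions, not anything proposed in the chapter.

```python
# Illustrative sketch only: contrasting a "top-down" deontological filter
# with a utilitarian scorer. All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_person: bool       # would the action injure someone?
    expected_utility: float  # estimated aggregate benefit

# Deontological style: hard constraints veto impermissible actions outright.
def deontological_filter(actions):
    return [a for a in actions if not a.harms_person]

# Utilitarian style: rank permissible actions by expected utility.
def utilitarian_choice(actions):
    return max(actions, key=lambda a: a.expected_utility)

candidates = [
    Action("shove bystander to clear path", harms_person=True, expected_utility=0.9),
    Action("wait and replan route", harms_person=False, expected_utility=0.4),
]

permissible = deontological_filter(candidates)
print(utilitarian_choice(permissible).name)  # -> "wait and replan route"
```

A bottom-up approach, by contrast, would not hand-code either the filter or the utilities but would expect the machine to acquire such dispositions through its own developmental history.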
World models and predictive coding for cognitive and developmental robotics: frontiers and challenges
Published in Advanced Robotics, 2023
Tadahiro Taniguchi, Shingo Murata, Masahiro Suzuki, Dimitri Ognibene, Pablo Lanillos, Emre Ugur, Lorenzo Jamone, Tomoaki Nakamura, Alejandra Ciria, Bruno Lara, Giovanni Pezzulo
How can we develop robots that can autonomously explore the environment, acquire knowledge, and learn skills continuously? Creating autonomous cognitive and developmental robots that can co-exist in our society has been considered an ultimate goal of cognitive and developmental robotics and artificial intelligence (AI) since the inception of these fields. Autonomous robots that can develop in the real world and collaborate with us may also be called embodied artificial general intelligence (AGI). The recent success of artificial intelligence depends primarily on large-scale human-annotated data. However, human infants can acquire knowledge and skills from sensorimotor information through physical interactions with their environment and social interactions with others (e.g. their parents or caregivers). Importantly, if the aim is to build robots that can continuously develop through embodied interactions, their learning process must be strongly based on their own sensorimotor experiences. This autonomous learning process that occurs throughout development is also referred to as continual or lifelong learning [1–3], and is considered the foundation for the emergence of both individual and social abilities necessary for robots with adaptive and collaborative capabilities.
Special issue on world models and predictive coding in robotics (Part I)
Published in Advanced Robotics, 2023
Tadahiro Taniguchi, Dimitri Ognibene, Lorenzo Jamone, Emre Ugur, Pablo Lanillos, Alejandra Ciria, Masahiro Suzuki, Shingo Murata, Yoshihiro Nakata, Tomoaki Nakamura
The pursuit of autonomous cognitive-developmental robots has long been a high aspiration within the field of robotics, and creating cognitive dynamics that enable robots to learn and adapt through sensorimotor interactions is a key challenge in cognitive and developmental robotics. This special issue focuses on the concept of world models, which allow robots to predict future sensory observations and optimize behavior based on the sensory consequences of actions, in line with the framework of predictive coding and the free energy principle in contemporary neuroscience.
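The shared mechanism can be illustrated with a minimal sketch: a forward model predicts the next sensory observation from the current observation and action, is updated by its prediction error, and is then queried to select actions by their predicted sensory consequences. The linear model, toy dynamics, and update rule below are purely illustrative assumptions, not the architectures presented in the special issue.

```python
# Minimal sketch of the world-model idea, assuming linear dynamics; everything
# here (shapes, the toy environment, the learning rule) is an illustrative
# assumption, not a model from the special issue.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim = 4, 2

# World model: predicts the next observation from the current observation and action.
A = rng.normal(scale=0.1, size=(obs_dim, obs_dim))
B = rng.normal(scale=0.1, size=(obs_dim, act_dim))

def predict(obs, act):
    return A @ obs + B @ act

# Hidden "true" dynamics the robot does not know.
A_true = np.eye(obs_dim) * 0.9
B_true = rng.normal(size=(obs_dim, act_dim))

lr = 0.01
for step in range(2000):
    obs = rng.normal(size=obs_dim)
    act = rng.normal(size=act_dim)
    next_obs = A_true @ obs + B_true @ act   # sensory consequence of the action
    err = predict(obs, act) - next_obs       # prediction error ("surprise")
    # Gradient step on squared prediction error, in the spirit of predictive coding.
    A -= lr * np.outer(err, obs)
    B -= lr * np.outer(err, act)

# The learned model can then be queried to pick the action whose predicted
# sensory consequence is closest to a preferred (goal) observation.
goal = np.zeros(obs_dim)
obs = rng.normal(size=obs_dim)
candidates = [rng.normal(size=act_dim) for _ in range(32)]
best_action = min(candidates, key=lambda a: np.linalg.norm(predict(obs, a) - goal))
```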
On building a person: benchmarks for robotic personhood
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2020
While developmental robotics has made a number of advances in recent years (e.g., Asada et al., 2009; Breazeal & Scassellati, 2002; Cangelosi & Schlesinger, 2015; Lungarella, Metta, Pfeifer, & Sandini, 2003; Minato, Thomas, Yoshikawa, & Ishiguro, 2010; Scassellati et al., 2006; Silva, Correia, & Lyhne Christensen, 2017), none of these advances can compare to the mastery of cultural behavior and the understanding of self and others as persons that humans acquire during development. Some success has occurred for earlier stages of development. Imitation by robots of humans and other robots has been studied, and robots are able to match their own actions to the actions of others, even beings with different morphological properties (Alissandrakis, Nehaniv, & Dautenhahn, 2003; Breazeal & Scassellati, 2002; Dautenhahn & Nehaniv, 2002; Kaipa, Bongard, & Meltzoff, 2010; Nehaniv & Dautenhahn, 2002). This provides a means for the development of agent-neutral representations of actions (e.g., Breazeal, Buchsbaum, Gray, Gatenby, & Blumberg, 2005). Making use of such agent-neutral representations acquired through observational learning of a human actor performing sequences of object-directed actions, Dominey and Warneken (2011) demonstrated that collaboration between a robotic arm with a gripping hand and a human agent was possible. The robotic hand ‘understood’ the common goal and sequence of actions to be achieved by gripping objects and moving them to specific locations, and could obey commands, or play either of two cooperative roles, or both of them, when required. While these achievements are impressive as early stages of robotic social development, typical of the 2-year-old child, moving from basic actions and ‘shared intentions’ in constrained situations like these toward understanding a wider range of intentional relations, and then combining them into scripts and normative schemas, is more challenging.
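As a rough illustration of what an agent-neutral action representation and a shared plan might look like in code, here is a hedged sketch loosely in the spirit of the cooperation scenario described above; the class names, fields, and block-moving plan are hypothetical and not taken from Dominey and Warneken's system.

```python
# Sketch of an "agent-neutral" action representation and a tiny shared plan.
# All names and plan contents are hypothetical illustrations.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Action:
    """An action described by its effect on an object, not by who performs it."""
    verb: str    # e.g. "grasp", "place"
    obj: str     # the object acted on
    target: str  # location or recipient

@dataclass
class SharedPlan:
    """An ordered sequence of agent-neutral actions plus a role assignment."""
    steps: List[Action]
    assignment: List[str]  # "human" or "robot" for each step

    def robot_steps(self):
        return [s for s, who in zip(self.steps, self.assignment) if who == "robot"]

plan = SharedPlan(
    steps=[
        Action("grasp", "red block", "table"),
        Action("place", "red block", "box"),
        Action("grasp", "blue block", "table"),
        Action("place", "blue block", "box"),
    ],
    assignment=["human", "human", "robot", "robot"],
)

# Because the representation is agent-neutral, swapping cooperative roles only
# changes the assignment, not the plan itself.
plan.assignment = ["robot", "robot", "human", "human"]
print(plan.robot_steps())
```

The harder step the excerpt points to, from such fixed plans toward open-ended intentional relations, scripts, and normative schemas, is precisely what such a hand-built structure does not capture.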