Super intelligent robots and other predictions
Published in Arkapravo Bhaumik, From AI to Robotics, 2018
This apocalyptic future, in which technological intelligence is a few million times that of average human intelligence and technological progress is so rapid that it becomes difficult to track, is known as the ‘technological singularity’, or simply the ‘singularity’. AI scientists also relate this event to the coming of superintelligence [43]: artificial entities whose cognitive abilities are a million times richer in intellect and stupendously faster than the processing of the human brain. The irony is that nowadays the monikers of Terminator and Skynet [318], as shown in Figure 10.1, are quickly attached to research and innovation in AI [185] and robotics [66], such as Google Cars [303], robotic cooks [295] and waiters [319], the OpenWorm project [216] etc., and this has consequently led to fear mongering [255,306,355] and to the drafting of guidelines [24,224,274], rules [364] and laws [60,342,348] to tackle this apocalypse of the future. These edicts attempt to restore human superiority either by reducing robots to mere artifacts and machines or by making a moral call to the AI scientist, insisting on awareness of the consequences. Advancing AI therefore clearly sets the proverbial cat among the pigeons. Beyond the media, science fiction is replete with such futuristic scenarios. Čapek’s iconic play of the 1920s, R.U.R. (Rossum’s Universal Robots), which gave us the word ‘robot’, ends with the death of the last human being and a world dominated by robots capable of feelings such as endearment, love and attachment. Other iconic tales of robocalypse and dystopia are HAL (of 2001: A Space Odyssey), set in 2001; Blade Runner, set in 2019; I, Robot, in 2035; Terminator, set in 2029; while Wall-E is set some 800 years in the future, in 2805. All of these provide examples of a futuristic human-robot society, and while nearly all of them are unsettling, each at the very least confirms a proliferation of AI and robots in both the near and the far-off future.
It is interesting to note that, in more academic circles, Toda’s fungus eaters are tagged with a sell-by date of 2061.
Intelligence in cyberspace: the road to cyber singularity
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2021
Ishaani Priyadarshini, Chase Cotton
The concept of technological singularity is not new. The term was coined back in 1993, when Vernor Vinge presented the underlying idea of creating intelligence (Vinge, 1993). Technological singularity may be defined as a situation in which it is believed that artificial intelligence would be capable of self-improvement, or of building machines smarter and more powerful than itself, ultimately surpassing human control or understanding (Nicolescu, 2017). The concept primarily refers to a situation where ordinary human intelligence is enhanced or overtaken by artificial intelligence. Vinge describes several ways in which the technological singularity might be attained:
- Computers that are aware and superhumanly intelligent may be developed.
- Large computer networks (and their associated users, both humans and programs) may wake up as superhumanly intelligent entities.
- Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
- Biological science may provide a means to improve natural human intellect.
Radical systems thinking and the future role of computational modelling in ergonomics: an exploration of agent-based modelling
Published in Ergonomics, 2020
Matt Holman, Guy Walker, Terry Lansdown, Adam Hulme
The technological singularity is the hypothesis that the unleashing of AI systems with superhuman intelligence will set off a cascade of exponential technological growth, resulting in irreversible changes to human civilisation (Muehlhauser and Salamon 2012). The singularity hypothesis posits that a superhuman AI system would self-amplify its power and functional capabilities. This positive feedback loop (i.e. more computing power enables more rapid self-optimisation, and so on) would scale superlinearly yet could, at first, be barely perceptible. It is argued that a technological singularity could bring forth bottomless virtue, or it could be an extinction scenario (Vinge 1993; Kurzweil 2014; Bostrom 2003; Goertzel 2012; Hancock 2017).