Machine learning and the economy
Published in Siddhartha Mitra, Robotization and Economic Development, 2023
Note that labelling is a type of prediction: given an image, it predicts the name humans attach to that image. Supervised machine learning, however, can generate many other kinds of prediction. Consider, for example, the amount of soft drink consumed in a city at a given temperature. The computer system is trained on a data set recording the amounts of soft drink consumed at various temperatures. It can then try to fit a linear relationship to the scatter plot of the data and determine the error at each data point: the difference between the consumption predicted by the line at that temperature and the consumption actually recorded. An algorithm adds up a function of the absolute values of these errors to generate a “sum” – for example, the squares of the errors may be added, or just the absolute values themselves. The computer system experiments with the slope and position of the line representing the relationship between temperature and soft drink consumption until this “sum” is minimized, and the resulting relationship is used to make predictions. Note that under machine learning the computer system is equipped with tools, such as algorithms, for making predictions, but it exercises considerable autonomy in making them.
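The curve-fitting procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not production code: the temperature and consumption figures are invented, and a brute-force search over slope and intercept stands in for the “experimentation” the passage describes (a real system would use a closed-form least-squares solution or gradient descent).

```python
# Hypothetical training data: temperatures (deg C) and soft drink sold (litres).
temps = [20, 22, 25, 28, 30, 33, 35]
consumption = [410, 450, 510, 570, 610, 670, 710]

def sum_of_errors(slope, intercept, power=2):
    """Add up |error|**power over all data points.

    power=2 adds squares of errors; power=1 adds absolute values of errors,
    the two variants mentioned in the passage.
    """
    return sum(abs(c - (slope * t + intercept)) ** power
               for t, c in zip(temps, consumption))

# "Experiment" with the slope and position of the line until the sum is minimal.
best = min(((s / 10, i, sum_of_errors(s / 10, i))
            for s in range(0, 400)          # candidate slopes 0.0 .. 39.9
            for i in range(-100, 101)),     # candidate intercepts -100 .. 100
           key=lambda x: x[2])
slope, intercept, _ = best

# The fitted line is then used for prediction, e.g. at 27 degrees:
predicted = slope * 27 + intercept
```

Switching `power=1` in `sum_of_errors` changes the objective from the sum of squared errors to the sum of absolute errors without altering the search itself.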
Terminology and concepts
Published in Henrik Ringbom, Erik Røsæg, Trond Solvang, Autonomous Ships and the Law, 2020
By contrast to the other two elements of ship automation – technical capability and manning – the level of autonomy is significantly less static and may change numerous times during a single voyage. The choice of the level of autonomy depends on the two other elements, but also on more operational considerations, such as the complexity of the task to be performed, the probability of risks, and the severity of the potential consequences of a technical failure. When a ship is navigating in the middle of an ocean, the level of autonomy may be higher as there are probably not that many other vessels or objects that should be avoided; in a narrow and congested fairway, in turn, more active human presence is most probably needed.17 From a legal point of view, however, it is important to note that certain legal tensions with the existing rules emerge immediately when, e.g., the bridge is left unattended. The percentage of time that the vessel operates without human monitoring matters less in this respect.
The impact of robotics and autonomous systems (RAS) across the conflict spectrum
Published in Ash Rossiter, Robotics, Autonomous Systems and Contemporary International Security, 2020
Simply put, autonomy is the ability of a machine to perform a task without human input. Under this definition, an autonomous system is a machine, whether hardware or software, that once activated performs some task or function on its own.12 Autonomy can be described as the cognitive engine that powers robots. There are some existing autonomous weapon systems, such as the radar-guided and computer-controlled Phalanx gun, a close-in weapon system (CIWS) for defense against airborne threats such as anti-ship missiles. These have autonomous settings but follow pre-programmed actions within tightly set parameters in highly controlled environments.
Moral and social ramifications of autonomous vehicles: a qualitative study of the perceptions of professional drivers
Published in Behaviour & Information Technology, 2023
Veljko Dubljević, Sean Douglas, Jovan Milojevich, Nirav Ajmeri, William A. Bauer, George List, Munindar P. Singh
As these rapid changes in AV technology occur, urgent moral questions press researchers and policymakers alike to develop a comprehensive approach to the study of ethics in artificial intelligence (AI) systems, including AVs (Taddeo and Floridi 2018; Winfield and Jirotka 2018). AI systems—including AVs, medical bots, and automated trading systems—shape socioeconomic structures and affect the lives of many citizens (Ford 2015; Frank et al. 2019; Lyons et al. 2021). These AIs influence public safety, particularly with AVs and automated mass transportation systems, as illustrated by the automation problems created by the Boeing 737 Max aeroplane that caused crashes in 2018 and 2019 (Gelles 2019). Although AI systems have the potential to save lives, they also raise important new safety and ethical concerns, including the way AI systems deal with (human) autonomy and dignity, justice and equity, and data protection and privacy, among other issues (EGE 2018).
Cohesion in human–autonomy teams: an approach for future research
Published in Theoretical Issues in Ergonomics Science, 2022
Shan G. Lakhmani, Catherine Neubauer, Andrea Krausman, Sean M. Fitzhugh, Samantha K. Berg, Julia L. Wright, Ericka Rovira, Jordan J. Blackman, Kristin E. Schaefer
Autonomy, defined in the broadest terms, is a technology that performs a function without any human intervention. However, it has been argued that there are different types of autonomy, or even levels of automation, that provide an understanding of human–autonomy interaction (Parasuraman et al., 2000). In effective human–autonomy teams, the autonomy’s and the human’s strengths compensate for each other’s weaknesses (see Fitts’s 1951 ‘Men Are Better At, Machines Are Better At’ (MABA-MABA) list, now known as the ‘Humans Are Better At, Machines Are Better At’ (HABA-MABA) list). For example, humans are better at creating alternative solutions, handling unexpected events, changing roles or tasks frequently, and aggregating spatial and temporal information (i.e. perceived information) into a meaningful ‘whole’. Machines (e.g. automation) are better at processing and analysing large amounts of data to determine potential outcomes quickly, at performing repetitive tasks, and at monitoring over extended periods. However, with machines now having a wider range of capabilities, function allocation is not as clearly delineated as it once was. Today, technology is advanced enough that focusing solely on human needs, a common paradigm in human–automation interaction research, may no longer be sufficient.
Comparative life cycle assessment and costing of an autonomous lawn mowing system with human-operated alternatives: implication for sustainable design improvements
Published in International Journal of Sustainable Engineering, 2021
Michael Saidani, Zhonghao Pan, Harrison Kim, Jason Wattonville, Andrew Greenlee, Troy Shannon, Bernard Yannou, Yann Leroy, François Cluzel
Automation and autonomous solutions are increasingly considered promising and timely means of enhancing the safety, reliability, and productivity of human-operated tasks. Automation is a set of human-defined functions performed by a robot or piece of equipment. Autonomy is a state in which a robot or piece of equipment operates independently, without explicit instructions from a human. According to the Society of Automotive Engineers (SAE International 2016), there are five levels of autonomy, ranging from Level 0, in which the human driver does everything (all manned vehicles), through intermediate levels (manned back-up, or in-field supervision of unmanned vehicles), to Level 4, an automated system that can perform all driving tasks under all conditions a human driver could handle (no local supervision: remote supervision or artificial intelligence). A large number of industries are implementing state-of-the-art automated systems, including the automotive, agriculture, aerospace, defence, mining, energy, and food industries (Productivity Inc 2019). A review of existing autonomous systems in several industries has emphasised that the sustainability and related environmental impact of such systems are barely studied and quantified in comparison with conventional human-operated systems, except in the automotive industry, where there is an increased focus on connected and autonomous vehicles. The potential benefits of automation – which still need to be validated quantitatively with sound and transparent studies – include (i) a faster return on investment, a priori due to lower operating costs, reduced lead times, and increased output; and (ii) a smaller environmental footprint, achieved by streamlining equipment and processes, reducing scrap, and using less energy. Despite promises of increased efficiency (Kurilova-Palisaitiene et al. 2017; Bahri and Ouled Amor 2019), it is not clear whether the paradigm shift towards autonomous systems will change ‘how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., environment)’ (De Melo, Marsella, and Gratch 2019). Indeed, Nouzil et al. (2017) reviewed the sustainable impacts of automation on society and qualitatively discussed how automation could affect society along four dimensions: ecology, economics, politics, and culture. When it comes to the design and development of autonomous technical solutions, most studies do not comprehensively consider the societal, environmental, and economic impacts of automation. In particular, a lack of life cycle analysis of automation processes and autonomous systems has been observed. Further research is therefore needed to study the environmental effects of automation technologies through life cycle assessment, to better understand their ecological footprint.
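The five-level scheme in the excerpt can be written down as a small lookup table. This sketch is based solely on the description above (SAE International 2016 as the excerpt paraphrases it), not on the SAE standard itself; levels 1–3 are grouped as “intermediate” because the passage does not distinguish between them.

```python
# Autonomy levels as summarized in the passage above; levels 1-3 are the
# "intermediate levels" the excerpt mentions without distinguishing them.
INTERMEDIATE = ("Intermediate level: manned back-up, or in-field "
                "supervision of unmanned vehicles.")

SAE_LEVELS = {
    0: "Human driver does everything (all manned vehicles).",
    1: INTERMEDIATE,
    2: INTERMEDIATE,
    3: INTERMEDIATE,
    4: ("Automated system performs all driving tasks, under all conditions "
        "a human driver could handle, without local supervision."),
}

def describe(level: int) -> str:
    """Return the excerpt's description for a given autonomy level (0-4)."""
    return SAE_LEVELS[level]
```

A table like this is useful mainly as a shared vocabulary: downstream code (e.g. logging or risk assessment) can key decisions off the integer level rather than free-text descriptions.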