Robots and Robot Capabilities
Published in Aimee Van Wynsberghe, Healthcare Robots, 2016
Robot learning may be used to refer to a feature of a robot – the robot can adapt by changing its behaviour based on its previous experience (Franklin, 1997) – or to the way in which the robot is programmed – learning by demonstration (Friedrich, 1996; Billard, 2008), mimicking (Mayer, 2010), or reinforcement (Billard, 2008; Santoro, 2008). The concept of robot learning invariably increases the robot's degree of autonomy and the success with which it will manoeuvre in a new, unknown environment. Of course, this way of acting and interacting inevitably invites the concern that a robot is then free to choose a certain course of action. One may wonder how we can ever predict what the robot will do. And if we cannot predict what the robot will do, how can we ever ensure that it is safe?
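The excerpt's first sense of robot learning – adapting behaviour from previous experience – can be made concrete with a minimal sketch of tabular Q-learning on a toy one-dimensional corridor. The grid size, reward, and hyperparameters below are illustrative assumptions and are not drawn from the cited sources.

```python
# Minimal sketch: a robot adapting its behaviour from experience via
# tabular Q-learning on a toy 1-D corridor (all details are illustrative).
import numpy as np

N_STATES = 5          # corridor cells; the goal is the right-most cell
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit past experience, sometimes explore.
        if rng.random() < EPSILON:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(q_table[state]))
        next_state = int(np.clip(state + ACTIONS[a], 0, N_STATES - 1))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the action-value estimate using this new experience.
        q_table[state, a] += ALPHA * (
            reward + GAMMA * q_table[next_state].max() - q_table[state, a]
        )
        state = next_state

print(np.argmax(q_table, axis=1))  # learned action per state (1 = move right)
```

Each update nudges the robot's value estimates toward what it actually experienced, which is exactly the "adapt by changing its behaviour based on its previous experience" sense of learning – and also the source of the unpredictability concern raised above.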
Application of Artificial Intelligence Algorithms for Robot Development
Published in S. S. Nandhini, M. Karthiga, S. B. Goyal, Computational Intelligence in Robotics and Automation, 2023
R. M. Tharsanee, R. S. Soundariya, A. Saran Kumar, V. Praveen
The integration of AI and robotics has led to the development of a new type of learning system, commonly known as robot learning. There are five significant areas (see Figure 4.7) in which machine learning has had a substantial influence on industrial robotics. Computer vision allows machines to identify and organize objects based on properties such as movement, shape, size and color. Anomaly detection with neural networks and convolutional neural networks is an excellent example of computer vision technology. Imitation learning, a category of reinforcement learning, aims to mimic human behavior, particularly that of toddlers and infants, in solving a given task: the machine is trained to execute a task from demonstrations by learning a mapping between observations and actions, which lets the robot act appropriately in its environment while improving its rewards. Self-supervised learning methods allow robots to generate their own training samples to improve performance; a model pre-trained on close-range data is used to interpret long-range, ambiguous sensor data, and such methods are built into robots and optical devices to detect objects. Autonomous vehicles, which involve the use of unsupervised and reinforcement learning algorithms, are considered a variant of self-supervised learning. Assistive technologies are particularly suitable for the healthcare sector in numerous use cases such as disease diagnosis and elderly care. Multi-agent learning is another class of reinforcement learning that allows several agents to interact in a shared environment. The multiple agents within the common environment interact with one another and execute tasks by collectively learning, observing and coordinating the multiple outcomes [20].
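The imitation-learning idea described above – learning a mapping from observations to actions from demonstrations – can be sketched as plain behaviour cloning, i.e. supervised regression. The demonstration data, dimensions, and linear policy below are synthetic assumptions for illustration only.

```python
# Minimal sketch of imitation learning by behaviour cloning: fit a linear
# policy that maps observations to expert actions from demonstration pairs.
# The demonstration data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Pretend demonstrations: 200 observations (4-D state) and expert actions (2-D).
obs = rng.normal(size=(200, 4))
true_policy = np.array([[0.5, -1.0], [2.0, 0.3], [0.0, 1.5], [-0.7, 0.2]])
actions = obs @ true_policy + 0.05 * rng.normal(size=(200, 2))

# Behaviour cloning = supervised regression from observations to actions.
weights, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(observation):
    """Imitated policy: predict the action the demonstrator would take."""
    return observation @ weights

print(policy(rng.normal(size=4)))  # action predicted for a new observation
```

In practice the regressor is usually a neural network and the observations come from sensors, but the structure – demonstrations in, observation-to-action mapping out – is the same.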
Towards Knowledge Sharing Oriented Adaptive Control
Published in Cybernetics and Systems, 2022
Guixian Li, Yufeng Xu, Haoxi Zhang, Edward Szczerbicki
In order to reduce the costs of the learning process, extensive efforts have been made (Dai et al. 2008; Taylor and Stone 2009). Approaches such as learning invariant features (Gupta et al. 2017) and manifold alignment (Ammar et al. 2015; Daftry, Bagnell, and Hebert 2016) have been introduced. These methods aim to improve the efficiency of robot learning by sharing knowledge among robots. Additionally, transferred knowledge can accelerate the training of target robots and help improve the target robot's performance on untrained tasks (Taylor and Stone 2009). Knowledge sharing, or transfer learning, has been studied in various fields (Bocsi et al. 2013). Knowledge sharing in robotics is divided into two directions: (i) transfer across robots and (ii) transfer across tasks. The former is about transferring the collected knowledge to another robot (Devin et al. 2017; Gupta et al. 2017; Pereida, Helwa, and Schoellig 2018). In contrast, the latter focuses on transferring knowledge learned from an old task to a new task on the same robot (Cavallo et al. 2014; Wang, Song, and Zhang 2008). Nevertheless, existing approaches almost always encounter the problem that the target robot does not adapt well to the source robot's knowledge in practice (Adlakha and Zheng 2020; Kim et al. 2020; Taylor and Stone 2009). Therefore, it remains challenging to enable the target robot to adapt to other robots' inverse dynamics. This paper introduces a novel approach to address the adaptation problem in sharing knowledge between robots with different dynamics.
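As a toy illustration of transfer across robots (not the paper's own method), one can use a linear inverse-dynamics model learned on a data-rich source robot as a prior when fitting a target robot from only a few samples; everything below – the data, the linear model form, and the regularization strength – is an assumption made for the sketch.

```python
# Minimal sketch of sharing knowledge across robots with different dynamics:
# a linear inverse-dynamics model fitted on a source robot is used as a
# prior when adapting to a target robot from very little data.
import numpy as np

rng = np.random.default_rng(1)

def make_robot_data(coeffs, n):
    """Synthetic (state, torque) pairs for a robot with linear dynamics."""
    states = rng.normal(size=(n, 3))
    torques = states @ coeffs + 0.01 * rng.normal(size=n)
    return states, torques

source_coeffs = np.array([1.0, -0.5, 2.0])
target_coeffs = np.array([1.2, -0.4, 1.8])   # similar but not identical robot

# Plenty of data for the source robot, very little for the target robot.
Xs, ys = make_robot_data(source_coeffs, 1000)
Xt, yt = make_robot_data(target_coeffs, 10)

w_source, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# Adapt to the target: least squares regularized toward the source weights,
#   w_target = argmin ||Xt w - yt||^2 + lam * ||w - w_source||^2
lam = 1.0
A = Xt.T @ Xt + lam * np.eye(3)
b = Xt.T @ yt + lam * w_source
w_target = np.linalg.solve(A, b)

print("source :", np.round(w_source, 2))
print("adapted:", np.round(w_target, 2))
```

The regularizer is what makes the transferred knowledge useful: with only ten target samples, pulling the solution toward the source robot's weights gives a far more stable estimate than fitting the target robot from scratch.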
Development of writing task recombination technology based on DMP segmentation via verbal command for Baxter robot
Published in Systems Science & Control Engineering, 2018
Chunxu Li, Chenguang Yang, Andy Annamalai, Qingsong Xu, Shaoxiang Li
Recently, content-based retrieval methods have gained significance in motion-capture data retrieval (Lew, Sebe, Djeraba, & Jain, 2006). In the data matching process, the start frame and the end frame of the search-condition sequence are first indexed into the library to select possible alternative segments in the motion-capture database, and finally a dynamic time warping (DTW) method is used to calculate the similarity and determine the final search result (Yao, Wen, & Lu, 2015). With the continuous development of robot research, robot movement behaviour has become more complex and demands a much greater learning ability from the robot. At the same time, traditional algorithms struggle with complex movements whose laws of motion are not easily obtained, such as hitting a ball or performing a writing task (Yao et al., 2015). Furthermore, the robot requires the ability to learn in order to enhance its intelligence, so that it can achieve self-compensating correction and interact with a random, dynamic environment to deal with sudden and unknown situations (Yao et al., 2015). The main advantage of robot learning is that it can find effective control strategies to complete complex motion tasks where traditional methods are rather challenging.
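The DTW similarity used in the matching step can be sketched with the classic dynamic-programming recurrence; the two one-dimensional trajectories below are synthetic stand-ins for joint-angle or stroke sequences, not data from the cited work.

```python
# Minimal sketch of dynamic time warping (DTW) for comparing a query motion
# segment against candidates from a motion-capture library.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(len(a)*len(b)) DTW with an absolute-difference local cost."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            # Allow stretching or compressing in time via the three predecessors.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

query = np.sin(np.linspace(0, np.pi, 30))        # query stroke
candidate = np.sin(np.linspace(0, np.pi, 45))    # same stroke, different speed
unrelated = np.cos(np.linspace(0, np.pi, 40))    # different motion

print(dtw_distance(query, candidate))   # small: sequences align well
print(dtw_distance(query, unrelated))   # larger: poor alignment
```

Because DTW aligns sequences non-linearly in time, the same stroke executed at a different speed still produces a small distance, which is what makes it suitable for retrieving candidate segments of varying length.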