Intelligent Agents in Telecommunication Networks
Published in Witold Pedrycz, Athanasios Vasilakos, Computational Intelligence in Telecommunications Networks, 2018
Costas Tsatsoulis, Leen-Kiat Soh
Intelligence can be defined as the ability to learn, to adapt, to improve one’s performance over time, to reason and make decisions, to act without being instructed, to plan and execute a complicated task involving collaboration with other agents or human users, to traverse a network and perform assigned tasks at each planned stop, and so on. We see an intelligent agent as a program that is mobile, autonomous, reactive, and communicative. Of course, other attributes such as those mentioned above can be built into the behavior of the program code to increase its intelligence. In our opinion, intelligent agents span a spectrum of degrees of intelligence, just as actual life forms in our world range from single-purpose, single-cell organisms to extremely complex life forms like humans.
Intelligent Agent Technology
Published in Jay Liebowitz, The Handbook of Applied Expert Systems, 2019
David Prerau, Mark R. Adler, Dhiraj K. Pathak, Alan Gunderson
Among the issues that will become increasingly important as the use of intelligent agents becomes more widespread are the privacy and security of derived agent information, and the impartiality of the agent system. An intelligent agent is given, and infers, information about the likes and preferences of a user in professional and personal areas. This information can be of great interest to marketers, business competitors, personal acquaintances, and many others. Many users will not want this information to be available to such people, and intelligent agent systems must ensure that promised privacy is maintained.
Supplier Selection for Protective Relays of Power Transmission Network with the Fuzzy Approach
Published in Sachin K. Mangla, Sunil Luthra, Suresh Kumar Jakhar, Anil Kumar, Nripendra P. Rana, Sustainable Procurement in Supply Chain Operations, 2019
The most popular and best-known MCDM techniques in this regard include the analytic hierarchy process (AHP) (Saaty, 2001) and the analytic network process (ANP), which are recommended by many researchers (Liou et al., 2014). Both methods use pairwise comparisons and expert judgment to estimate the evaluation criteria. However, because discrete scales cannot capture the imprecision and unreliability inherent in human thinking, classic AHP and, more generally, crisp MCDM methods cannot accurately represent the significance of qualitative criteria. For this reason, fuzzy AHP (FAHP) was first introduced by van Laarhoven and Pedrycz (1983). It is a variant of classic AHP that transforms linguistic (qualitative) judgments into fuzzy values in order to build fuzzy pairwise comparison matrices. These matrices are used to estimate the relative weights of the criteria and, consequently, to rank the available options (Calabrese et al., 2013). Since then, several FAHP methods have been proposed to address particular problems (Buckley, 1985; Chang, 1996; Herrera-Viedma et al., 2004; Kahraman et al., 2004; Zeng et al., 2007). Other authors have used different methods for selecting the best supplier, such as ANP (Lin et al., 2010), fuzzy ANP (Vinodh et al., 2011), fuzzy ELECTRE (Montazer et al., 2009; Sevkli, 2010), fuzzy PROMETHEE (Chai et al., 2012; Chen et al., 2011), TOPSIS (Saen, 2010), fuzzy TOPSIS (Wang et al., 2009), fuzzy VIKOR (Wu et al., 2009), fuzzy DEMATEL (Chang et al., 2011), fuzzy SMART (Chou & Chang, 2008), Grey Theory (Golmohammadi & Mellat-Parast, 2012), QFD (Ansari & Batoul, 2006), and fuzzy QFD (Bevilacqua et al., 2006; Lima-Junior & Carpinetti, 2016). Data envelopment analysis (DEA) is widely used in various studies for selecting a suitable supplier (Azadeh & Alem, 2010; Azadi et al., 2015; Falagario et al., 2012; Wu & Blackhurst, 2009). Cooper et al. (2007) argue that when DEA is used, the number of alternatives should be at least three times the number of inputs and outputs (criteria), which limits the criteria and alternatives that can be considered. Besides DEA, linear programming (Lin, 2012; Ozkok & Tiryaki, 2011; Ustun & Demi, 2008; Wang & Yang, 2009), nonlinear programming (Hsu et al., 2010; Rezaei & Davoodi, 2012; Yeh & Chuang, 2011), multi-objective programming (MOP) (Amin & Zhang, 2012; Feng et al., 2011; Haleh & Hamidi, 2011; Shankar et al., 2013; Shaw et al., 2012; Tsai & Hung, 2009; Yu et al., 2012), and stochastic programming (Kara, 2011; Li & Zabinsky, 2011) are other important mathematical programming (MP) methods used for selecting the best supplier. Artificial intelligence (AI) is the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and acts to maximize its chances of success (Russell et al., 2003). Genetic algorithms (GA) (Sadeghieh et al., 2012), artificial neural networks (ANN) (Güneri et al., 2011), Rough Set Theory (Bai & Sarkis, 2010), and Grey Theory (Li et al., 2007) are the main AI methods identified through the literature review.
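The FAHP procedure described above can be sketched briefly. The following is a minimal illustration of the geometric-mean approach of Buckley (1985) cited in the passage: linguistic judgments are encoded as triangular fuzzy numbers (l, m, u) in a pairwise comparison matrix, fuzzy weights are derived, and then defuzzified to a crisp ranking. The 3×3 comparison matrix is hypothetical example data, not taken from any of the cited studies.

```python
# Fuzzy AHP weight derivation (Buckley's geometric-mean method), sketch only.
# Triangular fuzzy numbers are tuples (l, m, u); the matrix M is hypothetical.

def tfn_mul(a, b):
    """Component-wise product of two triangular fuzzy numbers."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_geomean(row):
    """Fuzzy geometric mean of one row of the comparison matrix."""
    prod = (1.0, 1.0, 1.0)
    for x in row:
        prod = tfn_mul(prod, x)
    n = len(row)
    return tuple(p ** (1.0 / n) for p in prod)

def fuzzy_ahp_weights(matrix):
    """Return crisp, normalized criterion weights from a fuzzy matrix."""
    r = [tfn_geomean(row) for row in matrix]
    total = [sum(g[i] for g in r) for i in range(3)]   # sums of l, m, u
    # Fuzzy weight w_i = r_i * (sum_j r_j)^(-1); the reciprocal of a
    # triangular fuzzy number reverses its lower and upper bounds.
    fuzzy_w = [(g[0] / total[2], g[1] / total[1], g[2] / total[0]) for g in r]
    crisp = [sum(w) / 3.0 for w in fuzzy_w]            # centroid defuzzification
    s = sum(crisp)
    return [c / s for c in crisp]

# Hypothetical judgments: criterion 1 moderately outranks 2, strongly outranks 3.
ONE = (1, 1, 1)
M = [
    [ONE,              (2, 3, 4),      (4, 5, 6)],
    [(1/4, 1/3, 1/2),  ONE,            (1, 2, 3)],
    [(1/6, 1/5, 1/4),  (1/3, 1/2, 1),  ONE],
]
weights = fuzzy_ahp_weights(M)
print([round(w, 3) for w in weights])  # criterion 1 receives the largest weight
```

Once the criterion weights are obtained, suppliers can be ranked by their weighted scores, which is the final step the passage attributes to FAHP-based selection.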
Effects of Language on Angry drivers’ Situation Awareness, Driving Performance, and Subjective Perception in Level 3 Automated Vehicles
Published in International Journal of Human–Computer Interaction, 2023
Sushmethaa Muhundan, Myounghoon Jeon
An intelligent agent is a robust and flexible computing system that is autonomous and situated in an environment to act intelligently when performing user-given tasks (Glavic, 2006). An in-vehicle agent is an intelligent agent in the context of driving (Biondi et al., 2019). In-vehicle agents can be utilized to balance the emotional state of the driver. Studies show that the presence of in-vehicle agents and their social interaction with drivers have a beneficial effect on drivers’ trust levels (Kraus et al., 2016). Emotion detection and agent interventions can positively impact drivers’ safety and situation awareness (Cramer et al., 2008; Williams et al., 2014). Up to level 3 automated vehicles, manual driving is still required to varying extents depending on the context. The current study focuses on this stage and explores how in-vehicle agent interactions impact drivers. Specifically, the study focuses on understanding the influence of an in-vehicle agent’s language, and of the presence or absence of an in-vehicle agent, on driving performance, situation awareness, and subjective perception in the context of level 3 automated vehicles.
Crowd evacuation simulation model with soft computing optimization techniques: a systematic literature review
Published in Journal of Management Analytics, 2021
Hamizan Sharbini, Roselina Sallehuddin, Habibollah Haron
Zainuddin and Aik (2012) used an ANN-based cellular automaton model for the decision-making ability of pedestrians and simulated an exit-selection phenomenon. Agents in the simulation are endowed with autonomy to find their target exits. Intelligent agents have the capability to adapt their behaviour by learning from the environment as well as by interacting with other agents. Experimental results reveal a relationship between crowd density and the choice of exits. Sharma, Otunba, Ogunlana, and Tripathy (2012) worked on a prediction framework in conjunction with genetic algorithms to simulate intelligent agents’ evacuation; their intelligent model achieved a prediction accuracy of 86%. Yuen, Lee, and Lam (2014), meanwhile, established ANN-based work mimicking human route-choice decisions. All of these ANN-based models, however, are still being investigated for their feasibility under anticipated fire scenarios. A performance analysis of the approach in terms of standard metrics, and a comparative study of the proposed approach against other approaches, is still not evident.
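The exit-selection behaviour these models capture can be illustrated with a very small sketch. The cost function and its weights below are hypothetical, not the cited authors’ models: each agent simply scores exits by distance and by the congestion already assigned to each exit, so that crowd density feeds back into exit choice, in the spirit of the relationship the surveyed experiments report.

```python
# Illustrative exit-selection sketch: agents on a grid pick the exit that
# minimizes a weighted sum of distance and congestion. The utility weights
# (alpha, beta) and grid layout are hypothetical example values.
import math
import random

def choose_exit(agent_pos, exits, densities, alpha=1.0, beta=2.0):
    """Return the index of the exit minimizing alpha*distance + beta*congestion."""
    def cost(i):
        ex, ey = exits[i]
        dist = math.hypot(agent_pos[0] - ex, agent_pos[1] - ey)
        return alpha * dist + beta * densities[i]
    return min(range(len(exits)), key=cost)

random.seed(0)
exits = [(0, 10), (20, 10)]     # two exits on opposite sides of a 21x21 grid
agents = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(50)]
densities = [0.0, 0.0]
choices = []
for pos in agents:
    i = choose_exit(pos, exits, densities)
    choices.append(i)
    densities[i] += 1.0         # each choice makes that exit more congested
print({e: choices.count(e) for e in (0, 1)})
```

Raising `beta` makes agents spread more evenly across exits even when one exit is nearer, which is the density-dependent behaviour the cited experiments observe; the learned (ANN) models replace this hand-written cost with a trained decision function.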
Seven HCI Grand Challenges
Published in International Journal of Human–Computer Interaction, 2019
Constantine Stephanidis, Gavriel Salvendy, Margherita Antona, Jessie Y. C. Chen, Jianming Dong, Vincent G. Duffy, Xiaowen Fang, Cali Fidopiastis, Gino Fragomeni, Limin Paul Fu, Yinni Guo, Don Harris, Andri Ioannou, Kyeong-ah (Kate) Jeong, Shin’ichi Konomi, Heidi Krömker, Masaaki Kurosu, James R. Lewis, Aaron Marcus, Gabriele Meiselwitz, Abbas Moallem, Hirohiko Mori, Fiona Fui-Hoon Nah, Stavroula Ntoa, Pei-Luen Patrick Rau, Dylan Schmorrow, Keng Siau, Norbert Streitz, Wentao Wang, Sakae Yamamoto, Panayiotis Zaphiris, Jia Zhou
As autonomous intelligent agents make increasingly complex and important ethical decisions, humans will need to know that their decisions are trustworthy and ethically justified (Alaieri & Vellino, 2016). Therefore, transparency is a requirement (see also Section 2.2.1), so that humans can understand, predict, and appropriately trust AI, whether it is manifested as traceability, verifiability, non-deception, or intelligibility (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019). Intelligible AI, in particular, will further help humans identify AI mistakes and will also facilitate meaningful human control (Weld & Bansal, 2019). Nevertheless, depending on how the explanations are used, a balance needs to be struck in the level of detail, because full transparency may be overwhelming in certain cases, while too little transparency may jeopardize human trust in AI (Chen et al., 2018; Yu et al., 2018). At the same time, system transparency and the knowledge that AI decisions follow ethics will influence human-AI interaction dynamics, giving some people the opportunity to adapt their behaviors in order to render AI systems unable to achieve their design objectives (Yu et al., 2018).