Software and Technology Standards as Tools
Published in Jim Goodell, Janet Kolodner, Learning Engineering Toolkit, 2023
Jim Goodell, Andrew J. Hampton, Richard Tong, Sae Schatz
Intelligence augmentation is the concept of humans and artificially intelligent software agents working together to perform some task. That task could be helping the human learn something or otherwise augmenting the human’s skills to accomplish tasks more effectively than either the person or AI agent could do on their own. It’s likely that more and more of the work that humans do will be augmented by intelligent software agents. Time spent learning versus time spent doing will increasingly overlap. People will learn while doing, and the lines will blur between task automation and learning engineering. There could be a growing role for learning engineering to create and optimize learning experiences within the context of learning while doing, with intelligence augmentation playing the roles of teacher, coach, mentor, peer, and assistant.
Intelligent Agents
Published in Satya Prakash Yadav, Dharmendra Prasad Mahato, Nguyen Thi Dieu Linh, Distributed Artificial Intelligence, 2020
Rashi Agarwal, Supriya Khaitan, Shashank Sahu
Evolving software systems are designed and developed using software agents. A software agent is a software component capable of performing tasks on behalf of another entity, which can be software, hardware, or a human user. Agents reflect autonomy and intelligence in their behavior. However, an agent in its basic form may only perform pre-defined tasks such as data collection and transmission, leaving little room for autonomy and intelligence. Another dimension of agent-hood is sociality, which refers to interaction and collaboration with other agents and non-agent entities. An evolving software agent is a software program that can learn changes that occur in the requirements themselves in order to fulfill user needs. Evolving agents (also called intelligent agents) are gaining widespread application in trading markets (Sueyoshi & Tadiparthi, 2007). An agent is a software component that perceives the outside world/environment and acts accordingly. Software developed with the help of agents is adaptive and able to adjust itself to changes in the environment. Software developed using agents provides a new way of developing programs, one suited to open environments such as distributed domains, electronic commerce, and web-based systems. Agents can learn the requirements of users autonomously and cooperate with other agents to fulfill those requirements.
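To make the perceive-and-act cycle concrete, the following minimal sketch shows a simple sense-decide-act agent whose rule set can evolve as requirements change. The class and method names (SoftwareAgent, perceive, act, learn) and the threshold rule are illustrative assumptions, not drawn from the excerpt.

```python
# Minimal sense-decide-act agent sketch (illustrative; names are not from the excerpt).
from typing import Any, Callable, Dict


class SoftwareAgent:
    """A basic agent: perceives its environment, decides, acts, and adapts its rules."""

    def __init__(self, rules: Dict[str, Callable[[Dict[str, Any]], Any]]):
        self.rules = rules  # condition name -> action; can evolve over time

    def perceive(self, environment: Dict[str, Any]) -> str:
        # Reduce the raw environment to a condition label the agent understands.
        return "threshold_exceeded" if environment.get("sensor", 0) > 10 else "normal"

    def act(self, environment: Dict[str, Any]) -> Any:
        condition = self.perceive(environment)
        action = self.rules.get(condition, lambda env: None)
        return action(environment)

    def learn(self, condition: str, action: Callable[[Dict[str, Any]], Any]) -> None:
        # "Evolving" behaviour: new or changed requirements become new rules.
        self.rules[condition] = action


agent = SoftwareAgent({"normal": lambda env: "log"})
agent.learn("threshold_exceeded", lambda env: "raise_alarm")  # requirement changed
print(agent.act({"sensor": 42}))  # -> raise_alarm
```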
Distributed Artificial Intelligence and Agents
Published in Weiming Shen, Douglas H. Norrie, Jean-Paul A. Barthès, Multi-Agent Systems for Concurrent Intelligent Design and Manufacturing, 2019
Weiming Shen, Douglas H. Norrie, Jean-Paul A. Barthès
For authors like Nwana (1996): “Software agents have evolved from multi-agent systems (MAS), which in turn form one of the three broad areas which fall under DAI, the other two being Distributed Problem Solving and parallel AI. Hence, as with multi-agent systems, they inherit many of DAI’s motivations, goals and potential benefits.” For Brenner et al. (1998), agents can be classified as follows: “At the highest level, three categories of agents can be distinguished: human agents, hardware agents and software agents. [...] Intelligent software agents are defined as being a software program that can perform specific tasks for a user and possesses a degree of intelligence that permits it to perform parts of its tasks autonomously and to interact with its environment in a useful manner.” For Jennings and Wooldridge (1998), “an intelligent agent is a computer system that is capable of flexible autonomous action in order to meet its design objectives. By flexible we mean that the system must be responsive [...], proactive [...], and social [...].”
Real-time multi-agent fleet management strategy for autonomous underground mines vehicles
Published in International Journal of Mining, Reclamation and Environment, 2023
M. Gamache, G. Basilico, J.-M. Frayret, D. Riopel
Therefore, to achieve these objectives, we designed a multi-agent system for managing the fleet of vehicles in an underground mine (MA-FMS). A multi-agent system is usually described with three fundamental components: a set of roles and responsibilities (assigned to one or many software agents), a set of behaviours (associated with each role), and a set of interaction mechanisms describing how agents interact in specific situations. In other words, a multi-agent system is a distributed software system made of autonomous, reactive, and social/interactive software agents, capable of sensing their environment and reacting without the intervention of a human user, but also capable of working together to achieve some goals. Software agents can also be proactive and plan their own actions to maximise some utility function or achieve some goals (e.g. reach a particular state). The challenge in designing such a distributed system is to ensure that the overall system’s performance and emerging behaviour achieve the design objectives.
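As a rough illustration of the three components named above, the sketch below models roles as agent classes, behaviours as methods bound to a role, and the interaction mechanism as a shared message queue. The vehicle/dispatcher names and messages are hypothetical and do not reproduce the authors’ MA-FMS design.

```python
# Illustrative sketch: roles (agent classes), behaviours (methods bound to a role),
# and an interaction mechanism (a shared message queue). Names are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    sender: str
    receiver: str
    content: str


class MessageBus:
    """Interaction mechanism: agents communicate only through messages."""
    def __init__(self):
        self.queue: List[Message] = []

    def send(self, msg: Message) -> None:
        self.queue.append(msg)

    def deliver(self, receiver: str) -> List[Message]:
        inbox = [m for m in self.queue if m.receiver == receiver]
        self.queue = [m for m in self.queue if m.receiver != receiver]
        return inbox


class VehicleAgent:
    """Role: autonomous vehicle. Behaviour: request a route when idle."""
    def __init__(self, name: str, bus: MessageBus):
        self.name, self.bus = name, bus

    def step(self) -> None:
        self.bus.send(Message(self.name, "dispatcher", "request_route"))


class DispatcherAgent:
    """Role: dispatcher. Behaviour: react to route requests with assignments."""
    def __init__(self, bus: MessageBus):
        self.bus = bus

    def step(self) -> None:
        for msg in self.bus.deliver("dispatcher"):
            self.bus.send(Message("dispatcher", msg.sender, "route_A_to_B"))


bus = MessageBus()
truck, dispatcher = VehicleAgent("truck_1", bus), DispatcherAgent(bus)
truck.step()
dispatcher.step()
print(bus.deliver("truck_1"))  # -> [Message(sender='dispatcher', receiver='truck_1', content='route_A_to_B')]
```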
RADAR: automated task planning for proactive decision support
Published in Human–Computer Interaction, 2020
Sachin Grover, Sailik Sengupta, Tathagata Chakraborti, Aditya Prasad Mishra, Subbarao Kambhampati
Human-Computer Interaction (HCI) is thought to have developed as a sub-field in three different areas: management information systems, computer science, and human factors (Grudin, 2011). While human factors evolved to understand the behavioral effects of agents across different interfaces, management information systems and computer science worked on various ways of designing these interactions. In the past, two of the most common methods of interaction were direct manipulation and interface agents (Shneiderman & Maes, 1997). While direct manipulation occurs when the interface changes only in response to the user’s instructions, interface agents are assumed to possess more intelligence and to adapt by themselves, behaving like collaborators (Maes, Shneiderman, & Miller, 1997). For example, software that classifies news or e-mail as relevant or not, in the context of a specific user, can be thought of as an intelligent software agent. In this paper, we look at a specific intelligent software agent that provides decision support to a user and reduces their cognitive and information overload in sequential decision-making tasks.
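The following minimal sketch illustrates the kind of interface agent mentioned above: a relevance filter that adapts to the user’s feedback. The keyword-weight scheme and the RelevanceAgent name are illustrative assumptions and are not part of the RADAR system.

```python
# Illustrative interface agent: scores e-mail relevance and adapts to the user's
# feedback. The scoring scheme and names are assumptions, not the RADAR system.
from collections import defaultdict
from typing import Dict


class RelevanceAgent:
    def __init__(self):
        # Per-word weights learned from the user's accept/reject feedback.
        self.weights: Dict[str, float] = defaultdict(float)

    def score(self, message: str) -> float:
        return sum(self.weights[w] for w in message.lower().split())

    def is_relevant(self, message: str, threshold: float = 1.0) -> bool:
        return self.score(message) >= threshold

    def feedback(self, message: str, relevant: bool) -> None:
        # Adapt "by itself, behaving like a collaborator": reinforce or penalise
        # the words of messages the user marks as relevant or irrelevant.
        delta = 1.0 if relevant else -1.0
        for word in message.lower().split():
            self.weights[word] += delta


agent = RelevanceAgent()
agent.feedback("quarterly project deadline update", relevant=True)
agent.feedback("win a free prize now", relevant=False)
print(agent.is_relevant("project deadline moved"))  # -> True
```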
(AIAM2019) Artificial Intelligence in Software Engineering and inverse: Review
Published in International Journal of Computer Integrated Manufacturing, 2020
Mohammad Shehab, Laith Abualigah, Muath Ibrahim Jarrah, Osama Ahmad Alomari, Mohammad Sh. Daoud
Wooldridge (1997) intensively studied agent-based SE. He stated that “agents are simply software components that must be designed and implemented in much the same way that other software components are.” Software agents are encapsulated entities situated in a certain environment, aiming to achieve their design objectives and possessing a high degree of flexibility and autonomy in that environment. The author highlighted the issues involved in building software as a multi-agent system. A roadmap was set out for agent-based SE, in which the fundamental issues considered for agent-based systems were specification, implementation/refinement, and verification (including testing and debugging). The article argued that a software agent should exhibit some principal characteristics, namely reactive and proactive social behaviors. Thus, an agent should have the following key properties: (1) Autonomy: agents are identifiable entities that decide without external intervention from other systems or humans. (2) Reactivity: agents are embedded in a certain environment (such as a collection of other agents, the physical world, the Internet, a user via a graphical user interface, or a combination of these); they can perceive this environment (at least to some extent) and react to changes in it. (3) Pro-activeness: agents do not simply react to changes in the environment; they also exhibit goal-directed behavior by taking the initiative. (4) Social ability: agents can cooperate with one another and engage in social activities to fulfill their design objectives. Therefore, an agent-embedded model is user-friendly, intuitive, adaptive, and flexible.
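As a rough illustration, the sketch below maps the four properties onto a single agent loop: autonomy (the agent owns its goal), reactivity (responding to an alarm), pro-activeness (goal-directed action toward a target state), and social ability (posting a request to other agents). All names and environment keys are assumptions made for the example, not from the reviewed article.

```python
# Illustrative sketch mapping the four agent properties onto one decision loop.
# GoalAgent, the environment keys, and the outgoing message are all assumptions.
from typing import Any, Dict, List


class GoalAgent:
    def __init__(self, goal_temperature: float):
        self.goal = goal_temperature   # autonomy: the agent owns its objective
        self.outbox: List[str] = []    # social ability: messages to other agents

    def step(self, environment: Dict[str, Any]) -> str:
        temp = environment.get("temperature", self.goal)

        # Reactivity: respond to a perceived change in the environment.
        if environment.get("fire_alarm"):
            self.outbox.append("request_evacuation_support")  # social ability
            return "shut_down"

        # Pro-activeness: take the initiative to move toward the goal state,
        # rather than merely responding to external commands.
        if temp < self.goal:
            return "heat"
        if temp > self.goal:
            return "cool"
        return "idle"


agent = GoalAgent(goal_temperature=21.0)
print(agent.step({"temperature": 18.0}))                      # -> heat (goal-directed)
print(agent.step({"temperature": 21.0, "fire_alarm": True}))  # -> shut_down (reactive)
print(agent.outbox)                                           # -> ['request_evacuation_support']
```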