An Overview of Explainable Artificial Intelligence (XAI) from a Modern Perspective
Published in Mohamed Lahby, Utku Kose, Akash Kumar Bhoi, Explainable Artificial Intelligence for Smart Cities, 2021
Ana Carolina Borges Monteiro, Reinaldo Padilha França, Rangel Arthur, Yuzo Iano
Explainable Artificial Intelligence (XAI) is artificial intelligence programmed to explain its own purpose and the rationale behind its decision process in a way that the average user can understand. In practice, XAI offers important information about how an AI program makes decisions: the program’s strengths and weaknesses; the specific criteria it uses to reach a result; the appropriate level of confidence for different types of decisions; the reason a particular decision was made over other options; the errors the intelligent system is vulnerable to; and even how those errors can be corrected (Longo et al., 2020).
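To make the kinds of information listed above concrete, here is a minimal, illustrative sketch (not taken from the chapter): it surfaces a model's confidence for a single decision and the criteria it relies on, using scikit-learn's permutation importance on a synthetic dataset. The dataset, model, and feature indices are assumptions for demonstration only.

```python
# Illustrative sketch (not from the chapter): a simple model-agnostic view of
# a classifier's confidence and decision criteria. All data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Appropriate level of confidence": class probabilities for one decision.
proba = model.predict_proba(X_test[:1])[0]
print(f"predicted class {proba.argmax()} with confidence {proba.max():.2f}")

# "Specific criteria used": which features most affect held-out performance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```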
Explainable Artificial Intelligence Improves Human Decision-Making: Results from a Mushroom Picking Experiment at a Public Art Festival
Published in International Journal of Human–Computer Interaction, 2023
Benedikt Leichtmann, Andreas Hinterreiter, Christina Humer, Marc Streit, Martina Mara
Explainable Artificial Intelligence (XAI) is a promising line of research with respect to trust calibration, as it enables explaining the decision of an AI system to users (see, e.g., Ehsan et al., 2021; Long & Magerko, 2020; Miller, 2019). While user research in the domain of XAI is growing, there is also a high demand for human-centered design, and the analysis of user behavior is crucial in this endeavor (see, e.g., Ehsan et al., 2021; Ehsan et al., 2022). Most user studies are conducted as online experiments with a high variance of decision tasks, and thus with varying relevance to users and varying vulnerability (as discussed in Leichtmann et al., 2023). However, the vulnerability of the task is an important prerequisite in trust research (e.g., Hannibal et al., 2021), and the closeness of the situation to real-world human–AI interaction is favorable for the ecological validity of the research results (Kenny et al., 2021; Zhang et al., 2020). Thus, XAI research is needed that (i) evaluates XAI methods in user studies from a human-centered perspective, (ii) is based on use cases that put humans in a vulnerable position, and (iii) is closer to real-world situations.
Interpretable Models for the Potentially Harmful Content in Video Games Based on Game Rating Predictions
Published in Applied Artificial Intelligence, 2022
Explainable artificial intelligence (XAI) is a relatively new technique that explains the underlying processes in ML models in a way that humans can understand (Barredo Arrieta et al. 2020). Various studies have started to take advantage of this technique. In experimental studies, Parsa et al. (Parsa et al. 2020) leveraged the XAI technique to explain the occurrence of traffic accidents using several types of real-time data, including traffic, network, demographic, land use, and weather features. Chakraborty et al. (Chakraborty, Başağaoğlu, and Winterle 2021) employed the XAI technique to explain the inflection points in the climate predictors of hydro-climatological data sets. The XAI technique has also been utilized in the medical field. For example, it has delineated the area of tumor tissue in patches extracted from histological images (Palatnik de Sousa, Rebuzzi Vellasco, and Da Silva 2019) and explained the occurrence of Parkinson’s disease in a public data set of 642 brain images of Parkinson’s patients (Magesh, Myloth, and Tom 2020). Although previous studies have demonstrated the promise of the XAI technique for interpretability analysis, no study has used it to examine video games. Current research attempts to obtain metrics with the highest prediction accuracy (Alomari et al. 2019) but lacks a thorough analysis of the harmful content in video games. The absence of game studies focusing on explainability has also been raised in previous review studies (Barredo Arrieta et al. 2020; Tjoa and Guan 2020). Our study addresses these research omissions identified in previous experimental and review papers.
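As a hedged illustration of this kind of interpretability analysis (not the paper's actual pipeline), the sketch below trains a simple, inherently interpretable text classifier on a toy set of content descriptions and reads off which terms push a prediction toward a "mature" rating. The corpus, labels, and model choice are assumptions for demonstration only.

```python
# Illustrative sketch (not the paper's pipeline): inspecting which terms drive
# a toy game-rating classifier. Corpus and labels are made up for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

descriptions = [
    "intense violence and blood",
    "cartoon fantasy adventure for all ages",
    "strong language and drug references",
    "puzzle game with mild cartoon violence",
]
labels = [1, 0, 1, 0]  # 1 = mature content, 0 = everyone (toy labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(descriptions)
clf = LogisticRegression().fit(X, labels)

# Terms with the largest positive weights push predictions toward "mature".
terms = np.array(vectorizer.get_feature_names_out())
top = np.argsort(clf.coef_[0])[::-1][:5]
for t in top:
    print(f"{terms[t]}: weight {clf.coef_[0][t]:+.2f}")
```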
Model predictive lighting control for a factory building using a deep deterministic policy gradient
Published in Journal of Building Performance Simulation, 2022
Young Sub Kim, Han Sol Shin, Cheol Soo Park
The results confirm that an illuminance prediction model capable of reliable real-time estimates can be a useful substitute for photosensor-based closed-loop controls. In addition, DDPG control optimization demonstrates high energy-saving potential for electric lighting dimming control. The following further studies are currently ongoing: converting the DDPG control into a set of minimalistic rules so that it can be applied to more generic cases in industry, and applying explainable artificial intelligence (XAI) using a classification algorithm or decision tree, which will help facility managers in their daily practice.
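A minimal sketch of the rule-extraction idea mentioned above, under the assumption of a synthetic stand-in policy (not the paper's trained DDPG actor): fit a shallow decision tree to (state, action) pairs sampled from the policy and print its rules. The feature names and the stand-in policy are illustrative assumptions.

```python
# Minimal sketch: approximate a learned dimming policy with a shallow decision
# tree so its logic can be read as rules. The "policy" below is a stand-in,
# not the trained DDPG agent from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
# Hypothetical states: [outdoor illuminance (lx), occupancy (0/1), hour of day]
states = np.column_stack([
    rng.uniform(0, 20000, 1000),
    rng.integers(0, 2, 1000),
    rng.uniform(0, 24, 1000),
])

def policy(s):
    # Stand-in for the trained actor: dimming level in [0, 1], lower when
    # daylight is abundant, zero when the space is unoccupied.
    return np.clip(1.0 - s[:, 0] / 20000, 0, 1) * s[:, 1]

actions = policy(states)

# A shallow surrogate tree yields a small, human-readable rule set.
surrogate = DecisionTreeRegressor(max_depth=3).fit(states, actions)
print(export_text(surrogate, feature_names=["outdoor_lux", "occupied", "hour"]))
```

Keeping the surrogate tree shallow trades some fidelity to the original policy for a rule set short enough for facility managers to audit and apply directly.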