Artificial Intelligence in Concrete Materials
Published in M.Z. Naser, Leveraging Artificial Intelligence in Engineering, Management, and Safety of Infrastructure, 2023
Zhanzhao Li, Aleksandra Radlińska
Many AI models in concrete research have been treated as “black boxes”, especially neural networks, which often provide accurate predictions at the cost of high model complexity. This lack of interpretability is one challenge that hinders the systematic adoption of AI models in the construction industry (Naser, 2021). Limited research effort has been devoted to unraveling how or why a model predicts the way it does (i.e., the cause-and-effect relationship). Beyond building an accurate model, it is crucial to gain insights and knowledge from the model in the concrete domain: testable hypotheses derived from interpretable models can then be cross-validated through proposed experiments. Interpretability has been an active area of research in computer science (Lipton, 2018; Molnar, 2019; Molnar et al., 2020; Montavon et al., 2017). Moving forward, AI models that offer scientific understanding will become increasingly desirable in the concrete research field.
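As a purely illustrative sketch (not from the chapter; the data are synthetic and the column names and hyperparameters are assumptions), the snippet below shows one way such insight can be extracted from a black-box model: a neural network is fitted to a toy concrete-mix dataset and then probed with scikit-learn's model-agnostic permutation importance to rank which mix variables drive its strength predictions.

```python
# Hypothetical sketch: probing a "black-box" concrete-strength model with
# permutation feature importance. All data and settings are illustrative.
import numpy as np
import pandas as pd
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "cement":    rng.uniform(150, 500, n),  # kg/m^3
    "water":     rng.uniform(120, 220, n),  # kg/m^3
    "age_days":  rng.uniform(1, 90, n),
    "admixture": rng.uniform(0, 15, n),     # kg/m^3
})
# Toy strength target: falls with the water/cement ratio, rises with age.
y = (120 * np.exp(-2.5 * X["water"] / X["cement"])
     + 8 * np.log1p(X["age_days"])
     + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

# Shuffle each input in turn and record how much held-out R^2 drops:
# a model-agnostic view of which mix variables the network relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:10s} {score:.3f}")
```

A ranking of this kind is the sort of output that can suggest testable hypotheses, for example that the water-to-cement ratio dominates the predicted strength.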
Ethical rules
Published in Vahap Tecim, Sezer Bozkus Kahyaoglu, Artificial Intelligence Perspective for Smart Cities, 2023
According to Perrault et al.'s (2019) review of documents on AI ethics produced in 2016–2019 by various institutions and societies, fairness, interpretability and explainability, transparency, accountability, and data privacy and security were among the main ethical considerations in machine learning models. The European Commission (2019a) also stated that, for AI to be trustworthy, it should meet seven requirements: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being, and (7) accountability. Figure 11.1 shows some of the significant ethical considerations of AI in smart cities highlighted in the literature (Ahmad et al., 2021; Yigitcanlar et al., 2020; Clever et al., 2018), which are summarised as follows.
Population-Specific and Personalized (PSP) Models of Human Behavior for Leveraging Smart and Connected Data
Published in Kuan-Ching Li, Beniamino DiMartino, Laurence T. Yang, Qingchen Zhang, Smart Data, 2019
Theodora Chaspari, Adela C. Timmons, Gayla Margolin
Interpretable computational models can help users understand the inner mechanisms of the algorithms and find explanations for the predicted outcomes, thereby increasing their trust and confidence in the decisions of automated systems. However, the interpretability of a model is highly dependent on its complexity: the more complex a system, the less interpretable it tends to be. While a significant amount of research has assessed the interpretability of PSP systems for understanding diversity in human behavior [27, 20, 21, 17], more work is needed to maximize their usefulness for the life sciences. Such findings could shed light on unexplored facets of human behavior and could eventually be used to develop novel behavioral and therapeutic interventions for improving mental and physical health outcomes in at-risk and vulnerable populations.
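The complexity/interpretability trade-off mentioned above can be made concrete with a small, hedged example (entirely synthetic, not drawn from the cited PSP studies): a depth-two decision tree whose full decision logic can be printed as a handful of rules is contrasted with a random forest that typically predicts better but offers no comparably compact explanation.

```python
# Illustrative sketch of the complexity/interpretability trade-off.
# The dataset is synthetic; sizes and parameters are arbitrary assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

shallow_tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("shallow tree accuracy: ", round(shallow_tree.score(X_te, y_te), 3))
print("random forest accuracy:", round(forest.score(X_te, y_te), 3))

# The shallow model's entire decision logic fits in a few readable rules;
# the forest's 300 trees admit no comparably compact description.
print(export_text(shallow_tree, feature_names=[f"x{i}" for i in range(8)]))
```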
The Innovation of Ideological and Political Education Integrating Artificial Intelligence Big Data with the Support of Wireless Network
Published in Applied Artificial Intelligence, 2023
Bias: AI algorithms can be biased if they are trained on biased data, leading to unfair or discriminatory results (see the sketch after this list).
Interpretability: Some AI algorithms can be difficult to interpret, making it hard to understand how they arrive at their conclusions.
Lack of context: AI algorithms may not be able to take into account the broader context of a problem, leading to errors or incorrect conclusions.
Dependence on data quality: AI algorithms depend on high-quality data to achieve accurate results, and low-quality data can lead to poor performance.
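The sketch below, a synthetic illustration and not part of the original article, shows the bias point in miniature: a classifier trained on data in which one group is heavily under-represented (and distributed differently) ends up noticeably less accurate for that group. All group definitions, sizes, and thresholds here are assumptions.

```python
# Hedged illustration: skewed training data yields group-dependent accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Features centered at `shift`; the true decision boundary shifts with them."""
    X = rng.normal(shift, 1.0, size=(n, 3))
    y = (X.sum(axis=1) + rng.normal(0, 0.5, n) > shift * 3).astype(int)
    return X, y

# Group A dominates the training data; group B is scarce and distributed differently.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
g = np.array([0] * len(ya) + [1] * len(yb))  # group membership

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# The learned boundary fits the majority group; the minority group suffers.
for label, grp in [("group A", 0), ("group B", 1)]:
    mask = g_te == grp
    print(label, "accuracy:", round(clf.score(X_te[mask], y_te[mask]), 3))
```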
Transparency and trust in artificial intelligence systems
Published in Journal of Decision Systems, 2020
Philipp Schmidt, Felix Biessmann, Timm Teubner
Most recently, the machine learning community as well as the public debate have turned towards the comprehensibility of an AI’s decisions, including aspects such as transparency, traceability, and hence interpretability (Grzymek & Puntschuh, 2019; Koene et al., 2019; Rohde, 2018). The underlying idea behind this stream of research is to foster trust in AI systems by rendering them more transparent, an approach often referred to as explainable AI, or XAI. While a substantial body of literature is dedicated to new methods of interpretability (Guidotti et al., 2018; Samek, 2019), the relationship between trust and the transparency of an AI’s decisions remains underrepresented in this research, and the driving and inhibiting factors of trust in AI are yet to be better understood. Specifically, we argue that there are cases in which transparency can actually have a detrimental impact on trust in AI. This can, in turn, lead to suboptimal usage of AI, with potential ramifications for decision makers employing such technology but, at least equally importantly, also for persons affected by the outcomes (e.g., patients or defendants on trial; Yong, 2018). In this paper, we thus consider the following overarching research question: RQ: How does insight into an ML-based decision support tool affect human decision makers’ trust in its predictions?
Adding Interpretability to Neural Knowledge DNA
Published in Cybernetics and Systems, 2022
Junjie Xiao, Tao Liu, Haoxi Zhang, Edward Szczerbicki
Consequently, the interpretability of AI algorithms has become an urgent problem (Tjoa and Guan 2020): who is responsible if something goes wrong? Can we explain why things go wrong? And if everything goes well, do we know why, and how to leverage the models further? Many papers have proposed methods and frameworks for achieving interpretability, and explainable artificial intelligence (XAI) is now a hot topic in AI research. Moreover, the introduction of interpretability evaluation criteria (such as causality, availability, and reliability) enables AI researchers and engineers to track the logic and decision-making procedures of algorithms and provides guidance for further improvement and development of AI systems (Tonekaboni et al. 2019).
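As a minimal, hedged sketch of what a post-hoc XAI explanation can look like (a LIME-style local surrogate written from scratch; it is not the specific method of the cited works, and every dataset and parameter choice below is an assumption), one prediction of a black-box classifier is approximated by a weighted linear model fitted on perturbed copies of the instance, whose coefficients indicate each feature's local effect.

```python
# Minimal LIME-style sketch: explain one black-box prediction with a
# locally weighted linear surrogate. Data and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=800, n_features=6, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

x0 = X[0]                                    # the instance to explain
rng = np.random.default_rng(1)
Z = x0 + rng.normal(0, X.std(axis=0) * 0.3,  # perturbations around x0
                    size=(500, X.shape[1]))
p = black_box.predict_proba(Z)[:, 1]         # black-box outputs on the neighborhood
w = np.exp(-np.linalg.norm(Z - x0, axis=1))  # weight nearby samples more heavily

# Fit a simple, readable surrogate that mimics the black box locally.
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local effect {coef:+.3f}")
```

Dedicated libraries such as LIME or SHAP implement more careful versions of the same idea; the from-scratch surrogate is used here only to keep the sketch self-contained.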