Introduction to Artificial Intelligence
Published in Artificial Intelligence, 2018
Richard E. Neapolitan, Xia Jiang
Neapolitan [1989] shows that the rule-based representation of uncertain knowledge and reasoning is not only cumbersome and complex but also does not model human reasoning well. Pearl [1986] made the more reasonable conjecture that humans identify local probabilistic causal relationships between individual propositions and reason with these relationships. At the same time, researchers in decision analysis [Shachter, 1986] were developing influence diagrams, which provide a normative approach to decision making in the face of uncertainty. In the 1980s, researchers from cognitive science (e.g., Judea Pearl), computer science (e.g., Peter Cheeseman and Lotfi Zadeh), decision analysis (e.g., Ross Shachter), medicine (e.g., David Heckerman and Gregory Cooper), mathematics and statistics (e.g., Richard Neapolitan and David Spiegelhalter), and philosophy (e.g., Henry Kyburg) met at the newly formed Workshop on Uncertainty in Artificial Intelligence (now a conference) to discuss how best to perform uncertain inference in artificial intelligence. The texts Probabilistic Reasoning in Expert Systems [Neapolitan, 1989] and Probabilistic Reasoning in Intelligent Systems [Pearl, 1988] integrated many of the results of these discussions into the field we now call Bayesian networks. Bayesian networks have arguably become the standard for handling uncertain inference in AI, and many AI applications have been developed using them. Section 9.8 lists some of them.
Bayesian network modeling analysis of perceived urban rail transfer time
Published in Transportation Letters, 2021
Weixin Hua, Xuesong Feng, Chuanchen Ding, Zejing Ruan
For analyzing the MTPT in different seasons, an effective inference approach is needed that takes into account both the inherent uncertainty of the MTPT and the combined influences of multiple factors. The Bayesian network (BN), a probabilistic graphical model, is a framework for reasoning under uncertainty (Pearl 1986) and has the advantage of being more intuitively understandable than other uncertain inference methods (e.g., Dempster–Shafer theory (Inagaki 1993), possibilistic logic (Dubois, Prade, and Schockaert 2011), and fuzzy logic (Maalouf et al. 2014)), because its graphical representation and conditional probabilities make complicated relationships and interactions among factors visible. Moreover, BNs allow massive historical data to be incorporated when identifying the dependencies between multiple events, and allow the states of different factors to be updated given real-time data (Lessan, Fu, and Wen 2018). These features, integrating the influences of different factors and fusing massive data, give BNs advantages over other network-based models such as structural equation models and other artificial intelligence techniques. BNs have now been widely applied in fields including machine learning (Velikova et al. 2013), signal processing (Martino and Míguez 2010), service quality evaluation (Díez-Mesa, de Oña, and de Oña 2018; Pietro et al. 2017), travel behavior analysis (Chen et al. 2015; Li, Miwa, and Morikawa 2016; Xie and Waller 2010), and safety and risk management (Karimnezhad and Moradi 2016; Lessan, Fu, and Wen 2018; Wang and Yang 2018).
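To make the evidence-driven updating described above concrete, the following minimal pure-Python sketch builds a toy two-parent network (Weather and Crowding influencing TransferTime) and updates the belief about Weather after a long transfer time is observed. The variable names and probability values are hypothetical illustrations, not taken from the article or its data.

```python
# Toy Bayesian network: Weather -> TransferTime <- Crowding
# (hypothetical variables and probabilities for illustration only)

# Prior probabilities for the parent nodes
p_weather = {"rain": 0.3, "clear": 0.7}
p_crowding = {"high": 0.4, "low": 0.6}

# Conditional probability table: P(TransferTime = "long" | Weather, Crowding)
p_long_given = {
    ("rain", "high"): 0.9,
    ("rain", "low"): 0.6,
    ("clear", "high"): 0.5,
    ("clear", "low"): 0.1,
}

def posterior_weather(observed="long"):
    """Update the belief about Weather after observing TransferTime (Bayes' rule)."""
    joint = {}
    for w, pw in p_weather.items():
        total = 0.0
        for c, pc in p_crowding.items():
            p_long = p_long_given[(w, c)]
            p_obs = p_long if observed == "long" else 1.0 - p_long
            total += pw * pc * p_obs          # sum out the Crowding node
        joint[w] = total                      # P(Weather = w, TransferTime = observed)
    evidence_prob = sum(joint.values())       # P(TransferTime = observed)
    return {w: v / evidence_prob for w, v in joint.items()}

print(posterior_weather("long"))
```

With these illustrative numbers, observing a long transfer time raises the probability of rain from its 0.3 prior to roughly 0.54; this kind of belief update given incoming evidence is the mechanism the excerpt refers to when it describes fusing historical and real-time data.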