Learning Probabilistic Networks from Data
Published in Takushi Tanaka, Setsuo Ohsuga, Moonis Ali, Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, 2022
A Bayesian network can be transformed into an equivalent Markov network by including so-called moral edges and dropping edge directions. A moral edge is an edge obtained by connecting two unconnected parent nodes that have a common child node. For example, by connecting the vertices X2 and X4 in Figure 1(a) we obtain the moral edge (X2, X4). The moral graph can then be transformed into a triangulated graph by including fill-in edges so that the condition for a triangulated graph is satisfied. The key property of a probabilistic network is that missing links can be interpreted in terms of conditional independence. We say that two variables X and Y are conditionally independent given the variable Z if P(x|y, z) = P(x|z), where x, y, z range over all possible values of X, Y, Z, respectively. In a probabilistic network, a missing link between two vertices Xi and Xj implies that Xi and Xj are conditionally independent given some subset of the remaining variables.
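As a concrete illustration of moralization, the minimal Python sketch below connects every pair of unconnected parents that share a child and drops arc directions. The adjacency structure is a hypothetical one chosen to match the text's example (X2 and X4 as unconnected parents of a common child), not the actual Figure 1(a):

```python
from itertools import combinations

def moralize(parents):
    """Moralize a Bayesian network given as {node: set_of_parents}.

    Returns the edge set of the equivalent undirected moral graph:
    every directed arc loses its direction, and every pair of parents
    with a common child is joined by a moral edge.
    """
    edges = set()
    for child, pars in parents.items():
        # Drop directions: each parent->child arc becomes an undirected edge.
        for p in pars:
            edges.add(frozenset((p, child)))
        # Add moral edges between parents that share this child.
        for p, q in combinations(sorted(pars), 2):
            edges.add(frozenset((p, q)))
    return edges

# Hypothetical structure in the spirit of the example: X2 and X4 are
# unconnected parents of X3, so moralization adds the moral edge (X2, X4).
bn = {"X1": set(), "X2": {"X1"}, "X4": set(), "X3": {"X2", "X4"}}
print(sorted(tuple(sorted(e)) for e in moralize(bn)))
```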
Signature Generation Algorithms for Polymorphic Worms
Published in Mohssen Mohammed, Al-Sakib Khan Pathan, Automatic Defense Against Zero-day Polymorphic Worms in Communication Networks, 2016
The main statistical property represented explicitly by the graph is conditional independence between variables. We say that X and Y are conditionally independent given Z if P(X,Y|Z) = P(X|Z)P(Y|Z) for all values of the variables X, Y, and Z where these quantities are defined [i.e., except settings z where P(Z = z) = 0]. We use the notation X⊥Y|Z to denote the conditional independence relation. Conditional independence generalizes to sets of variables in the obvious way, and it is different from marginal independence, which states that P(X,Y) = P(X)P(Y) and is denoted X⊥Y.
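To make the definition concrete, here is a minimal sketch (Python; the toy binary distribution is our own assumption, not from the excerpt) that verifies P(X,Y|Z) = P(X|Z)P(Y|Z) on an explicit joint table while marginal independence fails:

```python
import numpy as np

# Toy joint distribution P(X, Y, Z) over binary variables, constructed
# so that X and Y are conditionally independent given Z:
# P(x, y, z) = P(z) * P(x|z) * P(y|z).
p_z = np.array([0.3, 0.7])
p_x_given_z = np.array([[0.9, 0.1], [0.2, 0.8]])  # rows indexed by z
p_y_given_z = np.array([[0.6, 0.4], [0.5, 0.5]])

joint = np.einsum("z,zx,zy->xyz", p_z, p_x_given_z, p_y_given_z)

for z in range(2):
    p_xy_z = joint[:, :, z] / joint[:, :, z].sum()      # P(X, Y | Z=z)
    p_x_z = p_xy_z.sum(axis=1)                          # P(X | Z=z)
    p_y_z = p_xy_z.sum(axis=0)                          # P(Y | Z=z)
    assert np.allclose(p_xy_z, np.outer(p_x_z, p_y_z))  # X is independent of Y given Z=z

# Marginal independence fails here: P(X, Y) != P(X) P(Y).
p_xy = joint.sum(axis=2)
print(np.allclose(p_xy, np.outer(p_xy.sum(axis=1), p_xy.sum(axis=0))))  # False
```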
Sensor- and Recognition-Based Input for Interaction
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
Often, there are advantages in treating some subset of the variables as conditionally independent of the others. For example, a full joint probability distribution can require a lot of data to train; there may be clear constraints from the application that imply conditional independence; and there may be some subset of the variables that is most effectively modeled with one technique while the rest are best modeled with another. In such cases, it may be helpful to selectively apply conditional independence to break the problem into smaller pieces. For example, we might take P(x|C) = P(x1, x2|C) P(x3|C) for a 3D feature space, model P(x1, x2|C) with a mixture of Gaussians, and P(x3|C) as a histogram. This overall model amounts to an assertion of conditional independence between x3 and the joint space of x1 and x2.
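A minimal sketch of that factorization (Python with scikit-learn; the synthetic class-conditional data is our own assumption) fits a Gaussian mixture to the joint (x1, x2) space, a histogram to x3, and multiplies the two densities:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training data for one class C: (x1, x2) jointly distributed,
# x3 treated as conditionally independent of them given C.
x12 = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 0.5, (200, 2))])
x3 = rng.exponential(1.0, 400)

# P(x1, x2 | C): mixture of Gaussians over the joint (x1, x2) space.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x12)

# P(x3 | C): normalized histogram over x3 alone.
hist, edges = np.histogram(x3, bins=20, density=True)

def density(x1, x2, x3_val):
    """P(x | C) = P(x1, x2 | C) * P(x3 | C), per the CI assertion above."""
    p12 = np.exp(gmm.score_samples([[x1, x2]]))[0]
    i = np.clip(np.searchsorted(edges, x3_val) - 1, 0, len(hist) - 1)
    return p12 * hist[i]

print(density(0.0, 0.0, 0.5))
```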
A sparse expansion for deep Gaussian processes
Published in IISE Transactions, 2023
Liang Ding, Rui Tuo, Shahin Shahrampour
In this work, we consider DGPs with Markov structure and, based on that structure, design an accurate and efficient sparse expansion for DGPs. Sidén and Lindsten (2020) propose compositions of discrete Gaussian Markov random fields on a graph and call deep graphical models of this structure Deep Gaussian Markov Random Fields (DGMRFs). We must point out that our DTMGP is essentially different from a DGMRF because DTMGPs are not deep graphical models. More specifically, a DGMRF treats every activation as a random variable and imposes a graphical Markov structure on these activations, i.e., any two non-adjacent randomly distributed activations are conditionally independent given all other activations. In contrast, every activation in a DTMGP is one of the orthonormal basis functions of a continuous Gaussian Markov random field, and these orthonormal basis functions constitute a so-called hierarchical expansion of the field.
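For intuition on the pairwise Markov property invoked here, the sketch below (Python; the small chain-graph precision matrix is our own assumption, not from the paper) shows how a zero entry in a Gaussian MRF's precision matrix corresponds to conditional independence of two non-adjacent variables given the rest:

```python
import numpy as np

# Precision matrix Q of a zero-mean Gaussian MRF on the chain 0 - 1 - 2.
# Q[0, 2] == 0 encodes: X0 and X2 are conditionally independent given X1.
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
cov = np.linalg.inv(Q)

# Marginally, X0 and X2 are correlated ...
print("marginal cov(X0, X2):", cov[0, 2])

# ... but their partial correlation given X1 vanishes:
# rho_{ij | rest} = -Q[i, j] / sqrt(Q[i, i] * Q[j, j])
partial = -Q[0, 2] / np.sqrt(Q[0, 0] * Q[2, 2])
print("partial corr(X0, X2 | X1):", partial)  # exactly 0
```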
Analysis of risk factors affecting delay of high-speed railway in China based on Bayesian network modeling
Published in Journal of Transportation Safety & Security, 2022
Jing Wang, Yichuan Peng, Jian Lu, Yuming Jiang
Geiger and Pearl (1993) proved that all conditional independence relationships can be discovered from the topology of a Bayesian network using a criterion called directed separation (d-separation). Zhao et al. (2012) proposed an algorithm to test the conditional independence relation between a pair of nodes: for each arc between two nodes in the graph, if there is another path between the two nodes, temporarily remove this arc and find the cut-set (Cheng, Greiner, Kelly, Bell, & Liu, 2002) that d-separates the two nodes in the revised graph. Next, use a conditional independence test to check whether the two nodes are conditionally independent given the new cut-set. If they are conditionally independent, remove the arc permanently; otherwise, add the arc back to the graph.
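A schematic of that arc-pruning loop might look as follows (Python; `find_cut_set`, `ci_test`, and the graph interface are hypothetical placeholders standing in for the cut-set search of Cheng et al. (2002) and a data-driven CI test, not an implementation from the paper):

```python
def prune_arcs(graph, data):
    """Remove arcs whose endpoints turn out to be conditionally independent.

    graph: a directed graph with .arcs(), .remove_arc(), .add_arc(), and
           .has_other_path() methods -- a hypothetical interface.
    """
    for (u, v) in list(graph.arcs()):
        if not graph.has_other_path(u, v):
            continue                        # arc is the only connection; keep it
        graph.remove_arc(u, v)              # remove the arc temporarily
        cut_set = find_cut_set(graph, u, v) # set that d-separates u, v in revised graph
        if ci_test(data, u, v, given=cut_set):
            continue                        # conditionally independent: drop arc for good
        graph.add_arc(u, v)                 # dependent after all: restore the arc
```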
Improving Naive Bayes for Regression with Optimized Artificial Surrogate Data
Published in Applied Artificial Intelligence, 2020
The predictive features x1 and x2 are not conditionally independent given the predictive target y, which violates naive Bayes' conditional independence assumption. Once y is known, the potential values for (x1, x2) are distributed around the perimeter of a circle centered on the origin. Thus, knowing the value of x1 limits the locations of x2 to two possible values. In contrast to LR, NBR achieves RMSE scores of 1.4 and 2.1 on the two data splits, respectively, which is a major improvement but still not ideal.
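To see why this circular geometry breaks the assumption, here is a minimal sketch (Python; the synthetic data is our own reconstruction of the kind of dataset described, not the paper's exact benchmark):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: the target y sets the circle's radius, and the
# feature pair (x1, x2) lies on that circle at a random angle.
y = rng.uniform(1.0, 3.0, 5000)
theta = rng.uniform(0.0, 2 * np.pi, 5000)
x1, x2 = y * np.cos(theta), y * np.sin(theta)

# Condition on a narrow slice of y: the radius is (almost) fixed, so
# x1 determines x2 up to sign -- x1 and x2 are far from conditionally
# independent. Squaring exposes the constraint x1^2 + x2^2 = y^2.
mask = np.abs(y - 2.0) < 0.05
print("corr(x1^2, x2^2 | y ~ 2):",
      np.corrcoef(x1[mask] ** 2, x2[mask] ** 2)[0, 1])  # close to -1
```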