An Unsupervised Parametric Mixture Model for Automatic Cerebrovascular Segmentation
Published in Ayman El-Baz, Jasjit S. Suri, Cardiovascular Imaging and Image Analysis, 2018
Mohammed Ghazal, Yasmina Al Khalil, Ayman El-Baz
To estimate the parameters θk for each given class, we utilize the EM algorithm, which maximizes the likelihood of the distribution for a given set of data [41]. The EM algorithm estimates the distribution parameters by updating initial parameter estimates iteratively, until the change in the log-likelihood of the mixture distribution between successive iterations becomes negligible. Initial parameter values are found either manually or by a separate initialization procedure. Given the number of distributions or classes M, the relative contribution, or responsibility, of each intensity level q toward each Gaussian component at iteration n is given by
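The iterative scheme described above can be sketched as follows: a minimal 1-D Gaussian-mixture EM, where the `gamma` array holds the responsibility of each component for each intensity value. All names and the percentile-based initialization are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def em_gmm_1d(x, M=2, n_iter=50):
    """Fit a 1-D Gaussian mixture with M components by EM."""
    # Initialization: spread the means over the data percentiles,
    # use the global variance, and uniform mixture weights.
    mu = np.percentile(x, 100.0 * (np.arange(M) + 0.5) / M)
    var = np.full(M, x.var())
    w = np.full(M, 1.0 / M)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        Nk = gamma.sum(axis=0)
        w = Nk / len(x)
        mu = (gamma * x[:, None]).sum(axis=0) / Nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return w, mu, var

# Example: clearly bimodal synthetic data
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
w, mu, var = em_gmm_1d(x, M=2)
```

With well-separated modes the estimated means converge close to 0 and 6 and the weights close to 0.5 each; in practice, cerebrovascular intensity histograms are less well separated, which is why the initialization procedure matters.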
Traffic Load Modelling for Urban Highway Bridges using Weigh-in-Motion Data
Published in Nigel Powers, Dan M. Frangopol, Riadh Al-Mahaidi, Colin Caprani, Maintenance, Safety, Risk, Management and Life-Cycle Performance of Bridges, 2018
T.L. Huang, J.J. Liao, J. Zhong, J.W. Zhong, H.P. Chen
where μX is the mean vector and CX is the covariance matrix. The joint probability density function (PDF) of this Gaussian random process is determined by its mean μX(ti), (i = 1, 2, ···, n) and covariance CX(ti, tj), (i, j = 1, 2, ···, n). In cases where each of the underlying random variables X(t1), X(t2), ···, X(tn) is continuous, the outcome variable X(t) will also be continuous, and its joint PDF is sometimes referred to as a mixture PDF. The cumulative distribution function (CDF), and the PDF if it exists, can be expressed as a convex combination (i.e. a weighted sum, with non-negative weights that sum to 1) of all individual CDFs and PDFs. The individual distributions that are combined to form the mixture distribution are called the mixture components, and the weights associated with each component are called the mixture weights. The number of components in a mixture distribution is often restricted to being finite.
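The convex-combination property can be made concrete with a small sketch: a two-component normal mixture whose PDF and CDF are weighted sums of the component PDFs and CDFs. The component parameters and weights here are illustrative, not drawn from the bridge-load data:

```python
import numpy as np
from scipy.stats import norm

# Mixture weights: non-negative and summing to 1 (a convex combination)
weights = np.array([0.3, 0.7])
# Mixture components: two normal distributions with different means/scales
comps = [norm(0.0, 1.0), norm(5.0, 2.0)]

def mixture_pdf(x):
    """Mixture PDF = weighted sum of component PDFs."""
    return sum(w * c.pdf(x) for w, c in zip(weights, comps))

def mixture_cdf(x):
    """Mixture CDF = the same weighted sum applied to component CDFs."""
    return sum(w * c.cdf(x) for w, c in zip(weights, comps))
```

Because the weights sum to 1, the mixture CDF still runs from 0 to 1, so the mixture is itself a valid probability distribution.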
A study on wind speed probability distribution models used in wind energy analysis
Published in Lin Liu, Automotive, Mechanical and Electrical Engineering, 2017
Y.D. Li, Y.T. Sun, K. Li, M. Liu
In the past two decades, researchers have devoted considerable effort to developing statistical models that describe the wind speed probability distribution. Three kinds of models are widely used in engineering: the one-component distribution model, the mixture distribution model, and the Kernel Density Estimation (KDE) model. The one-component distribution model consists of a single probability density function. A mixture distribution function is a mixture of two or more component distributions. A KDE model is a non-parametric density estimation method that makes no assumptions about the underlying wind speed distribution, and has applications in load uncertainty analysis (Zhao Y. & Zhang X. & Zhou J., 2010) and the study of renewable energy sources (Yan W. et al., 2013; Qin Z. & Li W. & Xiong X., 2011).
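The contrast between a one-component model and a KDE model can be sketched with scipy: a parametric Weibull fit (a common one-component choice for wind speed) next to a non-parametric kernel density estimate of the same sample. The synthetic data and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "wind speed" sample; Weibull is a common one-component choice
speeds = stats.weibull_min.rvs(2.0, scale=6.0, size=2000, random_state=rng)

# One-component model: parametric Weibull fit (location fixed at 0)
c, loc, scale = stats.weibull_min.fit(speeds, floc=0)

# KDE model: non-parametric, no distributional assumption
kde = stats.gaussian_kde(speeds)

grid = np.linspace(0.0, 20.0, 200)
weibull_pdf = stats.weibull_min.pdf(grid, c, loc, scale)
kde_pdf = kde(grid)
```

A mixture model would sit between these two: more flexible than a single Weibull (e.g. it can capture bimodal wind regimes), but still parametric and typically fitted by EM.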
Chance constrained programs with Gaussian mixture models
Published in IISE Transactions, 2022
Zhaolin Hu, Wenjie Sun, Shushang Zhu
Nowadays we are witnessing an era of data. In many real SO problems, we can obtain data for the random parameters, so it is often beneficial to mine the data to specify the input distribution for SO. This kind of data-driven approach is natural and attractive. In this article we adopt such an input modeling approach for CCPs. We propose to use mixture distributions (also called mixture models) to model the randomness of the parameters. Note that a mixture distribution is a weighted sum of a finite set of probability measures. In contrast with the parametric approach and the nonparametric approach, the use of a mixture distribution can be viewed as a semiparametric approach (McLachlan and Peel, 2000; Bishop, 2006). The idea of using a mixture distribution dates back to Pearson (1894), who proposed a mixture of two normal distributions to fit a set of asymmetric biological data; Pearson's approach led to new findings in biology. Mixture models allow for considerable flexibility (see, e.g., McLachlan and Peel (2000) and Frühwirth-Schnatter (2006)) and provide a framework for constructing more complex distributions. For instance, asymmetric and skewed distributions can often be closely approximated by mixture models (see, e.g., Wang and Taaffe, 2015). Among the various mixture distributions, we are particularly interested in the Gaussian Mixture Model (GMM), which has been studied extensively in the literature and used widely in areas including economics, finance, engineering and the social sciences.
Reliable Post-Signal Fault Diagnosis for Correlated High-Dimensional Data Streams
Published in Technometrics, 2022
Dongdong Xiang, Peihua Qiu, Dezhi Wang, Wendong Li
The observed data are assumed to follow a normal mixture of two hidden patterns, one for the in-control (IC) data streams and the other for the out-of-control (OC) data streams. The hidden states of the data streams are assumed to form a Markov chain. The HMM parameters are estimated by using the EM algorithm. The estimation results with n = 20 are summarized in Table 4; they indicate that a positive correlation exists among the hidden statuses, and that the OC streams tend to appear in clusters. It should be noted that if the data follow an HMM-based mixture distribution, then the estimation results in Table 4 obtained by the EM algorithm should be close to the true underlying distribution. Motivated by this intuition, we performed the chi-squared goodness-of-fit test to see whether the sample frequencies follow the estimated mixture distribution well. Specifically, the sample data are divided into intervals, and the number of observations in each interval is compared with the expected number under the HMM model by constructing the chi-square test statistic. The resulting p-value is 0.216 in this case, which implies that the observed data can be described properly by the HMM model.
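The binned goodness-of-fit procedure described here can be sketched as follows: observed interval counts are compared with the counts expected under a hypothesized two-component normal mixture via the chi-square statistic. The mixture parameters, bin edges, and sample size are illustrative assumptions, not the estimates from Table 4:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Sample from a hypothesized two-component normal mixture
w = [0.8, 0.2]
means, sds = [0.0, 3.0], [1.0, 1.0]
comp = rng.choice(2, size=1000, p=w)
x = rng.normal(np.take(means, comp), np.take(sds, comp))

# Divide the sample range into intervals and count observations per interval
edges = np.linspace(-3.0, 6.0, 10)
obs, _ = np.histogram(x, bins=edges)

def mix_cdf(t):
    """CDF of the hypothesized mixture (weighted sum of component CDFs)."""
    return sum(wi * stats.norm.cdf(t, m, s) for wi, m, s in zip(w, means, sds))

# Expected counts per interval under the mixture, renormalized to the
# binned range so observed and expected totals match
probs = np.diff([mix_cdf(e) for e in edges])
expected = obs.sum() * probs / probs.sum()

chi2 = ((obs - expected) ** 2 / expected).sum()
# Degrees of freedom: bins - 1 (the mixture parameters are fixed here;
# subtract one more per parameter estimated from the data, as in the article)
pval = stats.chi2.sf(chi2, df=len(obs) - 1)
```

A large p-value, as in the article's 0.216, means the interval counts are consistent with the fitted mixture, so the HMM-based model is not rejected.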
Design of restricted normalizing flow towards arbitrary stochastic policy with computational efficiency
Published in Advanced Robotics, 2023
Taisuke Kobayashi, Takumi Aotani
A mixture distribution, e.g. the Gaussian mixture model (GMM), is widely known as a model that can represent more complex probability distributions [18,19]. In particular, its multimodality has been shown to be useful for problems that admit multiple solutions [20,21]. Theoretically, an infinite number of components guarantees universal approximation, but any implementation must limit the model to a finite number. The number of components is difficult to choose: if it is too small, the required expressiveness cannot be obtained; if it is too large, the model becomes redundant, which increases the computational cost.
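The usefulness of multimodality can be illustrated with a small sketch: a finite two-component GMM used as a stochastic "policy" whose two modes represent two alternative solutions. Sampling concentrates around the modes rather than their (possibly invalid) average. All parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A finite-component GMM as a multimodal stochastic policy:
# the two modes correspond to two alternative actions/solutions.
weights = np.array([0.5, 0.5])
means = np.array([-2.0, 2.0])
stds = np.array([0.3, 0.3])

def sample_actions(n):
    """Draw n samples: pick a component, then sample from its Gaussian."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], stds[comp])

actions = sample_actions(10000)
# Samples cluster near -2 and +2; almost none fall near the mean 0,
# which a unimodal Gaussian policy would be forced to favor.
```

This is exactly the trade-off noted above: two components suffice here, but a problem with more solution branches would need more components, at higher computational cost.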