Electroelasticity of dielectric elastomers based on molecular chain statistics
Published in Alexander Lion, Michael Johlitz, Constitutive Models for Rubber X, 2017
M. Itskov, V.N. Khiêm, S. Waluyo
where κ denotes the concentration parameter. This distribution results from the non-Gaussian chain statistics by Kuhn and Grün (Kuhn & Grün 1942). Inserting (4) into (2) we obtain ℒ(κ) = λr, and hence κ = ℒ⁻¹(λr),
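In this context ℒ is the Langevin function, ℒ(x) = coth x − 1/x, whose inverse has no closed form and is typically evaluated numerically. A minimal sketch of such an evaluation via bisection follows; the function names, the tolerance, and the bracketing interval are illustrative choices, not from the paper.

```python
import numpy as np

def langevin(x):
    # Langevin function L(x) = coth(x) - 1/x
    return 1.0 / np.tanh(x) - 1.0 / x

def inv_langevin(y, tol=1e-10):
    # Invert L on (0, 1) by bisection; L is strictly increasing on (0, inf)
    lo, hi = 1e-12, 1e3
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if langevin(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# concentration parameter for a given relative stretch, kappa = L^{-1}(lambda_r)
kappa = inv_langevin(0.6)
```

Closed-form approximants (e.g. Padé-type) are common in the rubber-elasticity literature when speed matters; bisection is used here only because it is simple and robust.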
Unsupervised Learning Using Expectation Propagation Inference of Inverted Beta-Liouville Mixture Models for Pattern Recognition Applications
Published in Cybernetics and Systems, 2023
In this section, the finite IBL mixture is extended to the infinite case using an efficient principle based on a nonparametric Dirichlet process (DP) (Teh 2010), which makes it possible to have an infinite number of components. Thus, the infinite mixture is presented as a nonparametric model whose complexity can grow as new data arrive. To tackle the issue of determining the optimal number of components that fits the observed data well (which is intended to be infinite), we propose here a Dirichlet process mixture of IBL distributions using the stick-breaking representation (Sethuraman 1994; Ferguson 1983). If we have a Dirichlet process G with a base distribution H and a concentration parameter ψ, then G is given as G = Σ_{k=1}^∞ π_k δ_{θ_k}, with π_k = v_k ∏_{j<k} (1 − v_j), v_k ∼ Beta(1, ψ), θ_k ∼ H, where δ_{θ_k} denotes a point mass at θ_k.
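The stick-breaking weights π_k referenced here can be generated directly; the sketch below truncates the infinite series at a finite number of atoms, and the function name, truncation level, and ψ = 2 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stick_breaking(psi, n_atoms, rng):
    """Truncated stick-breaking weights for a DP with concentration psi."""
    # v_k ~ Beta(1, psi); pi_k = v_k * prod_{j<k} (1 - v_j)
    v = rng.beta(1.0, psi, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(0)
weights = stick_breaking(psi=2.0, n_atoms=1000, rng=rng)
# a small psi puts most mass on the first few atoms; a large psi spreads it out,
# which is how the concentration parameter controls the effective number of components
```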
Decomposition methods for solving Markov decision processes with multiple models of the parameters
Published in IISE Transactions, 2021
Lauren N. Steimle, Vinayak S. Ahluwalia, Charmee Kamdar, Brian T. Denton
Figure 2 demonstrates how the variance among the models’ parameters can play an important role in the value of solving the WVP relative to the simpler MVP and the value of perfect information. Figure 2(a) shows the VSS for four values of the concentration parameter used in the Dirichlet distribution. We observe that for higher values of the concentration parameter (i.e., 10 and 100), the VSS is quite small in all cases, but the VSS can be quite high for cases where there is high variance among the models. This finding suggests that when the different models have parameters that are similar to each other, the MVP is a suitable approximation and the PB-B&B algorithm is spending its time proving the optimality of the MVP solution. However, as the variance among the models increases, the VSS tends to increase. Figure 2(b) demonstrates how variance among the model parameters can influence EVPI. We observe that EVPI also decreases as the models’ parameters become more closely concentrated, yet EVPI can still be considerable even for larger concentration parameters.
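The relationship between the concentration parameter and the spread among sampled model parameters can be checked directly: drawing each model's parameter vector from a Dirichlet distribution scaled by a concentration parameter, the variance across models shrinks as the concentration grows. The mean probabilities and function below are hypothetical, chosen only to illustrate this effect, and are not the instances used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_probs = np.array([0.5, 0.3, 0.2])  # hypothetical mean transition probabilities

def spread_among_models(concentration, n_models=2000):
    # Draw one parameter vector per model from Dirichlet(concentration * mean_probs);
    # a higher concentration clusters the rows more tightly around mean_probs.
    rows = rng.dirichlet(concentration * mean_probs, size=n_models)
    return rows.var(axis=0).mean()

low = spread_among_models(1.0)     # diffuse: high variance among models
high = spread_among_models(100.0)  # concentrated: models nearly identical
```

This mirrors the qualitative finding above: with a concentrated Dirichlet prior the models agree, so solving the simpler mean-value problem loses little.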
Detection and clustering of mixed-type defect patterns in wafer bin maps
Published in IISE Transactions, 2018
Jinho Kim, Youngmin Lee, Heeyoung Kim
For the infinite mixture modeling in the latent space, we use a Dirichlet process (DP) as a prior over the parameters of mixture components. Let Θ be the space of the cluster parameters, and let G be a probability distribution over Θ. A DP is a distribution over all such distributions. More precisely, it is said that a probability distribution G over Θ follows a DP with a concentration parameter α and a base measure (or base distribution) H, denoted by G ∼ DP(α, H), if the probabilities that G assigns to any finite partition (T1, …, TK) of Θ follow a Dirichlet distribution: (G(T1), …, G(TK)) ∼ Dir(αH(T1), …, αH(TK)).
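This defining property can be illustrated with a truncated stick-breaking draw of G; the choices of base measure H = N(0, 1), α = 5, the truncation level, and the two-set partition are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 5.0
n_atoms = 2000  # truncation level for the (infinite) stick-breaking series

# Truncated draw of G ~ DP(alpha, H) with base measure H = N(0, 1):
# G is almost surely discrete, a weighted sum of point masses.
v = rng.beta(1.0, alpha, size=n_atoms)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
atoms = rng.normal(size=n_atoms)  # atom locations theta_k ~ H

# Mass G assigns to the partition (T1, T2) = ((-inf, 0], (0, inf));
# (G(T1), G(T2)) is one draw from Dir(alpha*H(T1), alpha*H(T2)) = Dir(2.5, 2.5).
g1 = w[atoms <= 0.0].sum()
g2 = w[atoms > 0.0].sum()
```

Repeating the draw many times and collecting (g1, g2) would recover the Dir(2.5, 2.5) marginal that the definition asserts for this partition.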